1 Introduction

Trust in science is gaining increasing philosophical attention, particularly regarding areas of public interest. Problems such as public health emergencies or climate change raise questions concerning the kinds of policies current science supports and their effectiveness, given the varying degrees to which the public is likely to follow such policies. While several philosophical analyses of trust have been brought forward, typical discussions regarding trust in science, such as debates over the role of values, have mainly focused on epistemic aspects (see review by Reiss & Sprenger, 2020; Baghramian & Caprioglio Panizza, 2022: Sect. 5). At the same time, there have been strands of research introducing notions of trust in science beyond (mere) reliability (Wilholt, 2013; Bueter, 2021). Adding to the latter strand of research, this paper will look at the relation between non-epistemic values (particularly, moral ones) and trust in the case of public health. I will proceed from a thick notion of trust, which includes a moral component in addition to the epistemic one. I will do so by drawing on analyses of warranted distrust and incorporating concerns about justice. I will argue that in order to build and maintain public trust, scientific decisions within public health should take into account specific values, especially justice. This argument will also highlight the role of trust and trust-conduciveness in deciding which value influences in science are legitimate, a question which defenders of the value-ladenness thesis often leave unanswered (see Holman & Wilholt, 2022; Ludwig, 2023). At the same time, I will stress the need to balance the influence of non-epistemic values such as justice with epistemic conditions for trust. This is particularly important given the close connection between public health and social justice and the lack of clarity regarding what this connection implies for decision-making (Smith, 2022). My arguments will rely on considerations from public health ethics regarding the positive impacts of social justice on health outcomes, and the negative impact of distrust (Kass, 2001; Venkatapuram, 2022).

The paper will be organized around three main questions: whether it is possible to include concerns about trust-conducive values in public health (Sect. 2), why this is important (Sect. 3), and how it can be done (Sects. 4 and 5). More specifically, in Sect. 2, I will connect discussions over the role of values in science with philosophical analyses of trust, making the case for a thick concept of trust, incorporating justice. In Sect. 3, I will provide a consequentialist argument for the importance of thick trust in public health. In Sect. 4, I will discuss views on health justice, opting for an account from public health ethics to guide decisions such as hypothesis or model choice. In Sect. 5, I will illustrate the proposal of analyzing hypotheses, approaches, concepts, or evidence used in public health through concerns about justice by investigating the case of cardiovascular disease prevention and the distinction between population-based and individual-based (or agentic) interventions.

2 The debate over values in science and trust

Looking at the issue of trust in science requires bringing together debates over scientific objectivity from the philosophy of science and investigations of trust from social epistemology. In what follows, I will review both defenses of value-freedom and arguments for value-ladenness which have a bearing on trust. Siding with the latter, I will then proceed to see how different accounts of trust, and especially of justified distrust, rely on non-epistemic values. My aim here is not to provide a novel account of trust in science, but to identify a conceptual toolkit from these debates to be employed later for the case of trust in public health.

In the philosophy of science, trust has typically been discussed in relation to the influence of values in science. These discussions have usually focused on objectivity, and objectivity has been linked with the value-free ideal. Thus, trust has mainly been understood in relation to the objectivity and reliability of science (e.g., Reiss & Sprenger, 2020). Defenders of this view hold that decisions such as accepting or rejecting a certain hypothesis or theory or allowing the use of particular types of evidence by scientists should not be influenced by non-epistemic values. This position presupposes a distinction between epistemic or cognitive values such as accuracy, simplicity, and empirical adequacy, and values that have a normative valence (social, moral, political, etc.). In Longino’s terminology, the former are deemed constitutive of science, while the latter are classified as contextual, i.e., belonging ‘to the social and cultural context where science is done’ (1987: 54).Footnote 1 Henceforth, I will use the term ‘non-epistemic values’ to refer to values that have been described as contextual or normative in the sense above. Eliminating or minimizing influences from such values on processes central to scientific inquiry would ensure that science aims at the truth as opposed to various political, economic, or social agendas. Insofar as trust involves the belief that the trusted party is telling the truth, the value-free ideal appears to provide the relevant conditions. As Reiss and Sprenger highlight, ‘scientific objectivity and trust in science are closely connected. Scientific objectivity is desirable because to the extent that science is objective we have reasons to trust scientists, their results and recommendations’ (2020: 7).

Given that the views described above tie trust to scientific objectivity understood as meeting certain epistemic criteria independent of non-epistemic values, they assume a concept of trust as (mere) reliance. Discussions that focus more explicitly on trust in science, such as that by Koskinen, adopt a similar take: ‘trust [is] something that we have towards people, and perhaps groups or communities, but not processes or results. We can rely on a process, but as a process cannot betray us, we cannot trust it’ (Koskinen, 2020: 1193).Footnote 2 John (2021) also focuses on epistemic trust when discussing value choices, holding that one may also refer to this sense of trust as ‘reliance’. Boulicault and Schroeder (2021) use a similar account of trust (namely, whether the public is likely to accept scientific claims as true), deeming it the most relevant notion for discussions of trust in science in contexts beyond philosophy (also see Schroeder, 2021). In their review, Reiss and Sprenger (2020: 7) also add institutions to the list of things people rely on, but do not necessarily trust. The distinction is important because, while the philosophical literature on trust includes reliance as a component of trust, it also broadly agrees that something beyond reliance is needed (McLeod, 2021). While investigating various philosophical accounts of trust is beyond the purposes of this paper, it is worth pointing out that there are views that add a moral component, such as goodwill (Baier, 1986). On the reliance-based views described above, then, it appears at first glance that reliance in the sense of providing objective results suffices for trusting science. This is also in line with what Irzik and Kurtulmuş (2019) deem the basic concept of warranted trust in science, which focuses on the reliability of the processes that produce scientific outcomes.

Yet, this picture is open to several critiques, coming both from different perspectives on the scientific objectivity debate and from accounts of trust specifically. Starting with the former, defenders of the value-ladenness thesis broadly hold that value influences are inevitable, and as such, the value-free ideal is unachievable. According to various arguments in favor of value-ladenness, since values do interfere with processes such as hypothesis testing and evidence assessment, it is better for scientists to be transparent about value choices (e.g., Douglas, 2009; Elliott, 2017). Particularly relevant here is Longino’s (2002) view on objectivity. Longino criticizes the individual-centered concept of objectivity associated with the value-free ideal. The concept of objectivity she introduces involves openness to transformative criticism from a diverse scientific community. While this concept of objectivity is meant to address epistemic issues, such as the problem of underdetermination, it is also committed to non-epistemic values such as equality of intellectual authority (2002: ch. 6). On this view, scientific objectivity has a broader range of applicability, as opposed to focusing on the perspectives of groups with higher prestige or power. Looking at consequences for trust, if abiding by the value-free ideal leads to the constant exclusion of perspectives from specific groups, as Longino and other defenders of value-ladenness argue, the said groups would be justified in distrusting science. The grounds for this are epistemic as well as ethical and political, as research exhibiting blind spots regarding particular individuals or groups yields worse results in terms of the relevant knowledge and interventions. One example here is gender bias in research, which has led to the neglect of illnesses such as endometriosis. The reasons for this neglect include the fact that research on women’s health has not been perceived as profitable, but also that medical professionals often dismiss women’s pain (Amin et al., 2022: 273).

The discussion so far suggests that non-epistemic values, particularly those highlighting perspectives from discriminated groups, may contribute to increasing trust.Footnote 3 Still, there is a further argument to be made, originating specifically in the case of public health. Goldenberg’s (2021) work on vaccine hesitancy shows that people’s decisions with regard to trusting doctors or public health authorities are determined by non-epistemic values rather than by their knowledge of how vaccines work, as typically thought. Framing vaccine hesitancy as a problem that involves an ignorant public going against experts does not help address it. Rather, Goldenberg argues, dealing with vaccine hesitancy requires restoring trust. This can be done through different strategies of communicating with the public, as well as through different kinds of interactions between doctors and patients expressing concerns about vaccination. While Goldenberg refers to public engagement and policy, it is also possible for decisions by scientists themselves to undermine trust.Footnote 4

This suggests that there are good reasons to think of trust as a thicker concept than mere reliance, and to investigate it beyond its epistemic dimensions.Footnote 5 My position in this paper is that trust in science involves not only procedures that need to be reliable, but also institutions and a scientific community. One example of such a thicker concept is the enhanced concept of warranted trust in science, which requires alignment between the scientists’ and the public’s values (Irzik & Kurtulmuş, 2019). While taking a different route, my argument is compatible with this analysis insofar as concerns about justice (broadly construed at this point) are important for both scientists and the public.Footnote 6 Against the views focusing only on reliance, I hold that trusting institutions and the scientific community involves additional requirements, and that conceptualizations similar to the interpersonal case apply: one may simply not rely on an individual without necessarily deeming the said individual unworthy of trust.

Another thick concept of trust in science is introduced by Bueter (2021). Discussing patient distrust in the Diagnostic and Statistical Manual (DSM) classification of mental disorders, Bueter holds that trustworthy classification practices should take the patient’s values into account while also conveying this to the public (2021: 4712). Bueter connects trust to procedural objectivity, which focuses on the methods and the structure of the scientific inquiry, making a case for the participation of patients in the process of revising psychiatric classifications. The view I defend here is in agreement with much of this, particularly the claim that ‘public epistemic trustworthiness requires that value-laden decisions are made in the public’s best interest and are representative of the public’s values’ (2021: 4717). However, my argument focuses on value choices more than on objectivity, and on public health interventions rather than classification. On the view I defend, participation comes in when spelling out the procedural aspect of justice alongside the distributive one.

A thick concept of trust is not necessarily tied to procedural objectivity, as shown by Jukola’s (2017) critique of procedural objectivity in biomedical research. Jukola suggests critically discussing goals and methods, which also involves values, drawing on Douglas’ (2004) concept of ‘interactive objectivity’. At the same time, Jukola emphasizes that values should not replace evidence. I will make a stronger claim in this regard: that assessing specific models or hypotheses tied to public health interventions through both evidence and values (particularly, justice) can help increase trust.

A further critique of procedural approaches is that power imbalances and patterns of epistemic injustice may prevent those belonging to subordinated groups from bringing genuine input to the conversation in attempts to increase citizen participation (Rolin, 2021). Defending social responsibility as a condition for trust in addition to epistemic requirements, Rolin suggests that scientific/intellectual movements can help. Although my focus here is not on citizen science, the account I will defend is not incompatible with these insights: such movements could contribute to both components of justice I will single out in Sect. 4.

To go into more depth about the concept of trust and whether it applies to individuals, institutions, or both, consider the following two cases:

  • X needs help, but does not ask their best friend because they know they are currently very busy.

  • Y has just lost their job, but does not apply for benefits, being confident they will find employment soon.

Both are examples of not relying on an individual or an institution, respectively, but not because of distrust. Contrast this with the case where X and Y have been repeatedly let down by the said friend or institution and decide not to rely on them. Unlike the instances above, these would be genuine cases of distrust. What trust requires exactly is subject to philosophical controversy, on which I will take a stance in the following.

Focusing on trust in science, it is important to note that trust and distrust are also discussed in the literature on epistemic injustice. Fricker’s (2007) introduction of the term ‘epistemic injustice’ as a way of highlighting injustices stemming from tying credibility to one’s social standing has led to discussions of further examples and cases. For instance, Grasswick (2017) brings forward the term ‘epistemic trust injustice’ for cases where ‘due to the forces of oppression, the conditions required to ground one’s trust in experts cannot be met for members of particular subordinated groups’ (Grasswick, 2017: 319). This highlights the connection of both trust and distrust to epistemic injustice. The distinction between trust and distrust, while exclusive, is not exhaustive, as it is possible to neither trust nor distrust someone (e.g., Hawley, 2014). The concept of trust used by Grasswick points to a connection I will explore below: that between warranted distrust and patterns of injustice to which members of subordinated groups have been subjected in the past and continue to be subjected in the present. Relatedly, earlier work by Scheman (2001) holds that ‘the credibility of science suffers, and, importantly, ought to suffer (…) when its claims to trustworthiness are grounded in the workings of institutions that are demonstrably unjust– even when those injustices cannot be shown to be responsible for particular lapses in evidence gathering or reasoning’ (Scheman, 2001: 36). This captures the features of the thick concept of trust discussed above: even when epistemic conditions are met, distrust can be warranted by failures to meet conditions regarding justice. These insights also help explain my choice of a politically charged account of trust in what follows. Many discussions of trust in science look at instances where the spread of misinformation, conspiracy theories, or skepticism about science drive distrust (see review in Ludwig, 2023: Sect. 1). Yet, focusing exclusively on these cases neglects the political underpinnings of particular groups having legitimate concerns about whether science truly works in their interests. This is consistent with the proposal of promoting trust in science through more than combating anti-science populism, namely by striving for political integrity within science (Ludwig, 2023: Sect. 2).

Before moving on, I will address a potential objection: whether the thick concept of trust I advocate for is needlessly tied to moral requirements. Bennett brings forward an analysis of trust independent of moral motivations, defining trust through commitment, i.e., ‘when we trust we expect that the person trusted will act as we wish them to because they have a commitment to something– an action, goal, value, project, other people, etc.– that motivates them to act this way’ (2021: 512). My answer to this is that even if one ‘demoralizes’ trust in Bennett’s way, defining it as commitment, some values or social practices still come into play. For instance, there are actions that may strengthen or undermine the commitment, even if they are not cast in moral terms. To put it another way, Bennett’s account of trust as commitment is not as thin as it may appear at first glance: it may do away with moral requirements, but not with normative aspects altogether. A further issue is that Bennett focuses on interpersonal trust, whose requirements may not be identical to those of trust in groups and institutions (though see Bennett, 2024 for an expansion of the notion to groups). When discussing trust in science, the commitment between scientific communities and society involves values, justice being an important one within a democratic setting.Footnote 7

I will now explore a concept of trust encompassing both epistemic and non-epistemic values for use in the context of science. While philosophical analyses typically start by defining trust and then proceed to draw consequences about distrust, I take the opposite route here. Specifically, I draw on Krishnamurthy’s (2015) work on the democratic value of distrust. Krishnamurthy sets out the necessary conditions for distrust as follows: ‘In order for A to distrust B, A must have a confident belief that B will not act justly’ (2015: 392). A particular feature of this approach is that it seeks to be politically useful and valuable, rather than offering yet another conceptual analysis competing over various criteria. I take this to apply also to cases involving trust in science, as will be discussed below. Another important characteristic for my purposes here is that distrust can apply to individuals qua individuals, but also to institutions, or to individuals qua representatives of institutions (2015: 396). Krishnamurthy discusses the democratic value of distrust in the context of the Civil Rights movement in the United States, particularly Martin Luther King Jr.’s letters expressing distrust towards moderate whites, who failed to act against segregation.

Given the insights regarding the connection between distrust and injustice, and the potential of distrust to contribute to democratic causes, I will henceforth use a concept of trust as reliance plus commitment to justice on the part of the trustee (which can be an individual, a group, or an institution). This concept of trust is derived from distrust as defined by Krishnamurthy. On this view, even if institutions or research groups produce reliable knowledge or technologies, they can be the subject of legitimate distrust if they overlook the interests of oppressed or marginalized groups. Relevant here is the above-mentioned example of insufficient research on and late diagnosis of endometriosis due to gender bias, which can lead to legitimate distrust of medical research among women, since such research fails to show that it takes their interests into consideration. Further examples can be found in feminist philosophy of science, particularly its emphasis on applicability to current human needs, such as improving material conditions, and its opposition to a preference for theories or hypotheses that can be used for political domination (Longino, 1995). Making scientific institutions or research groups trustworthy would involve democratizing their operation, and widening the range of interests and needs they take into account. This is in line with procedural aspects of justice to be covered in Sect. 4. Krishnamurthy’s discussion highlights the importance of distrust in motivating political action against oppression. Looking at the specific case of science, political action can also stem from distrusting particular practices in which institutions engage. Relevant examples include ACT UP’s advocacy for heterogeneous trials, which include a broad range of social groups, in testing new drugs for treating HIV infection (Epstein, 1996). Yet, distrust can also lead to lack of compliance with scientific advice, diminishing the benefits that oppressed groups can gain from science. Vaccine hesitancy among vulnerable groups that would benefit from vaccines, mentioned above, is one such example. My interest here lies in how legitimate distrust in science can be countered by making science more trustworthy through aligning scientific activity with values which are constitutive of trust, i.e., justice.

Before moving on, one thing to clarify is that the conditions above are not exhaustive, as my goal is not to provide a complete account of trust. More specifically, they highlight the ethical and political dimensions of trust in science in addition to the epistemic one. A complete analysis would also need to take into account whether the public knows about science being trustworthy in the sense above. This involves questions of scientific communication that go beyond the scope of the paper.Footnote 8 Nevertheless, one thing to emphasize for my purposes here is that the knowledge condition highlights two undesirable possibilities: that science is trusted without being trustworthy or that science is not trusted while being trustworthy. Since my view emphasizes trustworthiness and the moral and political conditions for it, it rules out cultivating blind trust or focusing only on people’s perceptions of trustworthiness, as would happen in the former instance. The latter instance shows that even if one ensures trustworthiness, there is further work to be done on how to communicate this to the public. As mentioned, this is a topic for another paper, but given my focus on distrust, one contribution this paper makes in this regard is to single out that complicity in historical injustices supplies people with good reasons for distrust (cf. Grasswick, 2017).

A further question that may arise is how long it would take for abiding by the requirements of such a concept of trust to counter problems such as vaccine hesitancy or skepticism about established scientific findings. My answer is that such effects would not be noted right away, as trust takes time to build, but science would become more trustworthy in the long term. Building trust in the sense above can also contribute to what has been defined as a ‘climate of trust’, i.e., ‘a social and political environment where the concerns that motivate and legitimise distrust are acknowledged and, to the extent that is possible, addressed and where legitimate trust is allowed to flourish’ (Baghramian & Caprioglio Panizza, 2022: 17).

Another question, or potential objection, can be raised here: what if the moral conditions for trust (i.e., justice) conflict with the epistemic ones, thus undermining trust in science? For example, what if scientists choose to research a treatment that would be easily accessible to the population despite insufficient evidence regarding its efficacy? My answer is that choosing hypotheses or approaches that are better aligned with justice does not entail discounting available evidence and procedures for obtaining knowledge reliably. Rather, both aspects are considered when deciding what hypothesis to accept, or what approach to use. This is compatible with an overall view where concerns about justice are part of the purposes of inquiry of public health as a normative discipline, as I will discuss in the following section (e.g., Venkatapuram, 2022). This is also in line with various versions of pragmatism, according to which scientific claims are shaped by the purposes of inquiry (e.g., Chang, 2022). In Sect. 5, I will illustrate how this works for public health with the example of cardiovascular disease prevention.

Thus far I have reviewed discussions of scientific objectivity that connect trustworthiness to non-epistemic values in addition to reliability. Following these contributions, my argument will also help address a critique of the value-ladenness thesis and of the science and values research program more broadly: that it does not specify how to decide which values are politically legitimate (Ludwig, 2023). Adopting a thick concept of trust provides one way of deciding which values to incorporate when testing hypotheses or assessing evidence: the trust-conducive ones, with justice as one relevant example. The argument from trust can be articulated as follows: since the influence of values on science is inevitable, one guiding factor in establishing which values should influence decision-making is how the chosen values contribute to trust. The next two sections will explain why this is desirable, and how it can be implemented by drawing on public health ethics.

3 Justice, trust, and health outcomes

Before introducing my argument in connection with public health, I will make several clarifications. One question is whether focusing on trust-conducive values means that trust in science is always a good thing (i.e., even blind trust). In response to this, I emphasize again that I am focusing on thick trust, which requires acting in accordance with the public’s interests and values and not merely seeming trustworthy. To put it another way, I take trust-conducive values to be important not (only) because of the instrumental value of trust, but because they would ensure that science responds to human needs and interests.

Another question is how scientists are to determine which values are trust-conducive. The answer here is partly conceptual and partly empirical. Above, I have drawn on philosophical analyses of trust to single out values characteristic of trust, particularly justice. Empirical work can help corroborate this (see studies on distrust among groups that have experienced injustice, such as Laurencin, 2021) or cast doubt on it. This works well together with my focus on public health, where empirical studies on the health effects of social justice are of utmost importance. A broader concern here is how to determine which values are representative of the public at large, and public debate has been suggested as a means of doing that (see Douglas, 2005; Schroeder, 2021). As my focus here is mainly on justice and its importance for trust in public health, I will not discuss this further, though the absence of satisfactory public debates is likely part of the explanation for the perpetuation of injustice and subsequent distrust in public health institutions or recommendations.

One more concern is whether societies or individuals abiding by racist, sexist, or other harmful values would find these values to be trust-conducive. This issue parallels Schroeder’s (2021) discussion in connection with the proposal of increasing trust by having scientists follow the values of the public. I will not engage with this approach in depth, since my focus is not on increasing (epistemic) trust in science through public participation. Nevertheless, insofar as trustworthiness in the sense I discuss above requires common moral ground between scientists and the public, this is still a concern. In response, I point out that the conceptual approach introduced above suggests the contrary: acting against social justice (as racist or sexist norms would have it) is a driver of distrust, and distrust is valuable precisely because it helps draw attention to these issues. Furthermore, as I will discuss below, race-based oppression has been singled out as a ‘fundamental cause’ of disease in public health (Link & Phelan, 1995). Thus, acting in accordance with such ‘values’ would undermine rather than help pursue the goals of public health. A side point here is that while Schroeder (2022) argues that these ‘values’ should be filtered out because they would otherwise undermine democratic legitimacy, I argue that they would undermine the aims of public health. These points can come together if one views the aims of public health as best met within a democratic setting.

I will now make a case for including trust-conducive values in public health decision-making by connecting issues of trust and social justice to broader patterns regarding health outcomes. Drawing on contributions from public health ethics backed up by empirical studies on the connection between economic inequalities and health, I will explain how social justice yields better overall health outcomes, with trust likely acting as a pathway to that. This will help open the way for more specific claims regarding how public health can promote social justice. While various normative views can be used here, I will employ a consequentialist argument drawing on findings linking social justice, trust, and positive health outcomes overall, also singling out the connection between distrust and unsuccessful interventions. One concern to address before discussing public health ethics is whether my scope extends too far, given the focus on philosophy of science and social epistemology in the previous section. To alleviate this worry, I emphasize that I am using contributions across different areas as conceptual tools to address a question about value influences and trust in public health. Thus, I am weaving together specific arguments and concepts from these areas, as opposed to treating any of them in an exhaustive manner. This leaves open the possibility of criticism pointing out weaknesses within the specific views and approaches I am using. Nevertheless, as my overarching goal is to build a framework to think about values and trust in public health, finding concepts and approaches that work better can be part of future research. While in Sect. 2 I have shown how trust can be connected with particular value choices, more specific ethical and political contributions are needed to clarify what values amount to in the context of public health. This investigation can be subsumed under contributions that employ political philosophy or ethics to develop the strand of research on science and values (e.g., Anderson, 2004).

Moving on to public health ethics, it is worth emphasizing its wider reach compared to accounts in applied ethics aiming to provide guidelines and lists for quick decision making in biomedical contexts. As pointed out by Venkatapuram, ‘a (…) problematic aspect of such lists is that they make “ethical assessment” perform an assistance function to public health policymaking, which is inconsistent with the fact that public health is fundamentally a normative discipline, aimed at furthering a presumed moral good (that is, a public’s health)’ (2022: 80). This is important since the scope of my argument goes beyond policymaking, to decisions by scientists. If public health is a normative field, then its ability to increase health as a moral good is an indication of its success. In this context, my argument about values will draw on public health research to show that trust-conducive approaches yield better health outcomes.

Public health ethics comprises various accounts, corresponding to the main normative theories, as well as to areas of applied ethics (Venkatapuram, 2022). For my purposes here, I rely on consequentialism, while also noting that future research can explore other ethical approaches in the context of values in public health. This is because consequentialist arguments are often invoked when it comes to public health decision making, but not always with straightforward or satisfactory conclusions. The example of lockdowns during COVID-19, and the difficulty of assessing whether preventing new infections outweighs social and economic harms with effects on health, is illustrative in this sense. The lack of research on alternative approaches for low-income countries, where economic harms were likely to be greater, shows that consequentialist principles have not been used consistently. Within the consequentialist framework, I would particularly like to emphasize consequences for public trust. While a particular intervention can be assessed by its immediate effects on preventing illness, the picture becomes more complex when trust is taken into account. Vaccine mandates are a relevant example: while the number of people contracting a specific illness can be decreased by making vaccination mandatory, this is also likely to decrease public trust, particularly if the mandate is imposed from above, without deliberation (e.g., Bardosh et al., 2022). If more people become hesitant about vaccines or other public health recommendations, then health outcomes can deteriorate over time.

A potential explanation of why consequentialist framings have failed to take into account perspectives from marginalized or discriminated groups is that consequentialism, at least in the form of utilitarianism, would allow particular individuals or groups to be less well-off as long as the overall level of welfare is higher than in more egalitarian configurations. This is a critique that Hansson raised in the context of risk analysis, particularly in situations where decision makers and those who benefit are not the same as the risk takers (2017: 1826).Footnote 9 A similar point can be applied to public health, say, if those with secure employment or the ability to access welfare design policies leaving others, such as informal workers, to take higher risks. Yet, the case of public health also shows that even if welfare may be maximized in the short term, the effects on trust will become visible in the longer term.Footnote 10 I will further expand this point to cover justice and the distribution of risks across different subpopulations. I should note that a deontological case may be made here as well, namely that the goals of public health are not achieved if some people are denied basic welfare, and this is in line with Hansson’s suggestion of drawing on deontology. Yet, since I am focusing on consequentialism here, my point is that even if one only looks at consequences, warranted distrust may undermine public health interventions’ success by lowering rates of compliance or alienating parts of the public.

At this point, it is worth distinguishing between trust in interventions (e.g., a vaccine mandate), institutions (bodies that assess vaccine safety or that issue vaccination recommendations) and specific individuals. The case of public health is a good illustration of the limitations of views holding that (dis)trust only applies to individuals: such views are unable to account for the pervasive effects of distrust. If distrust were to apply only to individuals, then restoring trust and ensuring the success of future interventions would simply amount to assigning different people to make health recommendations. Nevertheless, that is hardly the case, as shown by distrust in particular interventions and institutions, both of which are my focus here. Distrust in interventions, such as vaccination programs, can persist over time, while for institutions the consequences are even wider reaching, with potential to spill over to other interventions or recommendations connected to the distrusted institution.

In this context, connections to work on institutional epistemology can also be drawn (e.g., Andersen & Wagenknecht, 2013; Rolin, 2015). Although these approaches discuss trust within scientific communities, epistemic dependence also applies to the relation between the public and the scientific community: given the complexity of scientific research, it is impossible for any individual to check each and every finding. The kind of trust arising in these cases also involves moral components (Hardwig, 1985). While in scientific groups these can be spelled out in terms of research ethics, in interactions between scientific groups and the public they can be spelled out in terms of the commitment of the former to the public’s best interests (in this case, health). Similarly, in the context of scientific communities, Rolin states that ‘some moral and social values should be permitted to play a role in acceptance because they are woven into the epistemic fabric of scientific collaboration’ (2015: 173). I extend this to the collaboration between scientists and the public needed for the success of public health interventions: trust-conducive values are those that enable this cohesion. While I hold that these values should be involved in decision-making, I do not take them to be epistemic, since the goal of improving health can be described as moral or social, thus going beyond simply seeking the truth.

Looking at relevant empirical aspects from the perspective of public health ethics, there are two main points regarding the relation between trust and successful approaches. First, trust-conducive approaches improve public health outcomes. Second, approaches that give rise to legitimate distrust lead to worse outcomes. I will explore each of these points in turn.

Regarding the positive effects of trust on public health and the connection to social justice, Childress et al. point out that ‘social injustices expressed in poverty, racism, and sexism have long been implicated in conditions of poor health. In recent years, some evidence suggests that societies that embody more egalitarian conceptions of socioeconomic justice have higher levels of health than ones that do not’ (2002: 176–177). Notably, a review of relevant studies by Pickett and Wilkinson (2015) points to causal links between income inequality and worse overall health outcomes. One of the explanations of these findings describes inequality as a social stressor through its negative effect on trust (2015: 322–323). This is a particularly interesting case for utilitarianism. Unlike in cases where a greater degree of welfare can be maintained by keeping some worse-off, as discussed above, these findings show that greater social justice and equality yield greater welfare overall. There are two implications for public health and its normative dimensions here. In a narrower sense, looking at public health specifically, one can stress the need to take into account questions of justice and (in)equality; otherwise there is a risk of exacerbating existing health disparities. Interventions that leave out or place a disproportionate burden on marginalized or discriminated segments of the population, as mentioned in the case of COVID-19, can feed into this loop, ending up with worse health outcomes for everyone. In a wider sense, one could hold that the goals of public health extend further, to supporting interventions that promote social justice beyond strictly medical ones. This is the route highlighted by Kass (2001).

Kass has stressed that ‘it is hard to find a more powerful predictor of health than class and it is thus an appropriate, if not obligatory, function of public health to reduce poverty, substandard housing conditions, and threats to a meaningful education—if for no other reason than to reduce the incidence of disease’ (2001: 1781). The reference to social determinants of illness, including income, living conditions, and education, is worth noting here. Insofar as social justice requires equality of opportunity as well as distributive aspects concerning material well-being, and these influence health, they are relevant to public health. Furthermore, Kass notes that there are at least instrumental reasons for improving social justice, namely its contribution to the improvement of health. This is in line with the consequentialist framework. Insofar as social justice is among the conditions for trust, as discussed previously, a connection can be drawn to effective public health research and policy-making. Analogous points are made by Marmot (2015, 2022), who focuses on six areas to reduce health disparities: childhood development and support, education, work conditions, sufficient income, healthy living and working spaces, and a focus on social determinants in disease prevention.

Regarding negative effects of approaches that disregard trust, Venkatapuram has pointed out that ‘public health without coherent and transparent ethical reasoning risks being (and frequently is) rejected by the “public” it seeks to serve, whether in free and democratic societies or otherwise’ (Venkatapuram, 2022: 71). Here, the lack of transparency and, more broadly, of an explicit ethical decision-making process is spelled out as a further driver of distrust. These problems are particularly important in contexts where warranted distrust is already present. The case of higher rates of vaccine hesitancy among historically discriminated groups is a good example, with the respective groups being more exposed to harm within a pandemic context (Laurencin, 2021).

The implications of findings such as those discussed above and their connections to value concerns introduced earlier can be better integrated by expanding the scope of the value-ladenness thesis. Relevant here is Russo’s prospective approach, holding that ‘concepts and methods influence the values we promote in the interventions’ (2021: 8).Footnote 11 An example would be conceptualizing health as purely biological. This conceptualization would entail only biological treatments, excluding, say, interventions to address social problems that sustain illness. By contrast, a positive conception of health including mental and social well-being in addition to the absence of disease (see Valles, 2018: 58) would also have implications for other policies (say, social or economic) relevant to health. This is a stronger claim than the value-ladenness thesis insofar as it not only acknowledges that values have been part of the process leading up to a specific concept or approach, but further holds that the employment of a particular concept or approach will also promote specific values. Extending Russo’s claim, I hold that particular choices of concepts or methods can be trust-conducive depending on the values they promote. Bringing this together with the thick concept of trust discussed above, one way of increasing trust in science would be through employing trust-conducive concepts, methods, or approaches.

Two potential problems arise here. The first is whether the findings reviewed above establish a connection between social justice and health outcomes, or rather, between more egalitarian conditions and health outcomes. Answering this requires further clarification regarding what justice amounts to, particularly in relation to public health. This will be addressed in the following section, by opting for a concept of justice that incorporates distributive aspects. Insofar as justice involves a fair distribution of available resources, as well as benefits and burdens, conditions of equity such as those discussed above will be part of it.

Secondly, there is what may be deemed an objection from the division of labour against my argument, as presented so far. This would hold that while concerns about justice and trust have important implications for health, the relevant decisions are up to policy-makers, whereas scientists are chiefly concerned with the epistemic side of things. On this view, public health should provide the best available research and leave choices regarding justice to politicians. There are several ways of answering this. One route is to resort to defences of the value-ladenness thesis, such as Longino’s mentioned above: neglecting concerns about justice, particularly with regard to discriminated or marginalized groups, leads to less reliable scientific findings. Even if scientists only provide policy-makers with options, they can contribute to public distrust by overlooking more just or equitable possibilities. Another route is to point out that science and policy do not work in isolation. Teams deciding what to do in cases such as public health emergencies comprise both scientists and policy-makers, and scientists present their findings within a specific political and social context, not from outside it. Lastly, when looking at empirical work on public health issues, such as vaccine policies, distrust in science and distrust in government go together (Bardosh et al., 2022: 6). This indicates not only that the public perceives the scientists’ decisions as driving policy, but also that avoiding talk of values from the side of scientists may not work to increase trust in science, particularly if scientists are viewed qua representatives of the institutions that the public distrusts. Having made a case for decisions in line with justice not only at the policy level, but also within science, I will now look at how justice can be understood from the perspective of public health research, and how it can influence decisions about hypothesis, evidence, or model choice.

4 Public health and justice as a trust-conducive value

So far, I have argued for trust-conducive value influences in public health decisions on the ground that social justice is conducive to better health outcomes. A remaining question is how to define social justice for public health contexts. I will now explore available views on health justice focusing on distributive and procedural aspects relevant to my argument. Again, my aim here is not to provide a new approach to health justice, but to see which insights from current approaches can be used to spell out the link between trust and health outcomes.

Social justice has always been central to public health concerns. At the same time, issues such as health inequities have been presented as neutral due to the separation between public health and social justice issues (Smith, 2022). This problem can also be associated with the value-free ideal, strengthening the case for investigating the role of justice and other values in fostering trust. To clarify how this works in the case of public health, consider again the example of biological versus more encompassing conceptualizations of health (including social or psychological factors). Applying the former concept to problems such as public health emergencies can lead to the neglect of mental health (or the overemphasis of pharmaceutical approaches for such problems), as well as of social determinants of illness. This can further exacerbate existing inequities, as can be concluded from examining the effects of the COVID-19 pandemic on those with previous mental health conditions or on disadvantaged groups.Footnote 12 By contrast, using a concept of health that also considers social and psychological aspects can help make trade-offs clear and, when no alternatives are available, prompt additional interventions to prevent further harm to vulnerable groups.

The example above brings me to the issue of understanding justice in the context of public health. Since an exhaustive analysis is a task for another paper, for my purposes here I draw on a review by Smith (2022), which distinguishes several proposals:

  • Views calling for a fair distribution of benefits and burdens in relation to public health. This includes distributive aspects regarding resource allocation and procedural ones regarding participation. These approaches are grounded in public health ethics and also emphasize social determinants of illness stemming from structural injustices (Kass, 2001; Childress et al., 2002).

  • Relational approaches focusing on fair access to social goods related to health - e.g., opportunities, power (Kenny et al., 2010).

  • Views focusing on equality, holding that health is a special good to be protected given its contribution to equality of opportunity (Daniels, 2007) or that justice requires compensating for differences in equality of opportunity due to bad luck, including health (Segall, 2009).

  • Capabilities approaches discussing the ability to be healthy as an ethical entitlement enabled by justice. Some versions of these approaches are confined to health policy (Ruger, 2010), while others take health justice to include concerns about the social determinants of health (Venkatapuram, 2011).

  • Views focusing on well-being that emphasize the need to provide everyone with sufficient health alongside other components of well-being and equal opportunities for accessing a minimum amount of well-being (Powers & Faden, 2019).

While all of the views above touch upon topics relevant to this paper, I draw mainly on the first group of views. This is because they are consistent with the ethical argument deployed above, i.e., that trust is central to achieving the purposes of public health. At the same time, I hope this analysis can open the way for further investigation on the basis of other notions of justice. Bringing questions regarding values together with a view on justice grounded in public health ethics implies that scientists should analyze the available hypotheses, evidence, concepts, and approaches in terms of (a) their potential to support a fair distribution of benefits and burdens, and (b) their potential to include perspectives from all the relevant actors. Condition (a) focuses on distribution understood in terms of resources, which could be material, social, or psychological, as well as risks taken. For instance, tying access to a basic health service to paying a fee is not fair in this sense, as it takes a higher toll on those least well-off economically, who are less likely to be able to afford it compared to individuals in better economic situations. Condition (b) focuses on how likely specific approaches are to give equal consideration to the interests of those affected, as opposed to focusing on the interests of more powerful groups. An example going against this condition would be shaping health advice according to the specifics of a narrow group of individuals, without taking into account issues such as gender differences in heart disease symptoms. After conducting this analysis, the methods, types of evidence, concepts, or approaches that have the highest potential to contribute to this concept of justice should be chosen. As commitment to justice is necessary for the thick concept of trust discussed above, this would help foster trust.

In sum, I have argued that public health can incorporate trust-conducive values in decision making by analyzing hypotheses, models, background assumptions, or evidence through their contribution towards a fair distribution of burdens and benefits and the participation of all relevant parties. Bringing this together with the points in the previous sections, my argument can be summarized as follows: since social justice leads to better health outcomes and public health can contribute towards more just decision-making processes, which are also likely to increase trust, doing so would be in line with the aims of public health. The final section will show how the analysis proposed here can work on a case study.

5 The case for taking health inequalities into account in cardiovascular disease prevention

I will now use the case of cardiovascular disease prevention to illustrate how the distributive and procedural components of justice discussed above can be assessed in relation to specific public health interventions. I start by looking at the effectiveness of population-based interventions in improving overall health as well as in addressing disparities, following Rose’s (1985) work in epidemiology, and showing that these types of interventions meet the criteria for justice. At the same time, further questions remain about population-based interventions not always meeting such criteria.

The distinction between prevention in individuals and populations drawn by Rose (1985) has been influential in epidemiological research. Rose’s original discussion made a case for population-based prevention through changes in lifestyle, instead of simply identifying high-risk individuals and providing them with medication (statins and anti-hypertensives), due to the former’s potential to improve overall health and to reduce inequities. Focusing on high-risk individuals falls under the broader category of agentic interventions, which rely on an individual’s resources (material or psychological) to change health outcomes (McLaren et al., 2010; Capewell & Graham, 2010). Such interventions have been shown to widen inequalities, with people who are better-off benefiting the most from them. This pattern holds even in cases where disadvantaged groups are targeted, with those most in need still being unable to access help. Disparities in statin prescriptions are illustrative in this sense (e.g., Brown et al., 2019).Footnote 13 At the same time, agentic interventions do not address upstream causes of heart disease, originating in social and material conditions (Goldberg, 2022). Capewell and Graham (2010) have discussed how population-based interventions such as banning dietary trans fats and regulations halving the quantity of salt in processed food are effective in both preventing heart disease and reducing inequality, since heart disease has higher prevalence among disadvantaged individuals. One important thing to note is that while these interventions are connected to a broader case for preventing heart disease through lifestyle rather than medication, the lifestyle changes are not achieved through individual effort, as agentic approaches would have it. Rather, recommended policies apply across the entire population, creating conditions for healthier lifestyles. Nevertheless, they do not always address the root causes of disparities: for instance, banning smoking in public spaces has been successful in reducing heart disease prevalence, but it does not address structural problems that drive particular groups to smoke, such as stressful working conditions.

Looking at this example through the lens of the two suggested criteria by which scientists can choose concepts, approaches, hypotheses, or evidence shows that population-based approaches such as those discussed above are closer to the concept of justice adopted here, and thus can be more trust-conducive. The two conditions are met as follows:

(a) potential to support a fair distribution of benefits and burdens

  • Population-based approaches, such as regulations to make the available food healthier, do not require special effort or resources from any one group to prevent heart disease.

  • Agentic approaches prescribing medication to high-risk individuals chiefly benefit those already better-off due to their increased access to healthcare and medication.

(b) potential to include perspectives from all the relevant actors

  • Population-based approaches take into account patterns that are more likely to obtain in less well-off groups and that predispose them to heart disease (e.g., greater reliance on processed food due to time poverty, lack of cooking facilities, or living in ‘food deserts’), ensuring that the food such groups can afford is (at least) less unhealthy. At the same time, such regulations ensure that the same food safety standards apply to everyone.

  • Agentic approaches are more narrowly shaped, based on the situation of individuals possessing the resources (material, psychological, social) that enable them to seek and access specific treatment.

Nevertheless, it should be noted that not all population-based approaches necessarily align with concerns about justice, as the recent example of COVID-19 and lockdowns helps illustrate.Footnote 14 With the caveat that different rules were implemented in different contexts, I focus on lockdowns as restrictions on movement and activity enforced as a one-size-fits-all approach. Looking at point (a) above, it can be noted that the burdens fell disproportionately on poor and/or marginalized groups, since restrictions on movement have often meant being cut off from one’s sources of livelihood in contexts where no social security net was available.Footnote 15 At the same time, the expected benefits for disadvantaged groups were minimal, due to the higher rate of infection associated with living in overcrowded spaces, with worse access to sanitation. Furthermore, as the population in low-income countries is much younger on average than in high-income countries, for many people the outcomes of the disease may have been less severe than the economic effects of the lockdowns. Regarding point (b), the fact that under lockdowns some groups had to choose between earning a livelihood and obeying lockdown rules shows that their interests and situations were not taken into account. Similarly, alternatives suggested by various local communities were never implemented or tested (e.g., Adu-Gyamfi, 2022; Broadbent & Ncube, 2023).

The discussion of the examples above shows that considering concerns about justice when choosing approaches in public health is a complex process, going beyond the choice between individual and population-based interventions. Nevertheless, broad patterns can be noted in terms of considering the availability of resources, pre-existing inequities, and power imbalances. My proposal has been to consider both distributive and procedural aspects of justice when choosing between approaches, hypotheses, or concepts. This could help restore trust, particularly among groups whose distrust is warranted. Yet, a further objection may arise here: whether an explicit commitment to justice as described above might lead to distrust among other groups. The earlier considerations regarding epistemic and non-epistemic values connected to trust can help answer this concern. Insofar as looking at approaches through the lens of justice does not mean disregarding epistemic standards, an account can be provided as to why a certain choice is desirable. With cardiovascular disease prevention, there is a good explanation of why population-based approaches yield better results overall: they apply predominantly to the groups that are affected the most by the said risks, without demanding additional resources from them, and they address the main causes of heart disease. From the perspective of trust, such decisions show that the interests of those at higher risk are taken into account. One could make a further point here: even in cases where the proximate overall benefit of two approaches is the same, those that reduce disparities would be preferable given their potential to foster or restore trust among marginalized groups, and the longer-term health benefits emerging from that.

The discussion above has focused on choosing between approaches, but as mentioned earlier, a similar framework can be applied to concepts, methods, or evidence. To use evidence as an example, knowledge about the problems faced by discriminated or marginalized groups often relies on qualitative data and lived experience. Thus, a case can be made for studies employing relevant methods, particularly qualitative social science, and engaging with the relevant groups. By contrast, adopting overly narrow views on evidence can go against condition (b) above, by neglecting knowledge that could help incorporate a broader range of interests. Problems along these lines have also been pointed out in relation to public health interventions during the COVID-19 pandemic (Lohse & Bschir, 2020; Lohse & Canali, 2021). This also illustrates the connection between epistemology and ethics: neglecting or excluding particular types of evidence can lead to an unequal distribution of risks and benefits among groups, giving rise to distrust.

6 Conclusions

Above, I have employed an analysis of trust that seeks to be politically useful in order to explore trust in public health. In particular, I have argued that making public health more trustworthy by addressing cases of legitimate distrust requires taking concerns about justice into account when choosing hypotheses, approaches, concepts, or evidence. This can be integrated with defences of the value-ladenness thesis, particularly regarding how to choose between values, thus answering recent critiques of the science and values research program. I have made a case for trust-conducive values, focusing on justice. Nevertheless, further investigations are possible through other concepts of trust and other values, bringing philosophy of science closer to analyses of trust from social epistemology.

Similarly, I have relied on a notion of justice inspired by public health ethics, involving distributive and procedural aspects. Future research drawing on the literature on health justice can enrich the analysis by providing further means of exploring justice and its relation to trust in the context of public health. While I have chosen consequentialism to make a case for public health decisions aligned with concerns about justice, other ethical approaches can provide different avenues for connecting concerns about trust with public health ethics. Lastly, I have illustrated my view by looking at how approaches to cardiovascular disease prevention can be chosen on the basis of their potential not only to improve overall health but also to decrease inequalities. This stands in contrast with other decisions, particularly one-size-fits-all approaches during the COVID-19 pandemic. Having shown that this kind of analysis has previously been employed in certain areas of public health research, I propose expanding it to further problems, as well as seeking a more nuanced stance. This would enable investigations of how, for instance, various population-based approaches can impact justice and equity in different ways.