1 Introduction

In 2015 the physics journal Physical Review Letters published, for the first time in the history of science, a paper with more than 5,000 authors. The findings reported in the article resulted from the combined efforts of two teams working with the Large Hadron Collider in Geneva. Their goal was to obtain a better estimate of the mass of the Higgs boson, which had been discovered only a few years earlier. Such instances of “hyperauthorship” (Cronin, 2001) illustrate the increasing importance of large collaborations in science. The complexity of the subject matter and, consequently, the knowledge and technical skills involved in studying it demand ever higher levels of specialization, which no single individual can master. As a result, scientists must put their trust in one another’s expertise to attain their shared goal collectively, without necessarily understanding the contributions of their colleagues or how all contributions, including their own, lead to the desired outcome.

This important role of trust in science, however, is not new. Ever since the emergence of science, scientists have had to rely on the work and the testimony of their peers and others in order to make progress (Haack, 2003; Longino, 2002; Ziman, 1968). As Newton famously remarked, he could only have realized his legendary achievements “by standing on the shoulders of giants.” The production of scientific knowledge is not, and never has been, an individual affair, but a collaborative one. As students and philosophers of science have become increasingly aware, science is social to the bone (Longino, 2002; Oreskes, 2019).

The process of putting trust in others is not peculiar to science. We constantly rely on other people to acquire information and realize our goals. In doing so we must gauge who is competent and honest and who is not: Which sources can I trust? With whom can I work? Like all humans, scientists face this problem, which is typical of communication and collaboration. They must sort out whom they can trust as a source and as a partner in the collaborative production of knowledge in order to avoid being duped. This implies that a scientist must convince her peers that her information is reliable and that she herself is an honest source and partner. She can achieve those goals by acting in honest ways, but also by providing reasons.

The critical discussion among scientists results in what Longino (2002) describes as “local epistemologies”. These epistemologies determine which beliefs and practices scientists within a particular research community find acceptable. Here, I will argue that local epistemologies also function as local moralities. They not only determine which beliefs and practices are acceptable, but also which individuals count as trustworthy partners in the collaborative pursuit of knowledge. Scientists, therefore, provide reasons not just to convince their peers to accept their practices and beliefs, but also to be regarded and accepted as trustworthy partners in the production of knowledge. They use reasons as tools for reputation management, to convince their peers that they believe and act in justified ways. Science thus functions, and can be understood, as a moral system.

The paper has the following structure. First, I briefly discuss the role of trust in science and how it entails the problem of evaluating the trustworthiness of one’s peers. Second, as an example of the role of trust in science, I discuss Shapin’s (1994) historical study of how early modern scientists solved this problem by recruiting the moral code of English gentlemen. Relying on the trustworthiness of English gentlemen, however, is too culturally idiosyncratic to explain the universal role of trust in science. Therefore, third, I introduce cognitive and evolutionary perspectives on communication and cooperation to argue that in science reputation management is handled, at least to a large extent, by the production and evaluation of reasons. From this analysis I infer that science is a moral system recruited for knowledge production. I conclude with a discussion of the implications of this novel perspective for the study and philosophy of science.

2 Cooperation, communication, and trust

Science is the collaborative effort of producing knowledge. It is therefore somewhat surprising that traditional approaches in epistemology and philosophy of science have largely overlooked the role of communication and collaboration in science. Instead, they have focused mainly on individual means of acquiring beliefs, namely by relying on our senses and reasoning capacities (Hardwig, 1991). Information acquired from others was considered suspect because people might be wrong or, worse, might even try to deceive us. In recent decades, however, students of science have become increasingly aware of the social dimensions of science. Sociological and ethnographic analyses made it glaringly clear that science is not just about accumulating objective data and rigorous logical reasoning, as some philosophers of science had claimed. Instead, science relies on everyday forms of social interaction and communication involving power struggles, groupishness, competition, emotions, gossip, and argumentation (Feyerabend, 1975; Latour & Woolgar, 1986 [1979]; Shapin & Schaffer, 1985; Ziman, 1968). Epistemologists and philosophers of science have since integrated these important corrective insights to understand and explain how the social processes of science result in reliable beliefs about the world (Haack, 2003; Kitcher, 1993; Longino, 1990, 2002).

The inherently collaborative and social dimension of science comes in various guises (Haack, 2003; Longino, 2002; Oreskes, 2019). One, as already mentioned in the introduction, is the fact that scientists build on the work of their predecessors and their colleagues. The idea of the scientific genius who, in splendid isolation, labours day and night on experiments and data to attain wonderfully new insights into the structure of the world is a myth. Important figures in the history of science such as Newton, Lavoisier, and Darwin could only develop their new theoretical perspectives by relying on the ideas and findings of others, as they themselves fully realized.

Another aspect is the division of cognitive labour (Kitcher, 1993). The world is too complex for one individual to study and understand in all its aspects. Scientists must specialise in one domain, or a subdomain of that domain, and must leave the other domains to others. Even when investigating a specific research question, scientists rely on one another’s skills and expertise to obtain an answer. A third aspect is the organized scepticism (Merton, 1973) that we find in peer review, where experts are invited to evaluate the strengths and weaknesses of the articles submitted by their peers. Scientists are often blind to the errors and omissions in their own research. By casting a fresh and critical eye, their colleagues can more easily spot and report them, a process that tends to result in more reliable studies (Longino, 2002).

Science is a social enterprise, and communication plays a crucial role in its processes. Scientists communicate not only through institutionalized channels such as books, journal articles, and conference presentations, but also, more informally and more frequently, in discussions and talks that take place in seminars, labs, offices, over the coffee machine, and at conference dinners. Furthermore, as Longino (2002, pp. 99–107) argued, observing and reasoning, which philosophers traditionally regarded as individual means of acquiring knowledge, are social activities that involve communication as well. Both are processes through which scientists reach a consensus by arguing with one another. Science would simply not be possible without communication and collaboration.

Scientists can rely on one another to collaboratively create reliable representations of the world. However, communication and cooperation also bring certain problems, one of which is that others might deceive or manipulate us (Heintz et al., 2016; Sperber et al., 2010). The question that arises is what information, and whom, to trust (Mercier, 2020). Scientists face this problem too, but given the ubiquity of communication and cooperation in science they seem to have solved it successfully. How exactly have they done so?

3 A league of gentlemen

Traditional epistemology has long displayed an “individualistic bias” (Hardwig, 1991, p. 701). The main question to solve was when an individual has knowledge, i.e., justified true belief. Relying on others to gain knowledge was not an option. If I do not know someone else’s reasons for their belief, then how can I be justified in adopting that belief? How can I tell that the other is not mistaken, or worse, lying to me? Hardwig criticized this scepticism and argued that trust in others is not only acceptable but necessary to gain knowledge. Scientists constantly rely on one another and therefore need to trust their colleagues. He wrote that “belief based on testimony is often epistemically superior to belief based entirely on direct, non-testimonial evidence” when others have “epistemically better” beliefs than oneself (Hardwig, 1991, p. 368). Doing away with trust in knowledge production is not an option because “the alternative to trust is, often, ignorance” (Hardwig, 1991, p. 707). If we want to understand scientific knowledge production, we must investigate the role of trust in science. Philosophers have since taken up this important task (e.g., Frost-Arnold, 2013; Rolin, 2020; Wilholt, 2013).

One way to study how scientists solve issues of trust is to look at a time and place where modern science emerged: seventeenth-century England. In A Social History of Truth, Shapin (1994) investigates how scientists rely upon and thus trust one another to make their collaborations work. Scientific knowledge requires a group of people who come to an agreement about what is the case, and so decide what counts as truth. Shapin’s analysis strongly emphasizes the role individual scientists play in the production of knowledge, a role that they tend to render invisible when they report their research, creating the impression that science is an entirely objective affair that does not rely on the activities and interactions of people. Indeed, scientists tend to focus merely on content and simply report and justify their methods and beliefs to convince their peers. They tend not to mention their own role in the research, let alone the role of the people they consulted and argued with. It is, therefore, no surprise that traditional philosophy of science focused almost exclusively on the objectivized aspects of science such as formal methods and theoretical statements. However, by providing such objectivized accounts, both scientists and philosophers deliver a strongly distorted image of how scientists produce knowledge.

To attain a proper understanding of science, one must take into account the fact that science is a collaborative process that builds on trust. This implies that scientists not only make a judgment about the reasonableness of the transmitted content, but also about the credibility and honesty of the communicator (see also Hardwig, 1991, p. 707). In fact, our assessment of the reliability of communicated beliefs is entirely intertwined with our evaluation of the trustworthiness of their source. As Shapin writes:

“What we call ‘social knowledge’ and ‘natural knowledge’ are hybrid identities: what we know of comets, icebergs, and neutrinos irreducibly contains what we know of those people who speak for and about these things, just as what we know about the virtues of people is informed by their speech about things that exist in the world.” (Shapin, 1994, p. xxvi)

By attributing trust, scientists bring an important moral dimension to the production and distribution of scientific knowledge. According to Shapin, scientists in seventeenth-century England handled this moral dimension by recruiting the moral code of gentlemen. A gentleman-scientist was supposed to be well-mannered, that is, honest and sincere, so that his peers could trust his testimony. Consequently, it was important for scientists to maintain their reputation as gentlemen by abiding by those manners. By doing so, they not only showed themselves worthy of their peers’ respect, but also treated their peers with respect, that is, as gentlemen and thus worthy of moral consideration (on “the moral significance of manners”, see Buss, 1999).

Shapin’s analysis brings into sharp focus that science is a collaborative project which involves relying on the testimony of others. Hence, scientists need to decide what information they will accept as true, a judgment that they often make based on their assessment of the honesty of the source, which imports a moral dimension into science. This also means that scientists will guard their reputation as a trustworthy source and a reliable collaborator in the production of the collective good called knowledge. However, the English gentleman’s code is far from universal; it is tied to a particular time and place that is no longer available to scientists. The code relied on a form of trust that suited English gentlemen but typically excluded other voices, such as those of women and other marginalized groups (Baier, 1986). Since such exclusion undermines the diversity required for knowledge production (Longino, 2002), the gentleman’s code is not suitable for solving issues of trust in contemporary science. The question then arises: How do scientists today solve these issues? Relying on recent insights from cognitive and evolutionary psychology, I will argue that scientists commonly rely on social processes involving cooperation and communication that were not alien to English gentlemen either. As such, we will not only be able to develop a more profound understanding of how science functions as a moral system, but also provide an answer to the question of why it does.

4 Trust by vigilance

Shapin acknowledges that science has changed tremendously over the centuries and hence requires different solutions to the problem of trust. In seventeenth-century natural philosophy, scientists knew one another personally. In modern science, however, scientists extend their trust to their colleagues not because of their virtuous character but because of their expertise, which is guaranteed by institutions. However, Shapin claims, science still fragments into “core-sets”, small communities of scientists who collaborate with one another to investigate and solve a particular problem. Members of such core-sets interact with each other like the gentlemen-scientists of the seventeenth century. Nevertheless, within or outside such core-sets, trust continues to play a crucial role in science, even more so than in our daily lives.

Shapin’s analysis shows that whatever cultural context scientists find themselves in, they must sort out who and what to trust. In this sense, they are no different from other humans. In our everyday affairs we constantly rely on information we acquire from others: we visit our physician to ask about our health, ask people for directions, and learn from our friends about places to visit with the children. Extending trust to people comes naturally to us. However, trust is not the same as blind trust (Mercier, 2020). When we trust people, we rely on mechanisms of epistemic vigilance by which we intuitively evaluate both the content and the source of the information (Sperber et al., 2010). We check whether the new information is consistent and coheres with the beliefs we already hold. We also assess whether the source is competent and benevolent. Only when both the content and the source tick the right boxes do we accept the provided information.

Vigilance has the effect of making people honest, which, in turn, explains why our trust is usually not misplaced. People intuitively realize that their audience will critically evaluate the information they provide, and hence adjust the information to the expected standards of their audience (Sperber, 2013). Moreover, since people gauge the reliability not only of the content but also of the source, it pays to provide people with accurate information and so build a reputation as a competent and benevolent source (Altay et al., 2020). Because the targeted audience expects the source to have such considerations, it can in turn expect both the content and the source to be trustworthy. Trust grows in vigilant soil.

The fact that people often have misbeliefs indicates that the processes involved in epistemic vigilance are not entirely foolproof. One can be wrong in one’s evaluation and become a victim of deceit. Scientists can get away with fraud precisely because their peers expect them to submit to certain norms of scientific conduct (Ritchie, 2020). However, the problem with vigilance is not so much that we are overly trusting, but quite the opposite (Mercier, 2020). We tend to discard useful information more than we accept wrong information. As soon as new information clashes with our previously held beliefs, or when we do not trust a source for whatever reason, we shut our epistemic gates and hence miss out on many opportunities to learn important lessons from others. However, the source is not entirely without ammunition, as she can crack open the gates by persuading the addressee to accept the information she provides. She can do so by providing reasons (Mercier & Sperber, 2011, 2017).

5 Reasons

Philosophers and psychologists have long acknowledged and emphasized the role of reasoning in the production of knowledge, even at the expense of other sources of knowledge such as experience and, especially, communication. Plato, for instance, believed that reasoning enables our mind to escape the vagaries of our lives and attend to the all-important supernatural realm of immutable and perfect forms. Descartes assumed that by thinking clearly and distinctly he could attain unassailable divine truths about the world. More recently, psychologists who work under the banner of dual-system theory have proposed that the reflective or reasoning system corrects the mistakes of the intuitive system and hence enables us to improve our individual thinking (e.g., Kahneman, 2011).

The recently developed interactionist theory of Mercier and Sperber (2011, 2017) challenges this traditional view and argues that reasoning is a social rather than an individual process. The authors start from the observation that if the function of reasoning is to enable us to attain better beliefs on our own, it does not do a very good job. People tend to make systematic reasoning errors such as the confirmation or myside bias. The authors suggest that these errors can be explained if one assumes that reasoning serves a social function, namely to convince others and to justify ourselves, both of which are everyday processes. A shop assistant might try to sell you a sweater by mentioning the quality of the fabric or the low price. When a young dentist accidentally touches a nerve, she might invoke her inexperience to account for her clumsiness.

People, however, remain vigilant and are not easily swayed by reasons, and this for, well, good reasons. The production of reasons tends to be “biased and lazy” (Mercier & Sperber, 2017, p. 9). Biased, because we tend to look for arguments that support our case (which explains the confirmation or myside bias); and lazy, because why invest time and energy in finding a better argument if a readily available but weaker one suffices? As addressees want to avoid accepting false information or endorsing behaviour that might harm them, they will be more critical in evaluating the provided reasons (Sperber et al., 2010). The result of such vigilance is that producers have to make an effort and provide reasons that not only reflect their own point of view, interests, and concerns, but also tie into those of their addressees, and hence become less self-serving (Mercier & Sperber, 2017).

Interactive reasoning plays an important role in science as well. As sociological studies of science have pointed out, scientific reasoning is not a matter of rigidly applying the rules of logic, but of exchanging arguments and providing justifications, and hence is not different from ordinary reasoning. For instance, Ziman (1968, p. 8) notes in his Public knowledge: The social dimension of science that “the reasoning used in scientific papers is not very different from what we should use in an everyday careful discussion of an everyday problem”. Scientists try to convince their peers of their views to make them acceptable so that they become part of the consensus. In a similar vein, in their famous anthropological study of science, Latour and Woolgar (1986 [1979], p. 76) argued that the everyday process of argumentation is key to understanding how scientists create facts:

(...) everything taken as self-evident in the laboratory was likely to have been the subject of some dispute in earlier papers. In the intervening period a gradual shift had occurred whereby an argument had been transformed from an issue of hotly contested discussion into a well-known, unremarkable and non contentious fact.

Longino (2002) labels this process, by which interactive reasoning changes individual opinion into knowledge, “transformative criticism”. Scientists provide arguments and justifications for their hypotheses and practices. The individuals who manage to convince their peers will have their beliefs and practices accepted by their research community as objective knowledge (in the sense of an intersubjective consensus). These are the beliefs and practices that the community assumes are best supported by reasons. Reasons thus also set the standards for what counts as knowledge and for proper ways of producing it. They determine which beliefs and practices are acceptable to the community. What these standards are differs from domain to domain, from problem to problem, and hence from community to community. For instance, ways of producing knowledge in high energy physics are markedly different from those in molecular biology, as their respective subjects require different methods, different technologies and artefacts, different notions of what counts as an observation, and so on (Knorr Cetina, 1999). Hence, scientists working in a particular domain create what Longino calls a “local epistemology”: through their interactions they set the standards for knowledge not for the whole of science, but within their own research community.

In presenting their work, scientists will try to convince their peers that their beliefs and practices meet the standards of their community. As such, they will place them, as Sellars (1963, p. 107) famously noted, in the “logical space of reasons”.

The focus on beliefs and practices, however, obscures the involvement of human individuals. Talmont-Kaminski (2020), for instance, argues that in science vigilance is primarily targeted at the content, not so much at the source of information. However, scientists are concerned not only with checking the trustworthiness of communicated information and its source, but also with the extent to which their peers can be relied upon as collaborators. I will argue next that the reasons that determine the fate of scientific beliefs and practices also regulate the behaviour and interactions of the individuals who bring them about.

6 Local moralities

Science is the collaborative effort to produce knowledge about the world. As collaborators, scientists thus face the problems that are typical of and common to all forms of cooperation. One crucial problem that any individual must solve is to figure out who is a reliable partner and who is not. If one has several options available, then one enters a biological or cooperative market that consists of all potential cooperative partners. In such a market the individual can shop for trustworthy collaborators (Barclay, 2013; Noë & Hammerstein, 1995). An individual does not care to collaborate with someone who wants to profit from the interaction at her expense. As such, we can expect any individual to exercise strategic vigilance, that is, to look for cues that reliably indicate the trustworthiness of a potential partner (Heintz et al., 2016). One important such cue is an individual’s reputation (Raihani, 2021, p. 195). If a person has the reputation of having reliably collaborated in the past, then this might indicate that she will act accordingly in the future. As such she indirectly reaps the benefits of her previous collaborations (“indirect reciprocity”, see Alexander, 1987). A person’s reputation can be gleaned from her track record – how has she behaved so far? – or established by hearsay – what are other people’s experiences in collaborating with this person? (Alexander, 1987) In response, an individual will try to build a good reputation for herself by collaborating reliably, so that people can see for themselves that she is a reliable partner and will say nice things about her (Dores Cruz et al., 2021; Heintz et al., 2016). However, how people interpret one’s actions and what they say behind one’s back is not entirely under one’s control (Origgi, 2018). But one is not entirely at the mercy of others’ evaluations and gossip either. One can actively manage one’s reputation not just through one’s actions but also by providing reasons (Mercier & Sperber, 2017).
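The logic of such a reputation-based cooperative market can be made concrete with a toy simulation. The sketch below is merely illustrative: the payoff values, the reputation-update rule, and the refusal rule are my own assumptions, not parameters drawn from Noë and Hammerstein (1995) or Alexander (1987). It only shows that when agents can track reputations, shop for partners, and refuse ill-reputed choosers, cooperators outearn cheaters, whereas without such vigilance cheaters profit.

import random

def run_market(vigilant, rounds=2000, seed=1):
    """Toy cooperative market: ten agents, half unconditional cooperators,
    half cheaters. Each round one agent shops for a partner. If vigilant,
    partners are picked in proportion to their public reputation and may
    refuse a chooser with a bad track record (indirect reciprocity)."""
    rng = random.Random(seed)
    agents = [{"coop": i < 5, "rep": 0.5, "pay": 0.0} for i in range(10)]
    for _ in range(rounds):
        chooser = rng.choice(agents)
        others = [a for a in agents if a is not chooser]
        if vigilant:
            # strategic vigilance: prefer partners with a good reputation...
            partner = rng.choices(others, weights=[a["rep"] for a in others])[0]
            # ...and refuse choosers with a bad one
            if rng.random() > chooser["rep"]:
                continue
        else:
            partner = rng.choice(others)
        for actor, other in ((chooser, partner), (partner, chooser)):
            if actor["coop"]:
                actor["pay"] -= 1.0   # cost of helping
                other["pay"] += 3.0   # benefit of being helped
            # observers update the actor's public score (track record, gossip)
            step = 0.05 if actor["coop"] else -0.05
            actor["rep"] = min(1.0, max(0.01, actor["rep"] + step))
    mean = lambda group: sum(a["pay"] for a in group) / len(group)
    return (mean([a for a in agents if a["coop"]]),
            mean([a for a in agents if not a["coop"]]))

for vigilant in (False, True):
    coop, cheat = run_market(vigilant)
    print(f"vigilant={vigilant}: cooperators earn {coop:.0f}, cheaters {cheat:.0f}")

Note that the crucial ingredient in this sketch is not punishment but partner choice: cheaters are not harmed directly; they are simply left without collaborators, which anticipates the market logic of ostracism discussed below.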

Reasons play an important role in reputation management because they enable people to indicate that they are reasonable and hence cooperative community members. People commit themselves to the norms that these reasons imply (Mercier & Sperber, 2017). For instance, when Sara argues that she no longer puts the heating on because she does not want to support Putin’s regime, she thereby commits herself to that standard. To the extent that others find this standard acceptable, they will assume that Sara’s decision is justified and that she herself acts on good reasons. As Mercier and Sperber (2017, p. 186) note, “good reasons are seen as justifying not just a thought or an action, but also the thinker of that thought, the agent of that action.” In a similar vein, scientists are justified when their beliefs and actions are supported by reasons their peers find acceptable. By providing the right sorts of reasons (e.g., the method used as a reason to justify the data, the data to defend the hypothesis, and so forth), a scientist not only attempts to justify her beliefs and behaviour, but also tries to justify herself. She indicates that she behaves and thinks in the ways a member of her research community is expected to behave and think. Hence, the reasons that scientists provide result not only in local epistemologies, but also in local moralities. They determine what a scientist is allowed to say and do if she wants to be known, and thus build a reputation, as a reasonable and hence cooperative member of her scientific community. If a scientist wants to do or say things that the current set of reasons does not permit, she will try to alter the set by introducing new reasons so that she is justified in her beliefs or actions after all, just as in any moral system. She thereby places her beliefs and actions not only in the logical but also in the moral space of reasons.

Although reasons play a crucial role in science – they determine which individuals, actions, and beliefs are acceptable – their impact might be mostly indirect. Reasons usually function as post hoc rationalizations, justifying the outcome of processes that crucially do not depend on reasons. The function of reasons is not to accurately describe our mental states or attitudes; rather, we employ them as social tools to manage our reputation (Bergamaschi Ganapini, 2020; Dennett, 2017; Haidt, 2001; Kurzban, 2010; Mercier & Sperber, 2017). In fact, scientific processes tend to be a lot messier than scientists pretend when they provide reasons (Latour & Woolgar, 1986 [1979]). As Popper (1972) already realized, scientists do not proceed as rationally as we might like to think. They work much more intuitively, based on trial and error. They have hunches about possible solutions and the roads that might lead to them. When they are lucky, their hunches turn out to be correct, but often scientists run into dead ends (Grinnell, 2009, p. 9). But when things go wrong, scientists usually do not give up easily. They provide rationalizations for why the research did not work out the way they assumed, thereby displaying a myside bias like any human being (Mercier & Heintz, 2014). They reconsider their position only when their peers make them do so through interactive reasoning (Dunbar, 1995; Mercier & Sperber, 2017, p. 318).

None of this messiness ends up in the official reports that we call scientific articles. Take any journal article and you will not read an accurate description of the goings-on in the lab, but, as Ziman (1968, p. 34) aptly notes:

The work as published is no mere chronicle of the research as it took place; it is a much more contrived document, with its logical teeth brushed and its observational trouser seams sharply creased.

However, reasons do exert an ante hoc effect, as scientists will try to conduct their research in such a way that it enables them to justify their newly introduced beliefs, methods, or results, and themselves, when they present their work. For instance, they will rely on methods that they know are acceptable within their research community. By following those methods, they will later be able to convince their peers of their results and the hypotheses they support. Reasons thus influence the course of science, as scientists adjust their views and actions to the normative expectations of their peers. However, when their research does not deliver the expected outcome, scientists might manipulate their study to get a result after all. Strategies such as p-hacking, file-drawering, and even outright fabrication of data become tempting options (Ritchie, 2020). To prevent such misbehaviour, an increasing number of scientific journals demand preregistration. Preregistrations might suggest that reasons directly cause scientists to behave in such and such a way. However, given the function of reasons in general, it seems more plausible that preregistrations do not simply provide accurate descriptions of scientists’ psychology but constitute commitment devices by which scientists promise to behave in acceptable ways. They are tools for reputation management.

Scientists also adjust to these normative expectations by presenting their work in an impersonal style, giving the impression that their own individual activities and contributions are of no concern (Grinnell, 2009). This move entirely obscures the fact that science depends on individuals seeking to establish a good reputation as reliable, trustworthy partners in the collaborative pursuit of knowledge. However, as Shapin (1994) argued, the fact that the role of the individual, and hence the moral character of science, is largely rendered invisible does not mean that it does not exist. By relying on cognitive and evolutionary insights about human cooperation and the role of reputation therein, I have tried to bring this moral character to light. In the final section I will argue that this has important implications for our understanding and the philosophy of science.

7 Discussion

7.1 Science and reputation

If science is a moral system, it should display the typical features of such a system, and the account should explain why it has those features. One such feature is that scientists evaluate not only beliefs and practices but also their fellow scientists. This means that when scientists behave improperly, as in cases of fraud, their peers will not only reject the beliefs, practices, and data that are involved in and result from the fraud; they will also punish the perpetrator, usually by ostracizing her. A scientist who behaves in ways that are unacceptable to the community is banned from the cooperative market and hence no longer allowed to participate in the production of knowledge.

Because violating one’s community’s normative expectations can bring high personal costs, most scientists will tend to believe and behave in acceptable ways to secure their reputation as reliable collaborators (Blais, 1987). However, psychologists theorize that humans are probably not Machiavellian strategists who calculate their reputational score and only do good when they know people are watching – although people do behave better when being watched (Baumard et al., 2013; Heintz et al., 2016; Sperber & Baumard, 2012). As such calculations would probably miss opportunities to promote our reputation, most humans are disposed to act pro-socially, so that they will tend to act in ways that further collaboration. Hence, they demonstrate that they are reliable collaborators without necessarily being consciously aware of the reputational gains their behaviour brings (Heintz et al., 2016). In the same vein we can expect scientists to adopt a pro-social stance and do what they are expected to do to advance the collaborative production of knowledge (Frost-Arnold, 2013; Rolin, 2020). Despite the alarms raised about the decay of science, most scientific output will live up to scientific standards because, overall, scientists will tend to behave in ways that they can justify. This does not mean that no scientist will ever be tempted to cheat, that certain conditions such as high publication pressure cannot lead more individuals to risk reputational damage and break the rules, or that the standards are always what they should be (Ritchie, 2020). Therefore, efforts to raise the standards of science under the banner of open science, including preregistration, should be welcomed. But overall, we can expect most scientists to be honest without much need for external measures. Their prosocial nature will direct them to acceptable beliefs and behaviour.

Practically, the tendency of scientists to conform to the prevailing norms might bring good news for science. Its standards are upheld because scientists desire to be, and to be known as, reliable collaborators in the production of knowledge. At the very least, we must take these considerations into account when calculating the costs and benefits of measures to improve science. If a large majority of scientists is already prone to behave well, then most scientists might not need extra policing, and the costs of such measures might exceed their benefits. However, scientists’ desire to conform may also have negative effects. When a scientific community gets stuck, continuing practices and endorsing beliefs that are unjustifiable when evaluated from outside the community, many members of that community will still tend to bring their individual beliefs and behaviour in line with the available reasons rather than challenge them. As in any moral system, it takes courage to question the status quo and endeavour to alter the standards of the community (Pennock, 2019). If one does not behave or believe as expected, one risks being ostracized by the community. The conservative streak in science is understandable, as most alterations of standards do not result in improvements. Think, for instance, of purveyors of pseudoscience who continue to question the standards by which they are excluded. At times, however, some of the changes bold individuals suggest are worth the risk, as they eventually make it into the consensus and bestow on the individual or individuals who introduced them a reputation beyond the grave.

Cognitively, we can predict that the processes by which scientists defend and adjust their beliefs and practices depend on the same mental mechanisms and processes by which people in general adjust their beliefs and behaviour to the normative expectations of their social surroundings. For instance, we can expect moral emotions such as guilt, shame, and anger, by which we commonly regulate our own and others’ behaviour, to affect scientists’ behaviour as well. We can also hypothesize that the way in which scientists invoke reasons in support of their beliefs and practices is similar to how we use reasons in our everyday lives to justify ourselves and manage our reputation. Conceptual change, an important phenomenon in the development of science (Kuhn, 1962; Thagard, 2012), can then be understood not just as a process by which individuals adjust their beliefs with the intention of making them more accurate, but also as one by which they bring them in line with the normative expectations of their relevant peers. This could mean that conceptual change is difficult to realize in the minds of students not only because scientific concepts are highly counterintuitive (Shtulman, 2017), but also because students are expected to bring their beliefs in line with the normative expectations of a group they do not care about. As they do not run a serious risk of reputational damage (and can even gain a reputation bonus with the people they do want to associate with), they feel no urge to adjust their beliefs. These predictions are admittedly speculative. However, I hope to have shown that thinking about science as a moral system enables us to develop novel hypotheses about the cognitive processes underlying science that warrant and might inspire further empirical investigation.

7.2 History, sociology, and philosophy of science

Historians, sociologists, and philosophers of science have studied the moral dimensions of science at length. By introducing cognitive and evolutionary insights on human cooperation, however, the account of science as a moral system does not merely confirm that science has such dimensions; it also delivers insight into the cognitive and communicative processes that give rise to them. Furthermore, it explains why science has important moral dimensions in the first place. The reason is that scientific knowledge production recruits cognitive and communicative processes that are geared towards cooperation, involving the identification of trustworthy sources and partners, the production and evaluation of reasons, and reputation management. As such, the conception of science as a moral system provides a framework theory that helps to make sense of the historical, sociological, and philosophical studies.

Let us first return to Shapin’s historical and sociological account of the role of trust in science. According to Shapin, the natural philosophers at the start of modern science settled the problem of trust by attributing to one another the essential character of a gentleman. If one had such a character, then one was supposed to be honest. By attributing and essentializing such a trait, however, they were not providing an accurate description of a character that was peculiar to each member of their community. Instead, they were loosely labelling their expectation that any of their colleagues would submit to the normative expectations of their peers and thus be honest to maintain their reputation as a member of the community. Their membership, however, crucially depended not on having an essential trait but on their disposition to act and think in ways that their community of natural philosophers found acceptable. And even gentlemen had to rely on reasons to justify their beliefs, methods, and findings and to demonstrate their reasonableness and trustworthiness. Hence, we can understand Shapin’s analysis as a particular historical case study of a solution to a problem that is common to all scientific endeavours, as they are all instances of human cooperation.

An example of a sociological analysis that focuses on the moral dimensions of science is Robert Merton’s theory of scientific norms (Merton, 1973). According to Merton, science is characterized by four such norms: universalism (everyone can contribute to the production of knowledge based on objective criteria), communism (scientific knowledge is a common good contributed by and accessible to all scientists), disinterestedness (scientists should not pursue a personal agenda but should aim to contribute to the production of knowledge), and organized scepticism (scientists look critically at their own and one another’s work; see the discussion above). Merton’s account was more about the moral dimension of science as an institution and less about the behaviour of individual scientists. Nevertheless, it recognizes that science comes with a normative framework that regulates the beliefs and practices of scientists. Thinking about science as a moral system explains why. If scientists do not behave and think according to these norms, they do not act and think in justifiable and hence acceptable ways, and they will no longer be reliable cooperative partners in the production of knowledge. The account sketched above thus makes more precise the cognitive and communicative processes by which scientists conform to such institutional norms.

The account of science as a moral system also sheds light on recent approaches in philosophy of science based on virtue epistemology. Virtue epistemology builds on the idea that knowing depends not solely on the content of knowledge but also on the proper disposition of the knower. Philosophers of science emphasize that science involves a particular mindset, a set of virtues, or a scientific attitude, by which scientists strive to live up to the values that are deemed central to the scientific enterprise of knowledge production (McIntyre, 2019; Pennock, 2019). The focus thereby shifts from scientific beliefs and practices to the individual scientist. Instead of asking what conditions beliefs and practices must meet, the question becomes what mindset, virtues, or attitude the scientist must adopt. Such a “philosophy of the scientist”, as Pennock (2019, p. 11) puts it, brings attention to the important role scientists as individuals play in science, which accords with the account that I presented above. The scientific mindset or attitude comes about as reputational concerns motivate scientists to bring their beliefs and behaviour in line with the normative expectations of their community. Scientists who are disposed to behave and think according to those expectations are virtuous. Scientists bring attention to this alignment by providing the sorts of reasons that they think will indicate that they are justified in believing what they believe and doing what they do.

In sum, the fact that students of science have been able to analyse and describe science in moral terms such as norms and virtues suggests, and can be explained by, the thesis that science constitutes a moral system.

7.3 Scientists anonymous?

A possible objection to the account of science as a moral system, and the role of reputation therein, is that science’s organized scepticism comes largely in the form of anonymous peer review. Since the authors are unknown to the referees (and sometimes also to the handling editor), the referees cannot evaluate them as individuals. This seems to contradict one of the central ideas of this paper, namely that scientists evaluate not only the products of science but also their producers. However, peer review can be interpreted as an institutional recognition of the fact that in evaluating beliefs and practices we also evaluate the people who are responsible for them. It is, in fact, a specific and unique cultural construct that prohibits people from doing so. As such, the institution of peer review might be quite counterintuitive. The reason is that we want to hold people accountable for what they say and do. If that is not possible because people are anonymous, we intuitively realize that people are more tempted to misbehave. And we are not wrong. Think, for instance, of the cruel reports that some referees dare to write. We also see this concern about anonymity in the controversy surrounding the Journal of Controversial Ideas, which allows authors to publish their theories anonymously. The establishment of the journal itself constitutes a recognition of the fact that we do not only evaluate beliefs but also their producers. If an author proposes an idea the community finds unacceptable, this might have dire consequences for the author’s reputation and career. However, the journal’s critics have argued that such anonymity gives anyone the opportunity to publish immoral or pseudoscientific ideas without being held individually responsible (e.g., see Stokes, 2021). The way peer review is now usually organized might be an attempt to strike a balance between anonymity and accountability. Anonymizing the authors during the review process forces the referees to focus on the reported beliefs and practices without evaluating the individual scientist. However, when the work is published, the anonymity is lifted, not only so that scientists can reap the reputational benefits of their work but also so that they can be held responsible for their beliefs and behaviour. Such accountability thus discourages scientists from misbelieving and misbehaving and motivates them to bring their beliefs and behaviour in line with the normative expectations of their peers.

8 Conclusion

I have here only scratched the surface of the implications of thinking of science as a moral system. However, the sketch above points in the direction of a novel approach to science. Science does not build solely on our capacities to develop knowledge about the world. It also recruits our capacities for creating and altering moral systems that enable and regulate cooperation and to which individuals adapt because of reputational concerns. In our collaborative efforts, truth is only one of our concerns; it is only instrumental in making cooperation successful. With science, however, humans have developed a peculiar and historically rare moral system that puts the production of knowledge as its main target (Pennock, 2019). I do not thereby wish to imply a sceptical view of science in which scientists’ pursuit of reputation undermines their pursuit of truth. This conclusion does not follow. My account only brings to the fore that, for individuals who wish to participate in the collaborative production of knowledge, reputation is a valuable good. If a scientist wants to pursue truth, then she had better believe and act in justifiable ways, which she demonstrates by providing reasons that she shops for in the available pool of reasons. Otherwise, she runs the risk of being ostracized and excommunicated and, consequently, of having to give up her pursuit of truth. A conclusion that does follow is that we can better understand how science works by studying it as a moral system; and, vice versa, we can understand moral systems better by studying science.