Continuing our argument of splitting the workings of culture and practice into four dispositions of doing, valuing, knowing and accounting, we propose to distinguish between two main categories of norms and values. On the one hand, we discern those norms and values that are sanctionable: one typically experiences unfavourable consequences if one does not live up to them. On the other hand, we discern those norms and values that are aspirational: things that are good to do, and that possibly make you into a bad scientist if you do not hold them dear, but for which there is no formal way to enforce them. The rationale behind this tentative classification is that sanctionability naturally places an issue at the institutional level: how sanctions are shaped and executed is literally enshrined in rules, regulations, procedures and the formal responsibilities of offices. Thus, if the world consisted only of institutional and individual responsibilities, sanctionability would be an informed guess of where the boundary lies. This rationale thus guides our inquiry into culture: as a working hypothesis, sanctionable values are a concern of institutions and management, whereas aspirational values are a concern of research scientists and the research communities they work in.
One important proxy question for this boundary condition is who or what suffers in case the value is breached, which provides a direct answer to the question of accountability. With aspirational values, the consequences of breaching fall primarily on the researchers; they will typically suffer a loss in reputation. Conversely, when breached, sanctionable values lead to liability for the institution, harm to any patients or research subjects involved, or a corruption of the body of scientific knowledge (see Shaw 2019 for a treatment of this last point). Thus, the question of whether a value is aspirational or sanctionable also depends on the distribution of benefits, ownership and liability, and hence power, between the researcher and the institution.
The distinction between sanctionable and non-sanctionable values is compatible with the observation by Horbach and Halffman (2017), who show that sanctionable values are more the language of policy makers and journalists, whereas aspirational values appear more in the language used by scientists themselves. Similarly, Israel and Drenth (2015) note that aspirational values lag behind in terms of their realization in practice. Finally, it resonates with the observation by Davies (2019) of a tension between ideals of good science that researchers aspire to, and the abstract, principle-based codes that seem not to capture these ideals. The exact distinction between sanctionable and aspirational values remains contingent, and consequential for which practice will prevail, which is exactly why this level of practice merits further attention in RI theory.
Sanctionable Values
Sanctionable values are in a way the “hard boundaries” of what gets defined as proper scientific research. According to Plemmons et al. (2006), knowledge of these principles is successfully conveyed in RI courses. One could think of the avoidance of fabrication, falsification and plagiarism. Other clear examples are the proper use of informed consent in medical research, and the principle in the engineering sciences not to accept assignments for which one lacks the proper qualifications. We may also expect these hard boundaries to play out in explicit ways in who is, and who is not, included in the practice.
In the following, we highlight four values that circulate primarily as sanctionable. The list is not exhaustive and is to some extent even arbitrary. The items are merely intended to exemplify how such sanctionable values can be thought to connect to substantiated notions of practice and culture.
Avoidance of Falsification, Fabrication and Plagiarism
Falsification, fabrication and plagiarism (FFP) count as the epitomes of a lack of RI. Through plagiarism, credit is withheld from the people who have actually done the research. And through fabrication and falsification, statements enter the scientific knowledge base that are in fact untrue (Shamoo and Resnik 2015, p. 38). Such cases are typically resolved through institutional measures, but it is worth asking how FFP can emerge, in light of the above definitions of practice and culture. Perhaps there are circumstances that at least enable people to “give it a try” and attempt to get away with improper behaviour—even though today, most institutions and publishers have access to some form of plagiarism check (Luparenko 2014). Although these automated checks are not perfect (and probably will not be any time soon), it requires skills and intricate knowledge of the whole chain of scientific knowledge production to get away with plagiarism. These chains of knowledge production are discipline-specific and practice-specific. Hence, in order to stand a reasonable chance at successful plagiarism, one has to be a member of the practice in the first place.
A similar argument can be made about falsification and fabrication. If researchers want made-up knowledge to appear credible, they need intricate knowledge of how their claims will be assessed in the peer-review process. This knowledge is only available in the practice itself, and can only be learnt in the same way other skills are transferred in practice: through mentoring, practising, and various forms of teaching.
This means that, apart from the obvious sanctioning of FFP-related misconduct, the ways in which the practice itself makes such conduct possible in the first place could be subject to further reflection. In a way, the usual training is a perfect preparation for actually committing the transgression. Carrying this to a conclusion on a substantiated notion of practice, it could be suggested that the master-apprentice relationships in which the skills are transferred could do with more reflection on how such skills can be (and should not be) abused. Similarly, the repetitiveness of practices could be taken as an object of reflection in case misconduct emerges: what were the patterns of action that led to the misconduct, or at least failed to eliminate it? Has anything been lacking in those patterns that could over time have served as an additional safeguard against the mishap taking place? And, to relate to the different roles that “things” can have in a practice: is there any way in which the infrastructure of automated plagiarism checks could have been used or arranged differently, so as to improve its performance (possibly combined with additional human skills) and prevent plagiarism?
In terms of the questions of culture and practices, it is clear that even if a practice does not force a person to “do” FFP, it at least enables them to. At the same time, material entities like plagiarism checks counter this ability to some extent. Also, the practice expresses an ambiguous valuation of cutting corners: it should not be done, but if successful, it may help one’s career advance.
Fair Credit
Closely related to the problem of plagiarism is the expectation that scientists are fair about what is their own merit and what is the work of others. Authorship should be attributed to the people who have actually deserved it through their work. People should also be credited through other means, for instance by citing their work (Plemmons et al. 2006). Consoli (2006) shows, however, that the category of “author” is far from unproblematic: notions such as “responsibility for the output” and “relative contribution to the output” are hard to quantify or to compare to some sort of threshold. Also, many aspects of the incentives and rewards for authorship that define the landscape in which publishing takes place (Martinson 2017) are such that fair credit is not always an attractive option. In addition, there are very clear power relations between seniors and juniors that interfere with the practice of fair credit (Shah et al. 2018). Thus, the meaning of the category “author” is not self-evident and univocal, which means that it will receive different specifications in different contexts. Taking and giving credit corresponds directly to the distribution of accountability and responsibility. This is thus a mechanism through which culture may play a more important role than institutional relations or individual virtue, and through which socialization into a practice reproduces it.
Thus, the intricacies of fair credit and the diversity of its practical implementations clearly provide a clue as to where a culture may prioritize a specific valuation over others. Also, who is accountable for the exact acknowledgement of credit will differ between practical situations: in some disciplines, hierarchy is such that research leaders are co-authors by default, while in others they are not. At exactly this point, Thornton (2013) argues that entitlements are dominantly shaped by masculine and neoliberal norms.
Transparency
Research should be transparent, or so the consensus can be assumed to be. An editorial in Nature (2017) provides five steps to substantiate transparency: pre-registration or publication of a research protocol prior to conducting the research; pre-publishing a draft before final submission of the paper; releasing the data analysis plan; releasing the analysis code; and publishing the data set. It needs saying that these steps are deeply rooted in a biomedical and natural science approach, and generalizing them to other fields, notably the social sciences and humanities, might involve some critical and problematic translation. What such steps would look like in a strictly theoretical exercise like mathematics, or for example in anthropology, where anonymity and confidentiality are key to the production of data in the first place, remains to be debated (see also Penders et al. 2019; Irwin 2018). For example, Spier (2006, p. 189) emphasizes the need for rigour in method and reporting. It is at this point already instructive, as Consoli (2006) argues in reference to the US Federal Policy on Research Misconduct, that the presentation and publication of proper facts is, in that policy, considered a more important responsibility than the exact conduct in the lab that precedes that very publication. Remarkably, in a large-scale study on how scientists conceive of good research practice, Hangel and Schickore (2017) argue that the reporting of method in particular often remains notoriously obscure. They also show that transparency of primary material is often obfuscated, for example by working with numerical codes that nobody can decipher.
Regarding transparency, the answers to the questions of action, value, knowledge and accountability are ambivalent. Transparency makes an individual accountable, but forces them to give up any competitive edge related to knowledge ownership, which is a particular way of valuing. Also, the elegant presentation of, for example, methodology is a skill that requires training, which likely comes with mentorship and jargon, and the membership that is constructed through those. Transparency is thus ambiguous, and it is crucial here that this ambiguity cannot be resolved by clearer (institutional) rules, nor by (individual) moral deliberation, which leaves accountability itself ambiguous. Thus, even if transparency appears sanctionable, it depends on the practice context how it unfolds exactly.
Human Dignity
Perhaps the most ambiguous value in the category of sanctionable values is that of human dignity. On the one hand, it emerges as strongly sanctionable from historical failures such as the Tuskegee experiment (Brandt 1978; Daugherty-Brownrigg 2013) and the atrocities of research in Nazi concentration camps (Baumschlag 2005). At the same time, standing definitions do not help us very much. For example, Drenth (2006, p. 17) defines dignity as the safeguarding of all individuals’ autonomy and freedom of choice, which in the case of participation in research is chiefly shaped as informed consent, and as the rejection of every intent to commercialise the human body. Similarly, Spier (2006, p. 191) defines it as the avoidance of any intended negative effects on the environment and society, both for current and future generations. In a general sense, dignity has been observed to be a term that is utterly vague and usually captured to defend very particular interests (Macklin 2003; Pinker 2008).
Hence, in addition to the aforementioned procedural implementations, it seems that dignity remains importantly a matter of “good intuition”. While this may be more open to individual moral insight compared to, for example, transparency, it is also a matter of how the Tuskegee and Nazi stories circulate in courses and mentorship relations. Thus, this is a matter of how people “know” things, including knowing in a particular way how their research relates to the obvious atrocities. Also, the translation of these stories into concrete decisions on the work floor is dependent on the “doing” and “valuing” at specific times and places, in ways that cannot be reduced to institutional rules nor to individual qualities.
From Sanction to Practice
Even though we started the present set of examples as a tentative list of sanctionable values, in all cases there are sides to them that are not resolved by sanctioning or other institutional arrangements. The realization of these values depends on how routines circulate, how actions are valued, and how responsibilities are distributed between people, and between people and the institution. It also, in some cases, depends on practice-based skills with respect to research devices as well as (working around) plagiarism checks. For all the values listed here, we see that the responsibility for their realization is not reducible to either the individual person or the institution.
Aspirational Values
Starting from the assumed sanctionability of the values above, we observed that there are in fact more cultural and practice-related aspects to them than might be suggested by their initial appearance as sanctionable and the corresponding institutional responsibility to secure them. What does it look like if we start from the other end of the spectrum, i.e. values that appear as aspirational and hence connected to individual responsibility? One could think of honesty, scrupulousness, independence and responsibility (KNAW 2018). These are said to be less successfully conveyed in RI courses (Plemmons et al. 2006). Shamoo and Resnik (2015, p. 283) argue that beyond avoiding harm, scientific research should be aimed at furthering the public good and public knowledge. They conclude that little substantiation has so far been given to these aims, which we take as a hint that their substantiation takes place in practice. Following this line of thought, we discuss four such aspirational values and how this substantiation can be understood.
Integrity
It may appear circular to discuss “integrity” as a constituting value if it is also the overarching goal. Clearly, the sanctionable values above are part of it. Nonetheless, notions of integrity proper do circulate in much the same way as aspirational values do. For example, Becker (1998, p. 157), as quoted in Breit and Forsberg (2016, p. 15), understands integrity as “the principle of being principled, practicing what one preaches regardless of emotional or social pressure, and not allowing any irrational consideration to overwhelm one’s rational convictions”. A lack of integrity (ibid.) consists of a lack of principles, a lack of consistency in moral principles, and behaviour influenced by social pressures. In other words, integrity is the capacity to act in accordance with moral principles, but those moral principles themselves are not further substantiated, or at least not within this definition.
The substantiation that such openness calls for is not essentially the responsibility of the individual or the institution, nor essentially the product of culture and practice. Rather, it will be a combination of those, and the balance may be tipped differently in different cases. Nevertheless, discussing the value of integrity here is instructive: it offers a clear example where limiting the analysis to individuals and institutions would overlook the importance of how knowing, doing and valuing are predisposed in practice.
Inquisitiveness
Many sources mention inquisitiveness and curiosity as primary virtues for scientists (Shamoo and Resnik 2015; Drenth 2006; Gläser et al. 2002). At face value, this appears as a predominantly personal trait. Yet, Shamoo and Resnik (2015, p. 61) argue that the choice of research topics, that is, what exactly the scientist practices curiosity on, is inextricably tied up with the resources that are available for doing research. This renders these virtues ambiguous as a personal responsibility: it is equally the institution’s responsibility to provide resources. Thus, institutional responsibilities clearly extend beyond the prevention of problematic behaviour. Also, this choice depends upon the research objects that are available. These objects thus become at once explanans and explanandum, given the effort that goes into constructing them in the first place (Knorr Cetina 1999).
The contextual character of inquisitiveness becomes even clearer if we think of what it takes to develop oneself as an inquisitive researcher: not only should the institutional atmosphere in some way be conducive to that, it also requires that one is trained to recognize the interesting scientific challenges. What is more, curiosity can only persist if there is legitimacy in trying out possible dead ends and accepting failures. It has in fact been demonstrated that current levels of competition and the pressure of acquiring scarce resources lead researchers to avoid such risks (Moore et al. 2017). Thus, the realization of the value of inquisitiveness is dependent on infrastructures such as funding and research agendas that enable it, but also on how the local practice allows and even values failure. The extent to which a researcher is free to be inquisitive depends on the hierarchical position one is placed in, and on how such hierarchies work in a specific practice. And what is valued as an interesting research problem is similarly inscribed not in the rules, but rather in the unwritten value schemes that circulate in the practice. To see inquisitiveness merely as aspirational would be to disregard this complexity. And to explain this contextual complexity, it is not enough to look only at the institutional arrangements.
Reflexivity
Consoli (2006) argues that scientists should have reflexivity, or the capacity to think about their own work from an external perspective, in view of the broader context to which their work connects. This reflexivity is needed to be able to deal with the moral complexities that research work inevitably comes with. To a large extent, along the lines of Consoli’s discussion, the moral thinking that reflexivity requires can be delivered by an individual person. Nonetheless, it is also self-evident that moral thinking can be supported by training as well as peer discussion, and both depend on what is done and not done in the immediate research environment, and on how such critique is valued. Are the customs of the practice such that there is space—in terms of time and place, but also psychological safety—to conduct such reflection? Are the meanings that circulate in the practice sufficiently open-ended to make engagement sensible, or are they rather fixed and hostile to reflection? These are clearly questions of collectiveness, practice and culture, not (merely) individual or institutional matters.
Collegiality and Trust
The need for a good collegial context and the duty to preserve that context are often mentioned. In fact, this is exactly one of the guises in which the unspecific notions of culture from which this analysis started often appear. This lack of specificity may contribute to the seemingly self-evident appearance of collegiality as non-institutional and hence aspirational, but such a conclusion cannot be drawn before looking in more detail at the constituting values.
One value that is slightly more specific is that of trust. The German Research Foundation DFG (1998) emphasizes the need for trust in the relations scientists build within their community, where building trust consists of maintaining clear and transparent procedures, accuracy in attribution and citation, and the accessibility of safeguarding facilities such as counselling and reporting. It also posits trust to be a necessary condition for any self-regulation of science to emerge. Yet, in contrast, Stroebe et al. (2012) have argued that such self-regulation, chiefly based on principles of peer review and replication, is insufficient to prevent fraud, and has indeed failed to do so in notorious cases.
Remarkably, both understandings are elaborated as more or less “manageable” issues, i.e. through procedures. Alternatively, in view of our discussion above of the concepts of practices and culture, trust could be seen as a relation between persons and groups of persons that consists of the belief that the other party in that relation is truthful and well-meaning. The extent to which such a belief can emerge depends on how people behave in daily practice, the narratives they repeat about what they think is important, and the responsibilities they avow to take. In some contexts, trust will primarily be conferred on one’s equals, and in other contexts more along hierarchical lines up and down. Or it may in some contexts more than others be connected to merit and the credit one person has acquired with the other.
One specific guise in which collegiality appears is the duty of peer review. It is mentioned widely as a core aspect of preserving the quality of scientific knowledge (Spier 2006; Hangel and Schickore 2017). In order to contribute to the progress of science, peer review should be done in a critical but fair and constructive way. Ripley et al. (2012) argue that teaching peer review is generally recognized as an important element of mentorship. Interestingly, they also argue that such mentorship could do with further training support for the mentors. Several sources (Bohannon 2013; Ioannidis 2005) show that peer review in practice quite often falls short, and fails to single out all instances of bad science. It is also biased, notably against interdisciplinarity and against diversity and inclusion (Rafols et al. 2012; Moore et al. 2017, p. 3).
A problem such as the bias against interdisciplinary research can only be understood as a defect of the research culture: along the lines discussed, it reproduces itself independently of both the positions of single researchers and institutional rules. Trying to resolve this through further rules and regulations seems futile, nor does it seem to be a matter of individual peer reviewers having bad intentions. Rather, it requires active reflection on how things are done and valued, and how responsibility is distributed.
From Aspiration to Culture
We started from the working hypothesis that aspirational values are more open to interpretation and more difficult to manage than sanctionable values, and therefore more likely to end up as individual responsibilities. However, for the exemplary values discussed here, it becomes clear that this attribution of responsibility is again complex, and by no means maps neatly onto the individual-institution dichotomy. For their substantiation, the aspirational values are dependent on the practice and on how people act, know and value within it. At the same time, it appears that this dependency is less clear-cut than with the sanctionable values, and the elements related to culture and practice are less tangible.