1 Introduction

1.1 Worst Case Scenario

In August 1971, Stanford University psychology professor Philip Zimbardo was barely a week into his ‘Prison Experiment’ when participants were already behaving according to their assigned roles. The twelve ‘prisoners’ were becoming passive, subordinate, almost inert, while the twelve ‘guards’ behaved more and more like bullies.

The prisoners and guards, all male students at Stanford University and all voluntary participants in the experiment, were randomly assigned their respective roles at the outset. Zimbardo’s goal was to study the psychological effects of perceived power. In particular, he was interested in ‘deindividuation’ and ‘dehumanization’ (the loss of personhood).

There was a major problem developing: the experiment was quickly getting out of hand. The ‘guards’ began to behave callously, and the ‘prisoners’ were beginning to actually suffer. After some deliberation, Zimbardo decided to discontinue the experiment.

The Stanford Prison Experiment (SPE) has remained the subject of heated debate to this day (see Haslam and Reicher 2012; Haslam et al. 2019). In a reanalysis of the original data, Le Texier (2018) found that Zimbardo’s narrative of the experiment was flawed in a number of respects. The data Zimbardo presented was incomplete and biased towards dramatization, and he did not disclose that the guards acted on precise instructions from him; his ‘experiment’ seemed designed more as a demonstration than as a scientific study. Bartels and Griggs (2019) concluded from these criticisms that textbooks should revise and repurpose their coverage of the SPE: authors should use it as a case study to teach students the importance of critical thinking and the value of self-correction in science.

Indeed, the educational value of historical cases such as these lies less in their isolated ethical and methodological shortcomings (of which Zimbardo himself seemed acutely aware) than in what they reveal about how our moral beliefs have changed over time. What are our ethical presumptions, and by which norms do we live?

Answering these questions is the purpose of this chapter. To do so, we need to begin with what ethics is and why it is important, and from there explore the various approaches to ethics.

1.2 What Are Ethics?

Peter Singer (2001), professor of bioethics at Princeton University, notes that people often like to believe that ethics is just ‘an annoying list of things you are not allowed to do, so you can’t have fun.’ That, he says, is not the case. Ethics is an inquiry into what is right and wrong, and into what is valuable and important. It attempts to answer the question of what you ought to do.

When performing research, you are inevitably going to make decisions that will affect others, and you need to know which of the available options is the best course of action. Which one to choose is not always immediately clear. Here are a few examples:

  • Suppose you want to know a respondent’s view on a particular subject, but directly asking about it would likely influence the respondent. Is it acceptable to mislead respondents so that the information you receive is more valid? Or should you be honest and open about your intentions, which would require fully informing them and possibly affecting your data?

  • Suppose you want to investigate certain behaviors, and to do so, your respondents need to perform a task that carries a small risk about which the respondent is fully informed. Do you still have a responsibility for the safety and well-being of the respondents even if they are fully informed about the risks? If so, how far does that responsibility reach? Does it end with the experiment or should you provide care afterwards?

  • Suppose you have made a discovery that could benefit some, but harm others. Should you publish the results or not? How should you reach your decision and on what criteria?

The answers to many of these questions have already been formulated, either in the form of general principles to be followed (‘codes of conduct’), or in the form of very specific rules that apply to certain situations or conditions (‘do’s and don’ts’), to which we turn later in this chapter.

However, even the strictest rules leave room for interpretation, and even in the most clear-cut cases, there may be more than one defensible course of action.

Thus, knowing what to do requires a degree of principled sensitivity: researchers should be sensitive to the rights and well-being of others. Many argue, furthermore, that this sensitivity does not stop at individuals; it extends to communities, animals, and even the environment at large.

1.3 Three Cases

Before we go into detail on the ethical approaches in research, let us first examine three concrete examples that will allow us to appreciate the various dimensions of ethics in a broader sense.

First, consider the horrendous hypothermia experiments carried out by Nazi doctors on prisoners during the Second World War. Prisoners were strapped naked to a stretcher outdoors in freezing winter conditions or immersed in ice-cold water, while data on their bodily responses was meticulously collected. The Nazis used this data to determine how much cold a human body could endure, arguing that the knowledge would come in handy on the Eastern Front (Berger 1992).

About a third of the prisoners did not survive the experiments. Post-war abhorrence of these types of experiments led to the ‘Nuremberg Code’, on which our present-day ethical codes are based.

The question here is obviously not whether the experiment’s procedures were ethical. The question is: could the data still be used? This has been the subject of ongoing post-war debate (Schafer 1998). In a discussion of this issue, David Bogod (2004, p. 1156) contends that unethically acquired data should never be used. He does acknowledge, however, that others would argue, ‘if some general good can come of the most evil acts, then those who suffered and died might not have done so entirely in vain.’ This quickly leads to a follow-up question: in honoring that argument, are we not assuming the subjects’ posthumous consent? The very first principle of the Nuremberg Code states that ‘voluntary consent of the human subject is absolutely essential.’

Second, consider the well-known 1961 obedience studies conducted by Yale psychologist Stanley Milgram. Milgram (1963) led his subjects to believe they were taking part in an experiment on learning. In the role of ‘teacher’, each participant was to administer electric shocks to a fellow participant, the ‘learner.’ The shocks supposedly increased in severity over the course of the experiment, and the learner’s protests grew ever more distressed. What the subject did not know was that the learner was in reality a stooge: an actor hired by Milgram to feign pain, who never actually received any shocks.

Milgram defined ‘obedience’ as the willingness to ‘carry out another person’s wishes.’ He wanted to know whether his research participants would continue with the experiment, and at what point they would decide that their cooperation was no longer justifiable. However, when subjects voiced doubts and proposed to stop, the researcher answered with a line from a script: ‘The experiment requires that you continue…’ (see Perry 2013) (Figs. 3.1 and 3.2).

Fig. 3.1 The Milgram Experiments. Subject in the study © Yale University Manuscripts and Archives

Fig. 3.2 The Milgram Experiment. Advertisement in the New Haven Register, June 18, 1961 © Yale University Manuscripts and Archives

Milgram’s work has been widely criticized, especially on ethical grounds. The controversy even affected his professional career (he was denied tenure at Harvard, see Miller 1986). Did his subjects know what they were in for? Had they been subjected to a tolerable level of pressure? Were they properly debriefed afterwards and was there any form of care offered if needed? The answer to most of these questions is ‘no.’ For that reason, the Milgram studies, even more so than the Stanford Prison Experiment, stand out as a landmark of unethical research.

Our third example concerns research on the origin of sexual orientation, specifically homosexuality or ‘same-sex sexual orientation.’ Homosexuality has long been, and in certain societies still is, illegal, a prohibition justified by the view that it is (or was) an aberration from the norm (Greenberg 1988).

Research into the ‘origin’ or ‘cause’ of homosexuality has by and large adopted this perspective of ‘abnormality.’ Psychoanalytic theorists, for instance, proposed that homosexuality might be caused by ‘arrested psychosexual development,’ often in the context of a dysfunctional family constellation. Forms of psychotherapy, it was reasoned, ought to be based on the idea that it could and should be possible to shift the sexual orientation back to ‘normal’ (Haldeman 1994).

With the gay rights movement of the 1960s and 70s, the official view changed. Homosexuality would no longer be considered an abnormality. Interestingly, researchers continued to search for the cause of homosexuality, though now in (socio)biological terms (notably genetic, hormonal, and environmental influences). This suggested that at its base, homosexuality was still something ‘unnatural’ and in need of an explanation. The underlying ethical consideration here is voiced by Schüklenk et al. (1997, p. 10): ‘Why is there a dispute as to whether homosexuality is natural or normal? We suggest it is because many people seem to think that nature has a prescriptive normative force such that what is deemed natural is necessarily good and therefore ought to be.’

From these brief cases we extract two provisional observations:

First, ethics in science is a complex and multifaceted issue. It is not just about setting up proper research protocols or treating subjects respectfully (though that is certainly important). It is also about the types of questions asked, the (implicit) presuppositions made, and the ways data is analyzed and communicated. This includes the impact scientific research may have on society and the responsibilities researchers have towards individuals and communities.

Second, norms and values are not fixed objects. Although there is broad consensus on certain basic values (‘do no harm’, for example), sensitivity to other values may change over time and can differ from society to society. For example, we have witnessed over the past several decades an increased concern with data manipulation and data storage, while the emergence of the internet has given rise to new questions regarding confidentiality. These concerns have led to stricter regulations in many countries. However, value differences mean the policies deployed to handle these considerations differ between China, the US, and many European countries, and this may complicate data sharing in the future (Box 3.1).

Box 3.1: The Ethics of Eating Disorder Research

The case detailed below, borrowed from Wassenaar and Mamotte (2012), questions the ethics of an eating disorder study conducted by a researcher from South Africa, who was interested in establishing the cross-cultural validity of the Eating Disorder Inventory (EDI), an instrument developed and standardized in the United States.

The EDI consists of 64 questions, clustered into eight subscales, designed to measure the severity of symptoms associated with the psychological conditions of anorexia nervosa and bulimia. The researcher sought to answer: does the EDI work equally well in ‘developing countries’?

To probe this question, the researcher aimed to have some 500 female university students fill out the EDI, and then to sample high- and low-scoring participants for follow-up interviews by skilled clinicians on a blind basis (neither the participant nor the interviewer would know the participant’s EDI score).

The aim of the study was to determine whether high and low EDI scores correlated with the clinicians’ estimates of the severity of the participants’ eating disorders. Participants would be compensated with ‘study credits’ (points awarded to students for participating in research).
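
To make the planned validation logic concrete, here is a minimal, purely illustrative Python sketch: it simulates EDI totals for a cohort, samples the extreme scorers, and correlates their scores with simulated blind clinician severity ratings. Every number, sample size detail, and effect size below is invented for illustration; nothing comes from the actual study.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical cohort: ~500 students complete the 64-item EDI.
# Totals here are invented; real EDI scoring follows the published manual.
n = 500
edi_total = rng.normal(loc=60, scale=15, size=n)

# Sample the extremes: the 50 lowest- and 50 highest-scoring participants.
order = np.argsort(edi_total)
sampled = np.concatenate([order[:50], order[-50:]])

# Blind interviews: clinicians rate severity without seeing EDI scores.
# Here the rating is simulated as noisily related to the hidden score.
clin_rating = 0.05 * edi_total[sampled] + rng.normal(0.0, 1.0, size=100)

# Validity check: do EDI scores and blind clinician ratings correlate?
r, p = pearsonr(edi_total[sampled], clin_rating)
print(f"Pearson r = {r:.2f} (p = {p:.1e})")
```

A strong positive correlation in such a design would support the instrument’s validity in the new population; note that sampling only the extremes inflates the correlation relative to the full cohort, which is one reason the blind clinical ratings matter.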

Though the wish to establish the cross-cultural validity of the EDI is legitimate, Wassenaar and Mamotte voiced several ethical concerns with this research project, three of which are highlighted below:

  • First, the community in which the research was conducted (a university campus) had neither been informed of nor approved the study. Research must respect community cultures and values, and therefore the researcher should have formed collaborative partnerships with women’s health groups on campus or with class representatives before undertaking the study.

  • Second, the participants had not been fully informed about the risks involved. Indeed, research on emotionally and socially sensitive topics like eating disorders can potentially induce harm, such as anxiety, painful self-discoveries, stress, indignation, and secondary traumatization. The ethical review board, whose task it is to flag such dangers and which had approved the research, underestimated the potential for emotional distress.

  • Third, there is the issue of dependency. Students are in an unequal power relationship with teachers and/or researchers. They may feel that a failure to participate can lead to disapproval, or that they are not free to withdraw their collaboration. Wassenaar and Mamotte suggest that a structured assessment instrument be used to evaluate the participant’s ability to consent to research and understand the voluntary nature of their decision to participate.

In conclusion, Wassenaar and Mamotte determined that there is a ‘need for special ethics scrutiny of mental health related research proposals involving students as research participants.’

Do you agree with this conclusion? Or do you believe that students do not differ from any other research population, and thus need no special ‘ethics scrutiny’ when they are involved in health-related research? Why or why not?

2 Conceptualizing Research Ethics

2.1 Responsible Research Conduct

From our previous discussions, we have learned that some researchers ‘play by the book’: they follow procedures and aim to make the right decisions based upon conventional wisdom. Others, however, break from these conventions, make the wrong decisions, and risk being accused of fraud. Following Steneck (2006), we will call the research practices of the former ‘ideal,’ and those of the latter ‘deplorable.’

Ideal research behavior takes into account existing norms, institutional standards, and international legislation. Deplorable research behavior violates these norms deliberately. These practices come in the form of Plagiarism, Falsification, and Fabrication, or PFF (see Chaps. 4, 5 and 6 in this book).

Deplorable research behavior is presumably rarer than ideal behavior, but these two extremes do not exhaust the possibilities. Between them lies a rather large ‘grey area’ of behaviors that are neither ideal nor deplorable, but ‘questionable.’ Questionable Research Practices (QRPs) violate the established norms, but not severely enough to qualify as ‘fraud.’

In later chapters we explore questionable practices in greater detail. For now, we observe that these practices confront us with a challenge: we must decide where to draw the line. What do we find acceptable, what do we reject, and why?

2.2 Research Ethics and Integrity

How do we distinguish between ideal and deplorable research practices? To answer this, we need to acknowledge another distinction, namely between research ethics and integrity.

First consider integrity, which we understand to be the quality of having strong moral principles, such as honesty or compassion. Macrina (2005, p. 1) notes how this term evokes an ‘image of wholeness and soundness, even perfection.’ The term is often used attributively, to characterize a person’s conduct as showing integrity. Accordingly, research integrity can be defined as ‘the quality of possessing and steadfastly adhering to high moral principles and professional standards, as outlined by professional organizations, research institutions and […] the government and public’ (quoted in Steneck 2006, p. 55). In research, questions of integrity often relate to methodological and procedural issues.

Next consider ethics. Compared with integrity, ethics has more to do with moral principles and questions of fairness, even justice. Research ethics concerns moral problems related to the practice of research involving living participants (animals as well as humans, individuals as well as groups, even entire societies). The focus here is on protecting participants, securing their interests and rights, assessing risks, and safeguarding confidentiality, among other issues.

Though both concepts emphasize different aspects of normative behaviors, what they have in common is:

  • some concept of (contested) normative rules;

  • some notion of communality;

  • some sense of (individual and collective) moral responsibility;

  • and some connection to behavior.

In this book, we investigate the dimensions of responsible research conduct: both the procedures and the principles, the abstract norms and the concrete behaviors (Fig. 3.3).

Fig. 3.3 Some idea of moral responsibility

2.3 Research Ethics and Professional Ethics

A final distinction that needs a brief exposition is that between research ethics and professional ethics.

Research ethics has to do with norms, values, and practices concerning the collection, analysis, and dissemination of scientific findings about the world. Professional ethics has to do with norms, values, and behaviors concerning the work of a practitioner (i.e., a therapist, counselor, educator, or policy maker) who intervenes in the world.

As a researcher, you are concerned with asking relevant questions, using validated methods, obtaining reliable data, and drawing logical conclusions. As a practitioner, you are concerned with making correct diagnoses, finding effective treatments, and measuring the effectiveness of interventions (among other tasks).

Each role involves a different set of normative rules and principles that outline desirable and undesirable behaviors. But this sharp division of labor does not always match reality. Here are two examples where a strict separation of the researcher’s and the practitioner’s responsibilities becomes problematic.

First, consider that researchers should be, by default, committed to the principle of anonymity, which means that data cannot be traced back to any individual. Then what should be done with unexpected findings that could be of great importance to the participant? Say, for example, that a researcher uses fMRI scans to investigate certain brain activities and by accident finds that one of the participants may have developed a tumor. Should the researcher break from the normative rules of research ethics like anonymity and assume some form of professional ethics in order to refer the participant to a specialist? Doing so would lead them to intervene in the world, as if they were a practitioner. Is this acceptable? Similar questions can be raised when there is suspicion of child abuse or marital violence (these questions on confidentiality are addressed in Chap. 7).

Second, consider the responsibilities of a researcher who collaborates with a third party (for example a governmental body, a professional organization, or a special interest group). While on one hand the researcher needs to maintain scientific objectivity, they may also need to take into account some of the concerns specific to that field or those of a specific organization, which can lead to conflicts of interest (this will be addressed in Chap. 8).

While this book focuses on research ethics, we will find time and again that cases overlap with aspects of professional ethics, and that in our deliberations, we cannot rely solely on research procedures to make the right call.

3 Codes of Conduct

3.1 Guiding Principles

Ethics (in a prescriptive sense) is reflection on what actions or behavior might be justified. This reflection can and often does result in normative rules or principles. Nevertheless, there is not one specific set of well-defined rules that specifies exactly which behavior qualifies as ethical.

First, a list of rules covering all possible ethical decisions across every possible situation would be endless.

Second, even if we had such a list, real life situations are complex and ambiguous, and can rarely be governed under just one principle or one rule.

Fortunately, there are guiding principles that can help us navigate our way through normative issues. These guiding principles are called ‘codes of conduct.’

Today, all universities require that their members (staff as well as students) adhere to such a code of conduct, often modelled after similar codes first introduced in the medical professions.

Codes of conduct can differ from discipline to discipline and even from culture to culture, though all share a number of notably important principles (see Box 3.2 for a list of shared values often found in academic codes of conduct).

In Europe, universities have largely committed to the European Code of Conduct for Research Integrity (revised edition, published in 2017). In just a few pages, this code outlines the four basic principles researchers should adhere to: reliability, honesty, respect, and accountability. It also outlines a general sense of ‘good research practices’ regarding (among other things) training, supervision, and mentoring of researchers, as well as the establishment of sound research procedures.

Many European countries have created their own codes of conduct building on this European code, defining in somewhat greater detail what is required of their researchers. For example, every university in the Netherlands has accepted the Netherlands Code of Conduct for Research Integrity (last updated in 2018), a 30-page document that details its founding principles, specifies the norms of ‘good scientific research practices,’ and stipulates the universities’ obligations with respect to issues such as training, supervision, data management, and procedures regarding scientific misconduct. Most universities, though not all, have created a special student version of their code of conduct, designed to outline proper behavior in class and on campus.

Promoting ethical policies and codes of conduct has become a major task of specialized bodies within institutes (see Iverson et al. 2003). In compliance with international regulations, most universities have established special Institutional Review Boards (IRBs) to safeguard ‘research ethics,’ about which we write in greater detail in Chap. 10. Furthermore, they have appointed ombudspersons (confidential intermediaries between individuals and their institution) and boards of complaints designed to safeguard ‘research integrity.’

Often these specialized bodies have statutory and disciplinary powers, meaning they can and will take action when ethical norms have been violated (for instance, the rights of participants or research subjects). Failing to comply with a code of conduct may indeed carry disciplinary consequences for the offender, ranging from an official reprimand or probation to discharge from one’s position (for staff) or removal from the institute (for students). Note how ethical considerations thus carry legal consequences. Although this aspect of research ethics is not elaborated upon further in this book, it is an important reality to keep in mind. We will return to the task of IRBs in the last chapter of this book.

Box 3.2: Shared Values in Scientific Research

  • Accountability: Be reliable and responsible with your research, from idea to publication.

  • Animal Care: Show proper respect and care for animals when using them in research. Do not conduct unnecessary or poorly designed animal experiments.

  • Carefulness: Try to avoid careless errors and negligence. Keep good records of your research activities, research design, and correspondence with agencies or journals.

  • Competence: Maintain and improve your own professional competence and expertise through lifelong education and learning. Take steps to promote competence in science as a whole.

  • Confidentiality: Do not disclose the personal information of research subjects, nor their identities. Protect sensitive information.

  • Honesty: Convey information truthfully. Honor commitments. Do not fabricate, falsify, or misrepresent data. Do not deceive colleagues, research sponsors, or the public.

  • Human Subjects Protection: When conducting research on human subjects, minimize harm and risks, and maximize benefits. Respect human dignity, privacy, and autonomy. Take special precautions with vulnerable populations. Strive to distribute the benefits and burdens of research fairly.

  • Legality: Know and obey relevant laws, institutional codes of conduct, and governmental policies.

  • Non-Discrimination: Avoid discrimination of anyone on the basis of sex, race, ethnicity, or other factors not related to scientific competence and integrity.

  • Objectivity: Strive to be impartial and avoid bias and self-deception. Disclose personal or financial interests that may affect your research practice.

  • Openness: Share your data, results, ideas, tools, and resources. Be open to criticism and new ideas.

  • Respect for Intellectual Property: Honor patents, copyrights, and other forms of intellectual property. Do not use unpublished data, methods, or results without permission. Give proper acknowledgement when credit is due.

  • Responsible Publication: Publish in order to advance research and scholarship, not only to advance your own career. Avoid wasteful and duplicative publication.

  • Social Responsibility: Strive to promote social good and prevent or mitigate social harms through research, public education, and advocacy.

[Adapted from Shamoo, A., & Resnik, D. (2015). Responsible Conduct of Research (3rd ed.). New York: Oxford University Press.]

3.2 Key Imperatives

Codes of conduct generally outline the subjects or issues we should pay attention to. They do not give us precise guidelines, as the task of ethics is not to specify exactly which behaviors are desirable and which are not. There are a few exceptions, however: certain behaviors that the scientific community agrees are desirable (and whose opposites are undesirable). These are called the imperatives of science. An imperative is a rule or principle considered crucial or decisive; it tells you where to draw the line. In the social sciences (and in science more generally), the following imperatives are widely accepted as fundamental to the practice of research and can be found in virtually any textbook on research ethics:

  • Avoid harm and do good. Researchers have an obligation to improve, promote, and protect the health of people and their communities. They must furthermore seek to avoid any harm done to human participants, or to animals, and must seek to minimize the risk thereof.

  • Respect for persons. Researchers must protect the autonomy of research participants. This imperative implies recognition of persons as autonomous, unique, and free subjects. It also means that researchers acknowledge that each person has the right and capacity to make their own decisions, including the right of non-participation.

  • Protect confidentiality. Participants must be able to trust that their data is processed anonymously (unless there is a reason not to, and the participant is notified thereof). No participant should suffer consequences from having participated in any research because certain personal information is made public. (A brief technical sketch of one way to de-identify data follows this list.)

  • Avoid deception. Participants may not be deceived, misinformed, or misled by researchers (unless there is good reason to, and the deception is debriefed afterwards). In line with the imperative of autonomy, human research participants should be considered capable of deciding whether they consent and to what it is they are consenting (Box 3.3).
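
As a small illustration of what ‘processing data anonymously’ can mean in practice, the Python sketch below replaces direct identifiers with salted one-way hashes before analysis. It is a simplification for teaching purposes, not a vetted privacy protocol; the identifier, field names, and twelve-character code length are arbitrary choices for the example. Strictly speaking this is pseudonymization: full anonymization additionally requires that no key to re-identification exists.

```python
import hashlib
import secrets

# One random salt per project. Store it separately and securely (it is
# the re-identification key), or destroy it if linking data back to
# individuals will never be needed.
SALT = secrets.token_hex(16)

def pseudonym(participant_id: str) -> str:
    """Replace a direct identifier with a salted, one-way hash code."""
    digest = hashlib.sha256((SALT + participant_id).encode("utf-8"))
    return digest.hexdigest()[:12]  # short code for the working dataset

raw = {"id": "jane.doe@university.example", "score": 71}
safe = {"id": pseudonym(raw["id"]), "score": raw["score"]}
print(safe)  # e.g. {'id': '3f2a9c0d71be', 'score': 71}
```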

Box 3.3: Bothersome Research: A Dilemma

You are conducting clinical research for which you need a large number of patients. Some of the patients are very ill, and it becomes clear they would prefer not to participate in the research at all. You respect this and conduct the research with healthier participants; after all, you reason, participation involves a certain amount of stress with no evident benefit to the patients. A couple of days later, you receive an email from your professor making it clear that you are behind schedule and should collect data from at least ten new patients before the end of the week. This would mean including the very ill patients, despite their wish not to be included. Your relationship with the professor is already strained because you failed to produce significant results in your last research project. What do you do?

  (a) Go back and thoroughly explain the importance of the research to the patients, asking again if they would participate.

  (b) Explain the patients’ situation to the professor and emphasize that they do have the right to refuse to participate.

  (c) Ask the professor to extend the period of data collection so you have time to search for other patients. You know he will not be pleased with the request.

  (d) Discuss the issue with the medical personnel, and request that they ask the patients again on your behalf.

[Case adapted with permission from Dilemma Game: Professionalism and Integrity, Erasmus University Rotterdam.]

4 Fundamental Dilemmas and Ethical Theories

4.1 The Need for Ethical Reflection

Thus far, we have discussed normative rules and moral principles, some shared values and ethical norms, and various imperatives that are intended to guide scientific research. But how should we understand the role of these norms, rules, and principles? Are they set in stone? Where do they derive from? What if they clash or fail to give adequate guidance?

For a start, it is important to recognize that even in a discipline governed by widely accepted norms and principles, researchers may find themselves confronted with ethical dilemmas. What, for instance, about the imperative to avoid harm and do good? This is in fact a pair of imperatives that in certain cases may come into conflict. Sometimes, it seems, some (potential) harm must be done to a research subject in order to do good overall.

For example, an intensive care unit at an academic hospital may drastically improve the quality of its care and the survival rate of its patients by gathering extensive data from critically ill patients. It is not, however, able to ask these patients for permission, so gathering and using the data violates the usual norms and guidelines regulating consent. Or consider a government-run statistical service that, by gathering traffic data, may be able to help improve the efficiency and safety of the transport infrastructure. Here, too, the people whose data is gathered could not possibly be asked for consent, which the statistical service’s code of conduct requires. Does the (potential) good achieved in these cases outweigh the harm done by gathering and using people’s data without consent?

Even more complex questions may arise. What, for instance, should we consider harm in the first place? Being manipulated to make a different choice than one would have otherwise made goes against autonomy, which many consider an important value. But what if this alternative option is objectively better for the research subject concerned, for instance because it is healthier or more cost-effective? Does the manipulation still count as harm? Or is it actually an example of doing good?

As these brief examples show, even when clear norms and principles are available for a research field, ethical questions still arise. Moreover, beyond ethical dilemmas stemming from incompatible or insufficiently clear guidelines, ethical reflection on norms and principles is important because we cannot blindly assume that all existing guidelines will always remain, or indeed are currently, ethically justified. Standards from the past have been revised in the light of new insights or developments and there is no reason to think that our current standards will suffice for the decades to come.

For all of these reasons, it is important for any researcher to be able to engage critically with the norms and principles of their own discipline. But where can someone looking for ethical guidance beyond established norms turn? This question pushes us deeper into the domain of ethics in the sense of reflection on what actions might be justified. Obviously, the scope of this chapter does not allow for an in-depth discussion of ethical theories. However, a first sense of some of the approaches that inform research codes of conduct, norms, and principles might be useful.

4.2 Deontology Versus Consequentialism

Many principles in research codes of conduct follow (in terms of ethical theory) a deontological approach. In order to understand what this approach entails, it is helpful to contrast it with its main rival: consequentialism. According to consequentialists, we must judge rules or particular actions by the specific consequences of these rules or actions. Many consequentialists in addition hold that the way in which we ought to judge consequences is by the question of how much good (or well-being) the action or rule would produce for all involved parties combined. In other words: the best (or even the only justifiable) action or rule would be the one that results in the most good overall. A consequentialist might therefore, to give an extreme example, judge that the right thing to do is to sacrifice one person in order to save or help many (assuming that this sacrifice would indeed result in the most good overall).

In contrast, one of the core convictions of deontologists is that there are certain things we must always or may never do, regardless of whether deviating from these norms might have the best result in a particular situation. This conviction is something that is quite easily recognizable in many of the norms, rules, and imperatives discussed above. Informed consent must always be obtained, regardless of whether your results might be better if you didn’t. Research subjects must always be debriefed, even if this makes it harder to do a second run of the same experiment. You may never break confidentiality, even if disclosing the personal data of your research participants to a third party would enable you to generate extremely interesting results.

The use of ‘always’ and ‘never’ conveys a close alignment with the deontological approach. You might think, however, that the fact that research codes of conduct are (superficially) deontological in character is ultimately (or should be) the result of consequentialist reasoning: if every individual researcher were to set their own rules based on their own judgement, the result would be a mess, public distrust in science, or other unfavorable outcomes. The conviction that a discipline ought to abide by strict norms and imperatives for its research conduct can therefore be motivated by either deontological or consequentialist approaches.
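
The structural difference between the two approaches can be caricatured in a few lines of code. The sketch below is a deliberately crude toy with invented actions and invented utility numbers, not a claim that ethical judgement reduces to arithmetic (nor that deontological reasoning is merely filtering-then-maximizing): a consequentialist rule picks whichever action maximizes the summed good, while a deontological rule first strikes out impermissible actions, whatever their payoff.

```python
# Invented utilities for three possible actions, one number per affected
# party (say: researcher, science, participants). Purely illustrative.
options = {
    "run study with deception":    [+6, +6, -3],
    "run study with full consent": [+3, +3,  0],
    "do not run the study":        [ 0,  0,  0],
}

# Actions a deontologist rules out categorically (e.g., deceiving
# participants), regardless of how good the outcome would be.
forbidden = {"run study with deception"}

def consequentialist_choice(options):
    # Choose the action that produces the most good overall.
    return max(options, key=lambda a: sum(options[a]))

def deontological_choice(options, forbidden):
    # First filter out impermissible actions, then choose among the rest.
    permitted = {a: u for a, u in options.items() if a not in forbidden}
    return max(permitted, key=lambda a: sum(permitted[a]))

print(consequentialist_choice(options))           # run study with deception
print(deontological_choice(options, forbidden))   # run study with full consent
```

The toy makes visible where the disagreement lives: in the consequentialist rule, everything is negotiable against the totals; in the deontological rule, some options never enter the comparison at all.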

Furthermore, you might think that strict adherence to a set of imperatives is neither helpful nor desirable. You may, for instance, wonder whether it may sometimes be right to forgo informed consent if the expected consequences of doing so promise to be very good and the harm seems relatively small (such as in the example of the intensive care unit above). This is the sort of reflection that can be helped by a better grounding in ethical theory.

Both consequentialism and deontology have a wide following amongst ethicists. What is important to remember for the purposes of this chapter is that there is significant discussion not only on whether our ethical decisions should follow deontological or consequentialist principles, but also on which actions each set of principles would call for. Should deceiving a research participant in all forms and circumstances be prohibited, according to deontology? It’s an open question. Should a consequentialist accept medical interventions on a test subject against their will if this is very likely to improve the life expectancy of many others? It’s debatable. These theories are helpful instruments in considering such questions, but they do not settle them definitively in any simple way.

4.3 Virtue Ethics

Finally, it is worth looking briefly at a third approach, known as virtue ethics. Virtue ethicists offer a different perspective on research ethics, for they might say that instead of relying too much on codes of conduct and lists of rules, it is important for researchers to cultivate relevant virtues such as honesty, reliability, humility, and conscientiousness. There are different reasons for this, but a central one is that, as we have seen, norms and imperatives can clash and often do not determine exactly what is required in any given situation. For this reason, virtue ethics places a strong emphasis on judgement and so-called ‘practical wisdom,’ which allows one to decide exactly what action the set of virtues calls for in a specific context. So, unlike a researcher merely abiding by lists of rules who is stumped whenever that list does not suggest a clear and definitive course of action, a virtuous and practically wise researcher would be able to judge what their virtues require of them in specific situations. The downside of this approach is that cultivating such virtues and the practical wisdom to apply them is not an easy thing to do – it is a long and difficult process that involves a lot of practice. Virtue ethicists correspondingly would attach great importance to education in these virtues and good examples being set by other members of the profession.

4.4 Ready-made Solutions?

From our discussion, it has become clear that ethical questions may arise in any research context and that codes of conduct, lists of values, and imperatives do not provide ready-made answers. While ethical theories can help us reflect on the arguments and considerations underlying possible courses of action, they do not solve ethical questions all by themselves. Codes, rules, and procedures provide us with indications, and in some cases strong indications, of what to do. At the end of the day, however, a moral responsibility rests on the shoulders of the researcher to justify their actions (which choices they made and on which grounds) and to explain them to others (other researchers, participants, the community). This task becomes all the more important as society demands accountability.

5 Conclusions

5.1 Summary

In this chapter, we have familiarized ourselves with research ethics in a general sense. We have come to understand it as the application of normative rules or principles, such that you know how to behave in a responsible way. We distinguished between ‘ideal’ and ‘deplorable’ practices in research and differentiated between research ethics proper (which applies to norms, specifically with regard to working with living participants) and research integrity (which is defined as maintaining high moral principles and professional standards).

We have learned, furthermore, that research ethics requires sensitivity and responsiveness on the part of the researcher, who must be aware that the ethical dimensions of their work can be highly contested.

We have also established that ethics implies an obligation on the part of the academic community to provide the guiding principles and institutional imperatives that allow research to take place at all. These guiding principles translate into specific codes of conduct that aim to formalize desirable behavior and prevent undesirable behavior.

Finally, we discussed the unavoidability of reflecting on ethical questions surrounding research conduct. We established that ready-made answers are provided neither by codes of conduct, disciplinary norms, and broadly shared imperatives, nor by fundamental ethical theories; both leave us with the responsibility to develop our own stance. In short, and returning to where we started, there is a profound message in the observation that ethics poses a significant challenge, for which we must prepare ourselves.

5.2 Discussion

Two issues have remained unresolved in this chapter. One is the balance between individual responsibility and institutional responsibility. Where does your personal responsibility end, and where does that of your institution begin? The other is practical: how do you develop this sense of responsibility? We hope to help you answer these two questions in the remaining chapters of this book, as we discuss the most important ethical considerations of research practice step by step.