Introduction

Scientific research is widely recognized as benefitting society in many ways. Research generates knowledge that advances our understanding of the world and may lead to practical applications in technology, industry, medicine, engineering, criminal justice, the military, and public policy [16]. The practical applications of science can also produce economic growth and prosperity [16, 50]. Political leaders allocate billions of dollars of public funds per year to scientific research under the assumption that this expenditure will benefit society [33]. Even though most people have come to expect that scientific knowledge has the potential to benefit society, few would regard this as an ethical requirement for conducting any type of research. For example, if a philanthropist invests millions of dollars in a research project designed to satisfy his private curiosity but not likely to yield significant benefits for society, few people would regard this as unethical, provided that it is not fraudulent or harmful. The research might be wasteful or silly, but not unethical [40].

The moral landscape regarding the social benefits of research is very different when it involves human participants, however. A recent dispute about the ethics of privately-funded pesticide experiments involving human participants illustrates the concerns people sometimes raise about the social benefits of research (or lack thereof). In the late 1990s, chemical companies conducted studies in the US and UK in which healthy human participants ingested small quantities of pesticides. The studies measured levels of these chemicals in the blood and urine and collected data pertaining to pharmacokinetic and toxic effects. The purpose of these experiments was to produce evidence to convince the Environmental Protection Agency (EPA) to increase the allowable levels of pesticide residues on foods. The EPA had been relying on data from animal experiments to make these determinations [33]. Several commentators and environmental groups asserted that these studies were unethical because they were designed to yield knowledge that would benefit the companies but not society [26]. Others argued that evidence from well-designed pesticide experiments on human subjects (not necessarily the disputed ones) could benefit society by giving the EPA information useful for making regulatory decisions concerning pesticides [34].

Some commentators have criticized post-marketing drug trials on the grounds that they are little more than promotional devices that benefit companies but do not generate socially valuable knowledge [43]. A post-marketing drug trial is a study conducted after a regulatory agency, such as the Food and Drug Administration, has approved the product for marketing. Post-marketing studies attempt to gather additional information concerning the drug’s safety and efficacy in various populations and often enlist the aid of hundreds of physicians in data collection. Critics have argued that companies often use these studies to encourage physicians to prescribe their new medications, and that these studies do not produce socially valuable knowledge [43]. Others have argued that properly-designed post-marketing studies can yield knowledge that promotes safe and effective use of prescription drugs [22].

In this paper, I will critically examine the idea that research involving human subjects should benefit society, also known as the social benefits principle [6, 11, 18, 25, 48]. I will argue that while the expectation of public benefit is an important criterion for evaluating research with human subjects, it is not a necessary condition for regarding it as ethical, unless the research uses public resources or imposes more than minimal risks on non-consenting subjects. Privately funded research conducted in private settings with consenting subjects may be regarded as ethical, even if it is not likely to benefit the public, as long as it meets other widely accepted criteria, such as rigorous scientific design, risk minimization, equitable subject selection, and protection of confidentiality/privacy.

The Social Benefits Principle

A version of the social benefits principle (SBP) first appeared in the Nuremberg Code, which was developed by judges at the Nuremberg War Tribunals to serve as a source of international law for prosecuting Nazi doctors and scientists accused of heinous war crimes against concentration camp prisoners [27, 41]. The second principle of the Code states that “The experiment should be such as to yield fruitful results for the good of society, unprocurable by other methods or means of study, and not random and unnecessary in nature [27 at 1].” The Nuremberg judges formulated this principle in reaction to Nazi experiments with questionable social value, such as Josef Mengele’s notorious experiments on twins [1].

Other ethics guidelines state or imply that research with human subjects should benefit society. For example, The Belmont Report, which provided a conceptual foundation for a major revision of the U.S. federal regulations for research with human subjects in 1981, states that research should “maximize possible benefits and minimize possible harms,” where benefits and harms may “affect the individual subjects, the families of the individual subjects, and society at large [25 at 5, 7].” Although the federal regulations do not mention social benefits per se, they state that an institutional review board (IRB) can approve research only if it determines that “risks to subjects are reasonable in relation to anticipated benefits, if any, to subjects, and the importance of the knowledge that may reasonably be expected to result [4 at 46.111a2].” The Helsinki Declaration and the Council for International Organizations of Medical Sciences (CIOMS) guidelines also state that the risks of research must be justified by the importance of the knowledge expected to be gained [3, 49]. Most commentators have understood the “importance of the knowledge expected to be gained” in terms of its potential impact on society [6, 18, 25].

Although the idea that research with human participants should benefit society has become firmly entrenched in various regulations, policies, and guidelines, the bioethics literature includes little in-depth analysis of the SBP or its ethical justification. London observed that disputes about the social benefits of research can be difficult to resolve because people have conflicting conceptions of the common good, but he did not examine the philosophical foundations of the social benefits principle [19, 20]. Several years ago I published an article in which I proposed a method that IRBs can use to assess social benefits systematically, but I did not examine the philosophical justification for the social benefits principle [31]. Rid and Wendler proposed a framework for assessing risks and benefits that addresses benefits to society, but they also did not examine the justification of a social benefits requirement in depth [35]. Habets and coauthors argued that expected social benefits are a necessary condition for conducting Phase I studies in which healthy volunteers will receive no direct medical benefits, but they did not examine the rationale for a social benefits principle beyond Phase I studies [11]. A notable exception to this trend is Wertheimer, who recently examined the philosophical justifications of the idea that research with human participants must have social value to be ethical [48]. I will refer to some of Wertheimer’s critiques of the SBP in this paper.

What are Social Benefits?

So what, exactly, is the social benefits principle? Before answering this question, it will be useful to define “social benefit.” A benefit is something that is regarded as good or valuable. Some benefits, such as happiness, pleasure, or well-being, are viewed as valuable for their own sake (i.e. they are viewed as inherently valuable), while others, such as money or wealth, are viewed as valuable because they enable us to obtain other goods (i.e. they are viewed as instrumentally valuable). Some benefits may be regarded as both inherently and instrumentally valuable. For example, one might value health for its own sake and because it enhances one’s ability to obtain employment, education, opportunities, and other things that one values [10, 28, 30, 37, 38]. One might value knowledge for its own sake and because it has practical applications in fields such as medicine, engineering, and public health.

A social benefit is something that benefits society. If we understand a society to be a group of individuals living together under a common political organization, then benefits that accrue to individuals also benefit society, because the social good is nothing more than the aggregation of individual goods. Benefits to private corporations also benefit society because corporations are composed of individuals. For example, if I personally benefit from a job as a construction worker and the construction company benefits from a construction contract, then society also benefits.

Most people probably do not view these aforementioned benefits as “social” benefits. If you ask someone to name some social benefits, they will probably mention things that benefit many people in society, such as bridges and roads, vaccination programs, the police, and economic development. Critics of the privately funded human studies mentioned above probably did not consider corporate profits to be a social benefit of the research; they thought the studies should have produced something that could have benefitted other people in society, such as scientific knowledge with useful applications in medicine or public health [6, 7].

To make sense of this common understanding of “social” benefit it will be useful to distinguish between internal (or private) and external (or public) benefits [37, 38]. For example, if the government hires a private company to build a bridge across a river, the employees and the company benefit but so do people who use the bridge as well as the local community, which may benefit from the economic activity generated by the bridge. The benefits to the employees and the company from the bridge construction are internal benefits, and the benefits to other people and the local community are external ones [27]. For the purposes of this paper, I will understand “social benefits” to mean “public benefits.”

To illustrate this interpretation of social benefit, consider two studies. Study A is a survey conducted by a company to gather information pertaining to consumer opinions and preferences related to products manufactured by the company as well as products manufactured by competitors. Participants will receive $25 for completing the survey. If this survey is likely to produce significant benefits for the company, its employees, and the participants but no benefits to other people, we could say that the research is not likely to have social (i.e. public) benefits. Study B is a clinical trial sponsored by a private company to test a vaccine for the Zika virus. The trial is likely to benefit the company and its employees as well as other people by producing knowledge essential to the development of a vaccine that prevents infection with the virus. We could say that Study B is likely to have social benefits.

Now that we have defined social benefits, we can distinguish between different versions of the SBP. I will define a strong version of the principle as follows:

(SBP/strong): We should regard research with human subjects as ethical (or moral) only if we reasonably expect that it will produce results that substantially benefit the public.

By “reasonably expect” I mean a judgment that an event or set of circumstances is likely to occur: a reasonable expectation is therefore a probability judgment. We could therefore distinguish between stronger and weaker variants of the strong version of the principle by incorporating different probability values into it. For example, a very strong variant would require a probability (p) > 0.95; weaker variants would require p > 0.75, p > 0.50, p > 0.25, etc.
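Put schematically, and only as a gloss of my own rather than a formula drawn from any regulation or guideline, this family of strong principles can be written with a probability threshold p*:

\[
\mathrm{SBP_{strong}}(p^{*}):\quad S \text{ is ethical} \;\Rightarrow\; \Pr(\text{substantial public benefit} \mid S) > p^{*}, \qquad p^{*} \in \{0.95,\; 0.75,\; 0.50,\; 0.25,\; \ldots\}
\]

where S is the proposed study. The substantive work lies in judging what counts as “substantial” and in estimating the probability, not in the notation itself.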

I include the qualifier “substantial” in the definition of the strong version of the principle because those who object to studies like the ones mentioned above would probably still view research with human subjects as unethical if it produces results with only marginal public benefits. I will leave “substantial” undefined, since I will assume that those who debate the social benefits of research are disputing whether potential benefits are substantial and have some notion of “substantial” in mind.

It is important to note that benefits to research subjects do not count as public benefits, since these benefits are internal to the research. For example, if a subject earns money for his or her participation or receives medical treatment, I would not consider this to be a public benefit. Benefits to subjects, investigators, sponsors, and institutions involved in the research are private benefits.

It is also worth noting that this principle makes no mention of the distribution of benefits in society. Benefits of research might accrue to people who have a particular disease, members of a community, and so on. I will set aside distributional issues for now and return to them in the discussion of exploitation below.

According to SBP/strong, an IRB should not approve a study unless it reasonably expects the study to substantially benefit the public. As one can see, this is a very demanding principle since it would imply that we should regard a private company’s marketing research as unethical if it is expected to benefit the company and its employees but not the public.

I will also consider a version of the SBP that does not treat expected public benefit as a necessary condition for judging research to be ethical. According to a weak version of the principle:

SBP/weak: The reasonable expectation of substantial public benefit is one among several criteria that we should use to determine whether research with human subjects is ethical (or moral), but it is not a necessary condition for judging research to be ethical.

In addition to expectation of substantial public benefit, other well-established criteria for determining whether research with human subjects is ethical include: rigorous scientific design, informed consent, risk minimization, benefits to human subjects, privacy and confidentiality protection, and equitable subject selection [4, 9, 10]. According to the weak version of the SBP, an IRB should consider the expected public benefits of a proposed study when evaluating it, but could approve a study even if it does not expect it to yield substantial public benefits as long as the study meets other ethical criteria. The weak version of SBP, like the strong one, also does not mention how public benefits are distributed.

According to the SBP/weak, expected public benefit is a desirable moral characteristic of research with human subjects, but a study could be ethical even if it lacks this quality. By analogy, one may consider a variety of factors when purchasing an automobile, such as price, fuel economy, and reliability. Each of these factors could affect the value of the car to you, though none would be a necessary condition for judging it worth purchasing. You could decide to buy a car with poor fuel economy because it is reliable and reasonably priced.

To understand the difference between the strong and weak versions of the principle, consider a study in which investigators plan to randomly assign non-pregnant healthy volunteers to receive either an experimental vaccine for the Zika virus or a placebo. All volunteers will be exposed to mosquitos infected with the virus. The study will impose significant risks on the subjects, such as the risks of contracting the virus and the risks of exposure to the vaccine. Subjects will be enrolled only if they give their informed consent and have a negative pregnancy test (if female). They will be paid $1000 for their participation. Subject privacy and confidentiality will be protected. The investigators will monitor subjects for adverse health effects and provide them with supportive care if they become infected. The Zika virus poses a serious risk to fetuses (it can cause microcephaly) but usually produces only a mild infection in healthy adults (i.e. fever, rash, joint pain) that resolves within a week [2]. The investigators will advise female subjects to avoid becoming pregnant for 60 days after their participation in the study ends, to prevent any potential harm to fetuses. According to the SBP/strong, we should judge this study to be ethical only if we reasonably expect it to produce substantial public benefits. According to the SBP/weak, we could judge the study to be ethical even if we do not reasonably expect it to produce substantial public benefits, provided that we determine that it meets other ethical criteria.

Justifying the Social Benefits Principle

Having distinguished between a strong and weak version of the social benefits principle, I will now consider different arguments for it, and whether any of these justify stronger or weaker versions of the SBP (see Table 1).

Table 1 Justifications for the social benefits principle

Risk Imposition

Perhaps the most frequently cited rationale for the SBP is that research must have substantial public benefits to justify imposing risks on human participants [6, 10]. As noted above, the Helsinki Declaration and CIOMS guidelines imply that the risks of research are justified, in part, by the expected public benefits of the research. While the risk imposition argument appears to be a sound rationale for holding that research with human subjects generally ought to have substantial public benefits, it does not support a strong version of the SBP. Most ethical theories hold that it is permissible for competent, consenting adults to participate in risky activities that do not produce public benefits, provided that the activities do not place others at significant risk of harm [8, 9]. We allow competent, consenting adults to skydive, climb mountains, use tobacco and alcohol, shoot firearms, and engage in other risky activities that do not benefit the public. Outside of the research context, a person’s consent to an activity, not the public benefits of the activity, would seem to be the main consideration in the ethical evaluation of risk-taking behavior that does not impose significant risks on others. Placing restrictions on consensual risk-taking in research is paternalistic [24, 33]. If we find such restrictions objectionable outside of the research context, why should we endorse them in research involving human participants?

The risk imposition argument makes more sense when research participants cannot provide consent. Most ethical theories would hold that we have an obligation not to harm others or place them at unreasonable risk of harm [8, 9]. If research participants are incapable of providing consent due to age, mental disability or some other factor, then their participation in research becomes ethically problematic, because they would be subjected to risks that they have not consented to. In these situations, a legally authorized representative (LAR), such as a parent, guardian, health care agent, or close family member, may consent for the participant. The participant may be asked to assent to participation in the research, if he or she is capable [4, 16]. LARs have moral—and legal—obligations to make decisions that promote the best interests of the individuals they are consenting for and to not expose them to unreasonable risks [16].

Most research regulations and guidelines hold that participants who cannot provide consent can participate in three types of research: (1) research that poses minimal risks; (2) research that poses more than minimal risks but has the potential to benefit the participants and the public (e.g. clinical trials); and (3) research that poses more than minimal risks and offers the participants no significant benefits but has the potential to yield knowledge that offers substantial benefits to society [4, 18, 40, 44, 45]. The first two types of research have been less ethically controversial than the third, because the first does not impose significant risks on participants (see footnote 1) and the second offers participants potential benefits that offset the risks they face, i.e. it is in their best interests [36, 46].

The third type of research has been more controversial because it involves imposing significant risks on individuals without their consent and without the expectation of any compensating benefits [13, 36, 44, 45]. Some have argued that risk exposure in these types of studies can be justified in order to benefit other individuals who have similar medical conditions [13, 17]. Others have argued that all people, including those who cannot provide consent, have a moral obligation to contribute to society and that they can honor this obligation by participating in research. If risky pediatric research offers important benefits to other children, for example, then parents can consent for their children to allow them to fulfill this social obligation [23]. Though some (e.g. Ramsey [29]) have rejected both of these arguments, most commentators agree that the social value of the research is a valid reason for imposing more than minimal risks on non-consenting research participants [17, 23, 36, 45]. Assuming that this view is correct, we can say that the risk imposition argument justifies a strong version of the SBP when the risks of research are more than minimal and the participants are non-consenting, but not when the research is minimal risk or the subjects are consenting.

Beneficence

Another frequently cited rationale for the social benefits principle is the argument from beneficence [6, 15, 25]. Various ethical theories, including Kantianism, utilitarianism, Christian ethics, and virtue ethics, support a moral duty to benefit others. For example, if you see someone drowning in a pool, then you should do something to help them, such as throwing them a life-preserver or calling a lifeguard. According to the beneficence argument, investigators, sponsors, and institutions should fulfill their duties of beneficence by conducting research with human subjects that is likely to substantially benefit the public. IRBs should also consider the public benefits of proposed studies when reviewing them [10].

While the beneficence rationale has some intuitive appeal, one might argue that the obligation to help others is not compelling enough to support a strong social benefits principle in research. Most ethicists recognize that the duty to help others is not an absolute moral obligation [26]. Kant’s distinction between perfect and imperfect duties captures this idea [14]. A perfect duty is one that we must always fulfill, while an imperfect duty is one that we may refrain from fulfilling when it conflicts with other duties. Kant held that we have an imperfect duty to help others. For example, suppose that you have promised to meet a friend for dinner and, while driving to meet your friend, you encounter a stranger who wants a ride to work. If you give the stranger a ride, you will break your promise. A Kantian could argue that your duty to keep your promise overrides your duty to help the stranger. Kant also held that we have an imperfect duty to develop our own talents and abilities [14]. In some cases, we might decide to refrain from helping someone else in order to develop our own talents and abilities. For example, I could use extra money from my paycheck to attend an educational seminar instead of giving it to charity.

The idea of supererogatory conduct can also help us think about the limits of the duty of beneficence. Supererogatory conduct involves doing more than what is morally required, or going above and beyond the call of duty [10, 15]. For example, suppose that a charitable organization asks you for $100 to help feed starving children in Africa. You could give the organization the money, but if you do, you may not have enough left from your paycheck to buy your own food. Most theorists and laypeople would say that giving the charitable organization $100 would be a good thing to do, but it would not be morally obligatory. You could give the organization less money or no money at all without acting unethically. Beneficence only requires you to strive to help others, not to place your own welfare at risk for others (see footnote 2). Beneficent actions are often morally supererogatory but not morally required [14, 41] (see footnote 3).

Applying these insights to the public benefits of research, one could argue that while it is morally desirable to plan and design research with human subjects so that it is likely to benefit the public substantially, the expectation of public benefit is not a necessary condition for regarding a study as ethical, because one may take other factors into account. For example, an IRB might determine that a post-marketing drug study which is not expected to provide the public with substantial benefits is still ethical because it is well designed, likely to benefit the subjects, investigators, and sponsors, and so on. Thus, the duty of beneficence supports a weak version of the SBP but not a strong one.

Prudent Use of Public Resources

Another potential justification for a social benefits principle follows from the obligation to make prudent use of public resources [6, 10, 18–20]. Scientists often receive public resources to conduct research, including funding for projects, support for personnel, and the use of laboratories, buildings, materials, and equipment paid for with public money. One could argue, therefore, that scientists have an obligation to make prudent use of the resources they receive from the public and that this obligation implies a duty to conduct research that is expected to provide substantial benefits to the public. This obligation applies to all scientists who receive public resources, not just to those who conduct research with human subjects [4]. Thus, publicly-supported scientists should consider the potential public benefits of their work when they plan and design research, and IRBs at public institutions should consider public benefits when they review research proposals.

The prudent use of public resources argument is a compelling rationale for a strong principle of social benefit, but it only applies to government-funded research or research that uses public facilities, equipment, materials, or personnel; it does not apply to privately-funded research conducted in private settings [48]. Government agencies that fund research with human subjects have an obligation to ensure that the projects they support are likely to substantially benefit the public. Likewise, state-supported, public universities have a similar obligation. However, private companies and private universities have no obligation to make prudent use of public resources when conducting research using their own funds, staff, or facilities, because the resources they are using are private, not public.

Reciprocity

Although the prudent use of public resources argument only applies to government-supported scientists, a spin-off of this argument arguably applies to all scientists, including privately-supported ones. One could argue that private companies and universities have an obligation to conduct research that substantially benefits the public even when they use their own facilities, personnel, equipment, materials, or funding, because they have benefitted from public investments in the research enterprise [32, 40]. Most scientists and research staff who work for private companies or universities have attended public universities or have conducted research supported by public funds at some point in their careers. They also take advantage of published data, results, materials, and other resources generated by publicly-funded research. Thus, all scientists and research institutions have an obligation to conduct research that benefits society, so that they can repay the public for its investments in science [32, 40].

While the reciprocity argument provides a sound rationale for publicly-supported or privately-supported scientists to conduct research with human subjects that substantially benefits the public, it does not support a strong version of the SBP, because scientists and institutions have numerous options for compensating the public for its investments in science, and the argument does not specify the form that reciprocity should take. Scientists could benefit the public by teaching and mentoring students, giving lectures to the community, serving on government advisory panels, participating in policy debates, or conducting research that does not use human subjects [40]. Institutions could benefit the public by supporting these activities. Thus, while the reciprocity argument implies that it would be ethically desirable for scientists who conduct research with human subjects to ensure that their work is likely to substantially benefit the public, it does not imply that the expectation of substantial public benefit should be a necessary condition for regarding such research as ethical.

Avoiding Exploitation of Communities or Nations

Exploitation involves taking unfair advantage of a person or group of people in a transaction or relationship [47]. Exploitation may occur even when both parties consent, if one party derives an unfair share of the benefits of the relationship or transaction. Numerous commentators have argued that some clinical studies conducted in developing nations since the 1990s have exploited host communities or countries because they benefitted the sponsors and institutions but not the hosts. For example, critics have claimed that pharmaceutical companies have profited unfairly from clinical trials of drugs conducted in developing nations because these drugs were not available to people living in those nations after the trials were completed, due to high costs or marketing issues [12, 39]. Critics have charged that these drug trials benefitted the companies but not the local communities or nations. According to the exploitation avoidance argument, research sponsors or institutions should offer substantial benefits to host communities or nations when planning, designing, or implementing research [12, 39]. IRBs should not approve research conducted in these settings if it does not offer substantial benefits to host countries or communities [5].

To evaluate the exploitation avoidance argument, we should observe that the SBP, as it has traditionally been understood, does not identify the society (or public) that should benefit from research. Accordingly, my definitions of the SBP (strong and weak) do not specify which public should benefit from research; they only state that research should be expected to benefit the public, not how the benefits should be distributed. Thus, a clinical trial which offers substantial benefits to members of the public living in the country that sponsored the study, but not to residents of the country that hosted the study, would not violate the SBP; nor would a study that benefits people living in one part of a country but not another. The exploitation avoidance argument is therefore an application of the SBP that addresses issues concerning the distribution of benefits among private and public actors [5]. As such, it raises complex issues concerning international and intra-national justice which are beyond the scope of this paper [12, 45, 46]. However, assuming, for the sake of argument, that researchers, sponsors, and institutions have an obligation to avoid exploiting host communities or nations, they can meet this obligation without conducting research that is expected to yield knowledge that benefits the hosts. For example, a sponsor could avoid exploiting a host community by providing free medical care to community members or funding a new elementary school, health clinic, or hospital [5]. Thus, the exploitation avoidance argument does not support a strong version of the SBP. It does, however, support a weak version, since it implies that investigators, sponsors, and institutions should take steps, such as conducting research that benefits the host nation or community, to avoid exploitation.

Promoting Public Trust

London [20, 21] and Rid and Wendler [35] have argued that research with human subjects should benefit the public in order to promote the public’s trust in investigators, institutions, sponsors, and the scientific enterprise. The public expects research with human subjects to substantially benefit society, and studies which do not benefit the public violate this trust. Public trust is important for recruiting human subjects and obtaining the public’s financial support [32]. Wertheimer critiques the public trust argument by claiming that many subjects are motivated to participate in studies to benefit themselves, not the public. For example, subjects may participate in studies in order to obtain medical treatment, access to experimental medications, or money [48]. While Wertheimer makes a valid point, many subjects also participate in research for altruistic reasons, and they expect the studies they join to benefit society. Moreover, Wertheimer’s point does not address the public trust argument’s key assumption: that the public expects research to benefit society when it provides financial support for studies involving human subjects.

While the public trust argument is a compelling rationale for ensuring that research with human subjects is expected to yield substantial public benefits, it does not justify a strong version of the SBP, since the public’s trust does not depend on the design or outcome of any particular study. Lack of public benefit from a single study involving human subjects is not likely to have a noticeable impact on the public’s trust in science. Thus, the public trust argument does not imply that the reasonable expectation of substantial public benefit is a necessary condition for regarding research with human subjects as ethical. However, the public’s trust may be significantly eroded if most studies involving human subjects do not substantially benefit the public. Thus, the public trust argument does support a weak version of the SBP, since it implies that, in general, studies involving human subjects should be expected to substantially benefit the public.

Conclusion

In this article I have examined the ethical foundations of the idea that research involving human subjects should benefit society (or the public). To clarify this ethical requirement, I have distinguished between a strong and a weak version of the social benefits principle (SBP) for research involving human subjects. According to the strong version of the principle, we should regard research with human subjects as ethical only if we reasonably expect that it will produce results that substantially benefit the public. According to the weak version, the reasonable expectation of substantial public benefit is one among several criteria that we should use to determine whether research with human subjects is ethical, but it is not a necessary condition for regarding research as ethical. I have also considered six different rationales for the SBP, including arguments which appeal to: risk-imposition, beneficence, prudent use of public resources, reciprocity, exploitation avoidance, and public trust. I have argued that the risk-imposition, beneficence, reciprocity, exploitation avoidance, and public trust rationales support a weak version of the SBP and that only risk-imposition and prudent use of public resources support a strong version. The reasonable expectation of substantial public benefit is a necessary condition for regarding research with human subjects as ethical only when (a) a study imposes more than minimal risks on non-consenting subjects; or (b) a study is supported by public resources.

None of the foregoing implies that the assessment of social benefit has no place in the ethical evaluation of research with human participants, since several arguments support the application of a weak version of the SBP to all human studies. Investigators, sponsors, and institutions should always address expected public benefits when planning or designing research, and IRBs should always consider these benefits when evaluating proposed studies. However, the reasonable expectation of substantial public benefit is not always a necessary condition for viewing research with human subjects as ethical. Research conducted in private settings with private resources and adult subjects could be viewed as ethical even if it is not expected to substantially benefit the public, provided that it meets other ethical criteria, such as rigorous scientific design, risk minimization, informed consent, equitable subject selection, and protection of confidentiality/privacy.