Medicine, Health Care and Philosophy

Volume 18, Issue 1, pp 23–32

Unequal treatment of human research subjects

Scientific Contribution

Abstract

Unequal treatment of human research subjects is a significant ethical concern, because justice in research involving human subjects requires equal protection of rights and equal protection from harm and exploitation. Disputes sometimes arise concerning the issue of unequal treatment of research subjects. Allegedly unequal treatment occurs when subjects are treated differently and there is a genuine dispute concerning the appropriateness of equal treatment. Patently unequal treatment occurs when subjects are treated differently and there is not a genuine dispute about the appropriateness of equal treatment. Allegedly unequal treatment will probably always occur in research with human subjects due to disagreements about fundamental questions of justice. The best way to deal with allegedly unequal treatment is to promote honest and open discussions of the issues at stake. Research regulations can help to minimize patently unequal treatment by providing rules for investigators, ethical review boards, institutions, and sponsors to follow. However, patently unequal treatment may still occur because the regulations are subject to interpretation. Federal agencies have provided interpretive guidance that can help promote consistent review and oversight of human subjects research. Additional direction may be needed on topics that are not adequately covered by current guidance or regulations. International guidelines can help promote equal treatment of human subjects around the globe. While minor variations in the treatment of research subjects should be tolerated and even welcomed, major ones (i.e. those that significantly impact human rights or welfare) should be avoided or minimized.

Keywords

Human subjects research · Justice · Equal treatment · Ethics · Regulation

Introduction

Allegations of unequal treatment of human subjects have taken center stage in some recent ethical controversies. For example, Lurie and Wolfe (1997) and Angell (1997) argued that perinatal (mother–child) human immunodeficiency virus (HIV) prevention trials conducted in developing countries that included placebo control groups were unethical because they amounted to a double-standard: a higher standard for developed nations and a lower one for developing nations. A treatment regimen known as the 076 protocol had already been proven effective in preventing perinatal HIV transmission in trials that had been conducted in developed nations, and it had become the standard of care there. The 076 protocol involves administration of anti-retroviral medications to the mother during pregnancy and labor, and after birth to the mother and child. Critics claimed that the developing world trials should have compared different dosages of HIV medications instead of including a placebo control group, since the 076 protocol had already been proven effective at preventing perinatal HIV transmission, and it would be unethical for physician/researchers to deny patients/subjects effective treatment, given their duty to benefit their patients.

Defenders of the disputed trials argued that the social and economic conditions of host countries were morally relevant considerations that justified inclusion of placebo control groups (Varmus and Satcher 1997; Resnik 1998). The host nations could not afford to treat patients with about $800 worth of medications used in the 076 protocol, nor did they have the kind of health care infrastructure necessary to implement it. The goal of the disputed trials was to determine whether a simpler form of treatment, using about $80 worth of medications, would be effective at preventing perinatal HIV transmission. To perform a rigorous and timely test of this hypothesis, according to the proponents, it was necessary to determine whether this simpler and cheaper form of treatment would be more effective than receiving no treatment (or a placebo), which was the de facto standard of care in host nations, since most patients did not have access to HIV medications. A study that compared different doses of HIV medications would be more difficult to analyze and interpret and would take longer to perform than one with placebo control group. Defenders of the perinatal HIV prevention trials argued that subjects in developing nations were not being treated according to a lower ethical standard, but according to one that was appropriate, given their circumstances and the need to develop an affordable therapy in a timely fashion (Varmus and Satcher 1997; Resnik 1998).

Disputes concerning unequal treatment have also occurred in pediatric research. Federal research regulations allow an institutional review board (IRB) to approve pediatric studies that offer no direct benefits (such as medical treatment) to the subjects if the IRB determines (a) that the research involves only a minimal risk to subjects (45 CFR 46.404) or (b) that the research involves a minor increase over minimal risk but is likely to yield important knowledge about the child’s disorder or condition (45 CFR 46.406) (Department of Health and Human Services 2009).1 The regulations define minimal risk as meaning that “the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests” (45 CFR 46.102i). There is an ongoing dispute about whether the risks of “daily life” should be interpreted in absolute terms (i.e. the risks typically encountered by normal, healthy children in their daily lives) or in relative terms (i.e. the risks typically encountered by children recruited in a particular study). Critics of the relative standard have argued that it is unjust because it would permit some pediatric populations to be exposed to higher risks than others, since they face higher daily life risks (Kopelman 2000). For example, children living in a dangerous inner city neighborhood might be permitted to be exposed to higher risks in research studies than children living in the suburbs. Thus, a relative standard would lead to unequal treatment of research subjects (Resnik 2005). Proponents of a relative standard of minimal risk in pediatric research have argued that it can be justified in some circumstances in order to conduct research, such as an HIV prevention trial in adolescents, which offers important benefits to the population (Snyder et al. 2011).

What does it mean to treat human research subjects unequally? Is unequal treatment of human subjects morally wrong? Why does unequal treatment occur and how can it be prevented? This article will investigate these and other issues related to unequal treatment of human research subjects and offer some recommendations for investigators and oversight committees and agencies.

Justice and equal treatment

There is a trivial sense in which research subjects are almost always treated unequally. For example, if one subject chooses to have blood drawn from her right arm and the other chooses the left, this would be unequal treatment because different arms are used. But this type of unequal treatment makes little difference morally. In this article I will focus on unequal treatment that makes a moral difference, such as unequal treatment that impacts rights or welfare.

To understand why unequal treatment of human research subjects might be considered morally wrong, it will be useful to make a few brief remarks about justice.2 Questions of justice arise when one reflects on the principles, norms, and procedures that ought to govern human social interactions and institutions (Rawls 1971). Justice encompasses five types of problems: distributive justice addresses the distribution of socioeconomic goods, such as wealth, income, education, and health care; retributive justice addresses the imposition of punishments on people convicted of crimes; compensatory justice addresses the compensation of people for injustices that have occurred to them; international justice addresses relationships among different nations; and procedural justice considers the fairness of rules and procedures pertaining to distributive, retributive, compensatory, and international justice.

There are many different theories of justice. Most of these include a philosophical defense of one or more principles that can be applied to situations involving matters of justice. Some of the most influential theories include: utilitarianism, which holds that principles of justice should promote the overall good of society (Mill 2003; Brandt 1992); egalitarianism, which holds that principles of justice should promote equality3 (Rawls 1971); and libertarianism, which holds that principles of justice should protect individual rights (Nozick 1975).

Rawls’ (1971) theory of justice is a highly influential view that many contemporary theorists take as a starting point for their discussions of justice. Rawls’ theory includes two principles: the equality principle, which states that all members of society should have equal basic liberties, such as freedom of speech, religion, and association; and the difference principle, which states that socioeconomic inequalities are acceptable only if they benefit the worst-off members of society and are compatible with equality of opportunity. Rawls defends these two principles by arguing that they would be accepted by rational agents in a hypothetical situation, known as the original position, in which the agents are behind a veil of ignorance that prevents them from knowing their place in society.

Most theories of justice also include the formal principle of justice, which holds that cases that are equal in relevant respects ought to be treated equally (Carr 1981). However, treating equals equally is an empty tautology that has no practical import unless one specifies what is meant by “relevant respects,” and material principles are needed to interpret this phrase. For example, if one accepts the material principle that compensation in employment should be based on one’s work performance, then the formal principle of justice implies that people who perform equally at the same type of job should receive equal pay. The formal principle of justice has its origins in the writings of Aristotle (2003), and has been discussed by many contemporary writers, such as Rawls (1971), Carr (1981), Dworkin (2000), and Gosepath (2007). It is referred to as a formal principle because it is a logical requirement that ethical decisions be consistent (Berlin 1955/1956; Carr 1981).

Consistency is important for two reasons. First, one might value consistency for its own sake. According to many philosophers, being consistent is part of what it means to be rational, and principles of justice should be based on rationality (Rawls 1971).4 Second, one might value consistency as a means of promoting material principles of justice. For example, compensating people who perform the same job differently undermines the principle of equal pay for equal work. Furthermore, inconsistency can erode public support for the system of justice, since people may strongly object to decisions that they view as arbitrary (Martinson et al. 2006).

Given this rough sketch of justice, there are at least two reasons for thinking that research subjects should be treated equally. First, the formal principle of justice requires equal treatment of human subjects that are equal in relevant respects. However, as noted above, while this consistency condition is important for promoting justice, it provides no practical guidance and needs to be supplemented by material principles.

A second reason for treating research subjects equally, therefore, is that equal treatment is required by material principles of justice. For example, Rawls’ equality principle requires equal protection of liberty, libertarianism requires equal protection of rights, and utilitarianism requires equal consideration of the welfare of all members of society. Since a defense of a full-blown theory of justice is beyond the scope of this paper, I will develop my analysis by appealing to two commonly accepted principles for ethical research with human subjects, which are embodied in research regulations (Department of Health and Human Services 2009) and guidelines (World Medical Association 2013), and discussed in scholarly articles (Emanuel et al. 2000) and books (Levine 1988). The two principles are: equal protection of rights and equal protection from harm or exploitation. Equal protection of rights implies rules for obtaining informed consent from participants or their representatives and protecting privacy and confidentiality. Equal protection from harm or exploitation implies rules for minimizing risks to subjects and selecting subjects equitably. For example, providing some important information concerning risks to some subjects in a clinical trial but not to all of them would be wrong because it would involve unequal protection of rights during the consent process. Exposing some subjects in the same study to more risks than others would be wrong because it would involve unequal protection from harm or exploitation. As we shall see below, these principles are rough generalizations that may gloss over underlying disagreements about fundamental principles. Nevertheless, they play an important role in the ethics and oversight of research involving human subjects (Emanuel et al. 2000).

The authors of the Belmont Report, an influential document concerning research ethics that provided a conceptual basis for a major revision of the US federal research regulations in 1981, gave an account of justice in research that includes a rationale for equal treatment of research subjects similar to the one discussed above. The authors describe three principles for research with human subjects: respect for persons (which implies rules concerning informed consent), beneficence (which implies rules concerning risk minimization), and justice. They recognize that justice requires that equals be treated equally, but they also acknowledge that this formal principle needs to be supplemented by material principles that can distinguish between cases (National Commission 1979).

Disputes about unequal treatment

While there is a solid basis in moral theory and research ethics guidelines and regulations for regarding unequal treatment of human subjects as wrong, controversies concerning unequal treatment of human subjects continue to arise (Rhodes 2010). In thinking about these disputes, it is useful to distinguish between allegedly unequal treatment and patently unequal treatment. Allegedly unequal treatment occurs when subjects are treated differently and there is a genuine dispute about the appropriateness of equal treatment. A dispute is genuine if there is disagreement among well-informed parties who offer reasonable arguments for their positions (Rawls 1993). A dispute may be genuine even when one side is in the minority, provided that the minority position is supported by reasonable arguments. The controversy concerning the perinatal HIV prevention trials in developing nations involved allegedly unequal treatment because the parties had a genuine dispute about appropriate treatment. Patently unequal treatment occurs when subjects are treated differently and there is not a genuine dispute concerning the appropriateness of equal treatment. For example, if there is widespread agreement that clinical trials should have data and safety monitoring boards (DSMBs) to protect subjects from harm, and one clinical trial has a DSMB but another does not, this would be patently unequal treatment.

The sources of disputes concerning unequal treatment are threefold. First, people often disagree about the facts. For example, if one researcher claims that subjects are being treated unequally because some are not capable of understanding the language in the consent form, but other researchers deny this allegation, then this would be a factual dispute. Second, people sometimes agree on the facts but disagree about the principles (or other rules) that apply to the situation. For example, someone who takes a libertarian approach to justice may not view a research study conducted in a developing nation as exploitative because the subjects have consented to the study and the community has been consulted, whereas someone who takes an egalitarian approach may view it as exploitative because the study does not provide the population with a fair share of the benefits of research (Ballantyne 2010). Third, people often agree about the facts and the principles or rules but disagree about how they should be interpreted in a particular situation. For example, if two researchers both agree that subjects should be informed about significant risks, but one researcher considers a severe allergic reaction not to be a significant risk because it has a very low probability (P = 0.0001) of occurring, while the other considers it to be a significant risk, then they would be disagreeing about the interpretation of “significant risks.”

To illustrate how disputes about unequal treatment can occur, let’s consider the controversy concerning the HIV prevention trial described at the outset of this article. Critics of this study said it embodied a double-standard, which is another way of saying that treatment was unequal, since two different standards would permit two different forms of treatment. What kind of treatment was allegedly unequal? The critics focused on the fact that the research included a placebo control group, which meant that some subjects in the study did not receive HIV medications. As long as informed consent was obtained in these studies—and it appears that it was—then denying some subjects HIV medications would not be unequal protection of rights. Also, denying medications would not amount to unequal protection from harm, because not offering someone treatment is not a form of harm, since harming involves making someone worse off (Feinberg 1987). However, denying research subjects HIV medications would be a failure to offer them important benefits, which would violate the physician’s obligation to provide effective treatment to his or her patients. Using placebo-control groups in clinical trials is ethical, according to many, only when there is no effective treatment for the disease, or there is an effective treatment but patients can forego treatment without suffering significant harm (World Medical Association 2013). Defenders of the trials claimed that the physician-researchers had no obligation to provide effective treatment, since the treatment was not routinely available in the host nations. Critics rejected this argument on the grounds that the physician-researchers would have been able to obtain access to HIV medications through their funding organizations, and that not offering these medications to some patient-subjects involved taking unfair advantage of the fact that the participants lacked access to medications. 
The researchers were exploiting the participants in order to develop medications that would benefit the community. Defenders of the trials replied that the participants were not being exploited because they consented to the research and they were protected from harm associated with the research. Moreover, the research would lead to the development of treatments that could benefit members of the population (Resnik 1998).

It appears, therefore, that disagreements about the concept of exploitation contributed to the controversy concerning the HIV perinatal prevention trial in developing nations. Proponents of the trial and their critics subscribed to two different accounts of exploitation. Proponents appear to have assumed a libertarian account of exploitation, which holds that consensual relationships or transactions in which no one is harmed are not exploitative, while critics seemed to have adopted an egalitarian approach, which holds that exploitation can occur in consensual relationships or transactions in which no harm occurs if there is an unfair sharing of benefits. The participants in the placebo groups did not receive a fair share of the benefits of the study, according to critics, because they did not receive effective treatment (Resnik 2003; Jansen and Wall 2013).

Let’s also consider the controversy over interpreting the phrase “minimal risk” in pediatric research. Critics of the relative standard of minimal risk (i.e. minimal risk = daily life risks of the study population) argued that this interpretation did not provide equal protection from risks because some children enrolled in minimal risk or minor increase over minimal risk studies could face higher risks than others, due to their circumstances. Proponents of the relative standard have acknowledged that it may involve unequal protection from risks, but they argued that this can be justified, given the importance of the research for children’s health and the degree of the risk, which they believed to be only moderate (Snyder et al. 2011). This dispute probably does not involve a factual disagreement, since both sides appear to agree that use of the relative standard may involve unequal exposure to risks. The disagreement probably also involves more fundamental issues concerning justice: critics of the relative standard favored equal protection from risks, while defenders of the relative standard held that equal protection from risks is not as important as promoting the good of the population.

Another controversial issue related to equal/unequal treatment is the exclusion of certain categories of vulnerable subjects from some types of research. Vulnerable subjects include children, mentally disabled adults, prisoners, and other people who have a compromised ability to provide consent or advocate for their own interests (Levine 1988; Bonham and Moreno 2011). The authors of the Belmont Report were especially concerned that vulnerable populations had been enrolled in studies because of their availability or manipulability and not because the research was relevant to their disease or condition (National Commission 1979). They argued that certain classes of vulnerable subjects should be included in research only under certain conditions (National Commission 1979). This finding was codified in the federal regulations as a requirement that the selection of research subjects be fair or equitable and that there be additional protections for vulnerable subjects (Department of Health and Human Services 2009, 45 CFR 46.111a3, 45 CFR 46.111b). The federal regulations also include additional protections for children, neonates, fetuses, and prisoners that limit the level of risks they may be exposed to in research that does not offer them medical benefits (Department of Health and Human Services 2009, 45 CFR 46, Subparts B, C, D). Other guidelines recommend additional protections for mentally disabled adults (National Bioethics Advisory Commission 1998).

Excluding children, pregnant women, mentally disabled adults, or prisoners from some types of research seems to violate the formal principle of justice, because it involves treating people differently. However, this policy can be justified on the assumption that some groups of potential research subjects need more protection, due to their vulnerability. While competent adults should be allowed to enroll in risky studies (such as Phase I drug trials) that offer participants no medical benefits, children should not be allowed to enroll in these studies because children cannot make a fully informed and voluntary choice to participate. Children and adults are different in an important respect, i.e. the ability to provide consent, so they can be treated differently.

Disputes often arise concerning exclusion of certain categories of vulnerable subjects from research. For example, an ongoing issue is whether to include pregnant women in research that poses risks to the fetus, such as drug studies. Those in favor of inclusion argue that it is important to understand how drugs affect pregnancy, since women often take medications during pregnancy, and physicians need accurate and reliable information concerning safety, efficacy, and dosing. Opponents argue that pregnant women should be excluded from drug trials in most cases to protect the fetus from potential harms of exposure to drugs in utero. If the risks of prenatal exposure to a drug are not known, it is better to avoid exposure rather than risk serious birth defects or adverse effects on development (Grady and Denny 2011).

Another perennial issue is whether to include children in risky research that offers them no medical benefits. Consider, for example, a study of diabetes in children that includes healthy children without diabetes as a control group. The healthy controls may be asked to undergo some invasive and potentially risky procedures, such as an insulin resistance test. A standard method of measuring insulin resistance is the glucose clamp technique, which requires subjects to fast overnight and receive intravenous glucose infusion for several hours on the following day (Muniyappa et al. 2008). Common risks include bruising and infection at the infusion site. In rare cases, subjects may experience elevated or depressed blood sugar levels or an allergic reaction to glucose. Elevated blood sugar levels in this study can cause increased urination and thirst, confusion and headaches, while depressed blood sugar levels may lead to seizures, coma, and in rare cases brain damage or death. Some have argued that healthy subjects can be included in research like this diabetes study because it offers important benefits to children, while others have argued that healthy subjects should be excluded in situations like this one because they should be protected from excessive risks (Iltis 2007).

Disputes about excluding certain classes of potential participants from research also probably involve underlying disagreements about principles of justice (Mastroianni and Kahn 2001). Those who favor including subjects in these disputed studies can be viewed as taking a utilitarian approach to justice because they are concerned about promoting the health of the overall population. For example, the population of pregnant women can benefit from including pregnant women in research, and the population of children can benefit from including children in research. Those who oppose including subjects in these disputed studies take an approach to justice, exemplified by the Belmont Report, which emphasizes protecting vulnerable people from harm or exploitation (Goodin 1985; Mastroianni and Kahn 2001). Since these two approaches to justice appear to be at odds, disputes concerning research with vulnerable subjects may continue to occur.

The role of regulations and guidelines

While underlying disagreements concerning principles of justice or their application to particular cases may make it difficult to avoid allegedly unequal treatment of research subjects, regulations and guidelines can help to minimize patently unequal treatment. The federal research regulations5 help to minimize patently unequal treatment of research subjects by providing investigators, IRBs, institutions, and sponsors with a system of rules for the ethical conduct of research. The rules include criteria for approving studies, requirements for obtaining and documenting informed consent, and procedures for reviewing, approving, and overseeing research (Department of Health and Human Services 2009). Patently unequal treatment of human subjects still occurs, however, because the regulations may be interpreted differently by IRB members and investigators, due to their diverse expertise, backgrounds, experience, and values. The Office for Human Research Protections (OHRP) and the Food and Drug Administration issue interpretive guidance on certain aspects of the regulations (Office for Human Research Protections 2013a; Food and Drug Administration 2012), but this guidance does not eliminate divergent interpretations of the rules.

For example, a survey of US IRB chairpersons conducted by Shah et al. (2004) found that they often have different interpretations of minimal risk in pediatric research. 53 % of respondents said that an electromyogram was minimal risk but 41 % said it was a minor increase over minimal risk. 23 % of respondents classified allergy skin testing as minimal risk, while 43 % said it was a minor increase over minimal risk, and 27 % said it was more than a minor increase over minimal risk. The study suggests that unequal treatment of children involved in research occurs in the US as a result of different interpretations of minimal risk, since some IRBs might approve a risky study that others do not approve (Shah et al. 2004).

Additional evidence for patently unequal treatment of human subjects comes from studies that document inconsistent IRB review of multisite research (Silberman and Kahn 2011). Green et al. (2006) examined how 43 different IRBs evaluated the same protocol. 31 gave the protocol full board review, ten gave it expedited review, one determined that it was exempt from IRB review, and one decided that the study was too risky.6 McWilliams et al. (2003) reported the results of a genetic epidemiology research protocol submitted to 31 IRBs. 15 required the study to have more than one type of consent document and ten did not require a child’s assent. Mansbach et al. (2007) looked at how 34 IRBs evaluated a pediatric research protocol. 13 approved it with no changes, 18 approved it with minor revisions, and three deferred the approval, pending substantive revisions. Stark et al. (2010) examined how 18 IRBs reviewed a study of vitamin A supplementation in low birth-weight infants. Their study found that there was considerable variability in IRB review, due to difficulties with assessing the appropriateness of the study design (Stark et al. 2010). Inconsistent IRB review of multisite research may lead to patently unequal treatment of human subjects when differences in how IRBs assess risks and benefits or regulate the consent process affect the rights or welfare of subjects.

Other studies have demonstrated significant variation in IRB policies and practices. For example, Resnik et al. (2012) found significant variation in how IRBs define “minor changes” to previously approved research protocols. The federal regulations allow IRBs to approve minor changes to previously approved research through an expedited review procedure instead of full board review, but do not define “minor changes” (45 CFR 46.110, Department of Health and Human Services 2009). As a result, some institutions may require full board review of amendments that others review on an expedited basis. Since the full IRB may detect some ethical and scientific problems with research that are not caught by expedited review, variability in the definition of “minor changes” may lead to patently unequal protections of human subjects (Resnik et al. 2012).

Grady et al. (2005) documented widespread variation in payment for participation in research. Dollar amounts ranged from $5 to $2000, depending on the amount of time involved and the risks and discomforts of research. Different sites in multisite studies offered differing amounts of payment, and studies at the same institution offered different amounts for studies involving similar procedures. Payment for research participation can lead to patently unequal treatment of human subjects if the amount of money offered affects how participants weigh the benefits and risks of participation (Grady 2005). The federal research regulations do not cover the topic of payment to research participants (Department of Health and Human Services 2009).

Glickman et al. (2011) found significant variation in institutional policies for including subjects with limited English proficiency in clinical research: 84 % of institutions required that the main consent document be translated while 16 % did not; 70 % specified when a shorter version of the consent form may be used while 30 % did not; and 32 % stated fluency requirements for the person obtaining consent while 68 % did not. Variations in policies for including subjects with limited English proficiency in research can result in unequal treatment of human participants when they lead to differences in the consent process.

Resnik et al. (2014) conducted a survey that demonstrated significant variation in institutional compensation for injury policies. They found that 51.2 % of 169 responding institutions offered no compensation; 8.1 % offered compensation at the discretion of the institution or sponsor; 36.9 % offered compensation if certain conditions were met (such as an agreement with the sponsor to provide compensation); and 3.8 % offered compensation without discretion or conditions. Their study suggests that participants in the same multisite study could receive compensation or no compensation, depending on the site’s policy. The federal research regulations do not require institutions, sponsors, or investigators to compensate subjects for research-related injuries, although they do require subjects in more than minimal risk studies to be informed about the compensation for injury that is available, if any (45 CFR 46.116(a)(6), Department of Health and Human Services 2009). Variation in compensation for research-related injury policies can also lead to patently unequal treatment of human subjects, because compensation affects the risks and benefits that subjects incur as a result of their participation (Pike 2014). Subjects who receive no compensation for their injuries may have to pay for their own medical care if their insurer does not cover research-related injuries, and they may suffer additional harms if they cannot pay for their medical treatment (Pike 2014; Resnik et al. 2014).

Although there have been no systematic studies of international variation in the review and oversight of research involving human subjects, anecdotal evidence suggests that it occurs. For example, numerous authors have written about variations in consent practices in different countries (Marshall 2006). According to the Western model, consent should be obtained from the individual who is participating in research or from that individual’s legal representative if he or she is unable to consent. Consent should be documented using a form that describes various aspects of the research. The form should be signed by the participant or the participant’s representative (Levine 1988, 1991; Department of Health and Human Services 2009). In some African and Asian countries, the usual practice is to involve tribal leaders, the family, or the community in the consent process (Marshall 2006; Krogstad et al. 2010). In some Islamic societies, the spouse or other family members may consent for a woman (Marshall 2006; Afifi 2007; Krogstad et al. 2010). In some countries there is usually no documentation of the consent process because most people are illiterate or the society follows an oral tradition (Krogstad et al. 2010). These cultural aspects of consent raise the issue of whether the Western model should always be followed (Levine 1991). One might argue that there should be some local variation in consent that takes into account cultural differences (Krogstad et al. 2010). If these cultural differences are viewed as ethically relevant, they would justify differential treatment of human research subjects, which would not be considered patently unequal treatment of human research subjects. These differences might be viewed as allegedly unequal treatment if there is a serious dispute concerning the general applicability of the Western model of consent. 
International guidelines may help to resolve such disagreements (Council for International Organizations of Medical Sciences 2002; World Medical Association 2013).

Conclusion

Unequal treatment of human research subjects is a significant ethical concern, because justice in research involving human participants requires equal protection of rights and equal protection from harm or exploitation. Allegedly unequal treatment occurs when subjects are treated differently and there is a genuine dispute concerning the appropriateness of equal treatment. Allegedly unequal treatment is likely to continue to occur in research with human subjects due to disagreements about fundamental questions of justice. The best way to deal with allegedly unequal treatment is to promote honest and open discussions about the questions under dispute. Opposing parties may be able to resolve their disagreements as they learn more about the facts and issues and consider different points of view. Patently unequal treatment occurs when subjects are treated differently and there is not a genuine dispute about the appropriateness of equal treatment.

Research regulations can help to minimize patently unequal treatment by providing rules for investigators, ethical review boards, institutions, and sponsors to follow. However, patently unequal treatment may still occur because the regulations are subject to interpretation. Federal agencies have provided guidance on how to interpret the regulations, which can help promote consistent review and oversight of human subjects research. Additional direction may be needed on topics that are not adequately covered by current guidance or regulations, such as research risks, payments, compensation for injury, and changes to approved protocols. International ethics guidelines can help promote equal treatment of human research subjects around the globe.

Some commentators have argued that one way of minimizing patently unequal treatment of research subjects is to centralize IRB review (Gold and Dewa 2005; Silberman and Kahn 2011).7 One proposal is to have a single IRB take responsibility for reviewing multisite research. For example, if a clinical trial is conducted at 25 sites, one institution could handle the IRB review and the other institutions would rely on that institution’s review (Silberman and Kahn 2011). While centralizing the IRB review process can play a crucial role in promoting equal treatment of human research subjects, it is still important to obtain input from institutions concerning the local context of research (e.g. social, cultural, or economic considerations), since this may impact recruitment, consent, and other aspects of a study. Institutions could provide information concerning the local context to a central IRB, which the central IRB could take into account in its decision making (Resnik 2012). Taking local context into account is especially important in international human studies, since cultural, social, or other factors unique to the population may impact the conduct of research.

While it is important to promote equal protection of human research subjects, it is unrealistic and unwise to expect treatment to be uniform. Because investigators and IRB members have different views of justice in research and different understandings of regulations and guidelines, some variation in the review and oversight of human subjects research is inevitable. Taking local context (e.g. cultural, economic, or social factors) into account in research review may also lead to variations in treatment of human subjects. While minor variations in the treatment of research subjects should be tolerated and even welcomed, major ones (i.e. those that significantly impact human rights or welfare) should be avoided or minimized.

Footnotes

  1. This article focuses on the Department of Health and Human Services’ regulations, otherwise known as the Common Rule because they have been adopted by 17 federal agencies. Although the Food and Drug Administration has not adopted the Common Rule, it has regulations that are very similar to the Common Rule (Food and Drug Administration 2013a, b).

  2. There is not sufficient space in this article to describe theories of justice in detail. For review, see Barnes 1996; Rhodes et al. 2002; Rhodes 2005; Sandel 2007; Sen 2011.

  3. Egalitarians take different perspectives on equality, ranging from equality of socioeconomic goods (Nielsen 1979), to equality of opportunity (Rawls 1971), to equality of basic capabilities (Sen 2011).

  4. Of course, some may not think that consistency is inherently valuable. As Ralph Waldo Emerson (1841) once said: “A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines.”

  5. Under the Common Rule, certain kinds of studies, such as some types of research involving surveys or the analysis of existing de-identified samples or data, are deemed to be exempt from IRB review (45 CFR 46.101). The Common Rule allows IRBs to forego full board review and use an expedited procedure to review new studies deemed to be minimal risk (45 CFR 46.110, Department of Health and Human Services 2009).

  6. This article focuses on US federal regulations. Other countries have similar rules (Office of Human Research Protections 2013b).

  7. Though I mention efforts in the US toward centralization, European nations have taken similar steps (Klitzman 2011).

Notes

Acknowledgments

I would like to thank Bruce Androphy for helpful comments. This research was supported by the National Institute of Environmental Health Sciences (NIEHS), National Institutes of Health (NIH). It does not represent the views of the NIEHS or NIH.

References

  1. Afifi, R.Y. 2007. Biomedical research ethics: An Islamic view part II. International Journal of Surgery 5(6): 381–383.
  2. Angell, M. 1997. The ethics of clinical research in the third world. New England Journal of Medicine 337(12): 847–849.
  3. Aristotle. 2003 [350 BCE]. Nicomachean ethics, ed. H. Tredennick, trans. J.A. Thomson. New York: Penguin Books.
  4. Ballantyne, A.J. 2010. How to do research fairly in an unjust world. American Journal of Bioethics 10(6): 26–35.
  5. Barnes, B. 1996. Justice as impartiality. New York: Oxford University Press.
  6. Berlin, I. 1955/1956. Equality. Proceedings of the Aristotelian Society 56: 301–326.
  7. Bonham, V., and J. Moreno. 2011. Research with captive populations: Prisoners, students, and soldiers. In The Oxford textbook of clinical research ethics, ed. E.J. Emanuel, C. Grady, R.A. Crouch, R.K. Lie, F.G. Miller, and D. Wendler, 461–474. New York: Oxford University Press.
  8. Brandt, R.B. 1992. Morality, utilitarianism, and rights. Cambridge: Cambridge University Press.
  9. Carr, C.L. 1981. The concept of formal justice. Philosophical Studies 39(3): 211–226.
  10. Council for International Organizations of Medical Sciences. 2002. International ethical guidelines for biomedical research involving human subjects. http://www.cioms.ch/publications/layout_guide2002.pdf. Accessed 17 Dec 2013.
  11. Department of Health and Human Services. 2009. Protection of human subjects. 45 CFR 46. http://www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.html. Accessed 15 Dec 2013.
  12. Dworkin, R. 2000. Sovereign virtue: The theory and practice of equality. Cambridge, MA: Harvard University Press.
  13. Emanuel, E.J., D. Wendler, and C. Grady. 2000. What makes clinical research ethical? Journal of the American Medical Association 283(20): 2701–2711.
  14. Emerson, R.W. 1841. Self-reliance. http://www.emersoncentral.com/selfreliance.htm. Accessed 15 Apr 2014.
  15. Feinberg, J. 1987. Harm to others. New York: Oxford University Press.
  16. Food and Drug Administration. 2012. Information sheet guidance for Institutional Review Boards (IRBs), clinical investigators, and sponsors. http://www.fda.gov/ScienceResearch/SpecialTopics/RunningClinicalTrials/GuidancesInformationSheetsandNotices/ucm113709.htm. Accessed 15 Dec 2013.
  17. Food and Drug Administration. 2013a. Institutional review boards. 21 CFR 56. http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?CFRPart=56. Accessed 15 Dec 2013.
  18. Food and Drug Administration. 2013b. Protection of human subjects. 21 CFR 50. http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?CFRPart=50. Accessed 15 Dec 2013.
  19. Glickman, S.W., N. Ndubuizu, K.P. Weinfurt, C.D. Hamilton, L.T. Glickman, K.A. Schulman, and C.B. Cairns. 2011. Perspective: The case for research justice: Inclusion of patients with limited English proficiency in clinical research. Academic Medicine 86(3): 389–393.
  20. Gold, J.L., and C.S. Dewa. 2005. Institutional review boards and multisite studies in health services research: Is there a better way? Health Services Research 40: 291–307.
  21. Goodin, R.E. 1985. Protecting the vulnerable: A reanalysis of our social responsibilities. Chicago: University of Chicago Press.
  22. Gosepath, S. 2007. Equality. Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/entries/equality/. Accessed 6 Dec 2013.
  23. Grady, C. 2005. Payment of clinical research subjects. Journal of Clinical Investigation 115(7): 1681–1687.
  24. Grady, C., and C. Denny. 2011. Research involving women. In The Oxford textbook of clinical research ethics, ed. E.J. Emanuel, C. Grady, R.A. Crouch, R.K. Lie, F.G. Miller, and D. Wendler, 407–422. New York: Oxford University Press.
  25. Grady, C., N. Dickert, T. Jawetz, G. Gensler, and E.J. Emanuel. 2005. An analysis of U.S. practices of paying research participants. Contemporary Clinical Trials 26(3): 365–375.
  26. Green, L.A., J.C. Lowery, C.P. Kowalski, and L. Wyszewianski. 2006. Impact of institutional review board practice variation on observational health services research. Health Services Research 41: 214–230.
  27. Iltis, A. 2007. Pediatric research posing a minor increase over minimal risk and no prospect of direct benefit: Challenging 45 CFR 46.406. Accountability in Research 14(1): 19–34.
  28. Jansen, L.A., and S. Wall. 2013. Rethinking exploitation: A process-centered account. Kennedy Institute of Ethics Journal 23(4): 381–410.
  29. Klitzman, R. 2011. How local IRBs view central IRBs in the US. BMC Medical Ethics 12: 13.
  30. Kopelman, L.M. 2000. Moral problems in assessing research risk. IRB 22(5): 7–10.
  31. Krogstad, D.J., S. Diop, A. Diallo, F. Mzayek, J. Keating, O.A. Koita, and Y.T. Touré. 2010. Informed consent in international research: The rationale for different approaches. American Journal of Tropical Medicine and Hygiene 83(4): 743–747.
  32. Levine, R.J. 1988. Ethics and regulation of clinical research, 2nd ed. New Haven: Yale University Press.
  33. Levine, R.J. 1991. Informed consent: Some challenges to the universal validity of the Western model. Law, Medicine and Health Care 19: 207–213.
  34. Lurie, P., and S.M. Wolfe. 1997. Unethical trials of interventions to reduce perinatal transmission of the human immunodeficiency virus in developing countries. New England Journal of Medicine 337(12): 853–856.
  35. Mansbach, J., U. Acholonu, S. Clark, and C.A. Camargo Jr. 2007. Variation in institutional review board responses to a standard, observational, pediatric research protocol. Academic Emergency Medicine 14: 377–380.
  36. Marshall, P.A. 2006. Informed consent in international health research. Journal of Empirical Research on Human Research Ethics 1(1): 25–42.
  37. Martinson, B.C., M.S. Anderson, A.L. Crain, and R. de Vries. 2006. Scientists’ perceptions of organizational justice and self-reported misbehaviors. Journal of Empirical Research on Human Research Ethics 1(1): 51–66.
  38. Mastroianni, A., and J. Kahn. 2001. Swinging on the pendulum: Shifting views of justice in human subjects research. Hastings Center Report 31(3): 21–28.
  39. McWilliams, R., J. Hoover-Fong, A. Hamosh, S. Beck, T. Beaty, and G. Cutting. 2003. Problematic variation in local institutional review of a multicenter genetic epidemiology study. Journal of the American Medical Association 290: 360–366.
  40. Mill, J.S. 2003 [1859, 1863]. Utilitarianism and On liberty. New York: Wiley-Blackwell.
  41. Muniyappa, R., S. Lee, H. Chen, and M.J. Quon. 2008. Current approaches for assessing insulin sensitivity and resistance in vivo: Advantages, limitations, and appropriate usage. American Journal of Physiology, Endocrinology, and Metabolism 294(1): E15–E26.
  42. National Bioethics Advisory Commission. 1998. Research involving persons with mental disorders that may affect decisionmaking capacity. https://bioethicsarchive.georgetown.edu/nbac/capacity/TOC.htm. Accessed 19 Apr 2014.
  43. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. 1979. The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. Washington, DC: Department of Health, Education, and Welfare. http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html. Accessed 6 Dec 2013.
  44. Nielsen, K. 1979. Radical egalitarian justice: Justice as equality. Social Theory and Practice 5(2): 209–226.
  45. Nozick, R. 1975. Anarchy, state, and utopia. New York: Basic Books.
  46. Office of Human Research Protections. 2013a. Policy and guidance. http://www.hhs.gov/ohrp/policy/index.html. Accessed 14 Dec 2013.
  47. Office of Human Research Protections. 2013b. 2014 edition of the international compilation of human research standards. http://www.hhs.gov/ohrp/international/intlcompilation/2014intlcomp.doc.doc. Accessed 15 Dec 2013.
  48. Pike, E.R. 2014. In need of remedy: US policy for compensating injured research participants. Journal of Medical Ethics 40(3): 182–185.
  49. Rawls, J. 1971. A theory of justice. Cambridge, MA: Harvard University Press.
  50. Rawls, J. 1993. Political liberalism. New York: Columbia University Press.
  51. Resnik, D.B. 1998. The ethics of HIV research in developing nations. Bioethics 12(4): 285–306.
  52. Resnik, D.B. 2003. Exploitation in biomedical research. Theoretical Medicine and Bioethics 24(3): 233–259.
  53. Resnik, D.B. 2005. Eliminating the daily life risks standard of minimal risk. Journal of Medical Ethics 31(1): 35–38.
  54. Resnik, D.B. 2012. Centralized institutional review boards: Assessing the arguments and evidence. Journal of Clinical Research Best Practices 8(11): 1–13.
  55. Resnik, D.B., G. Babson, and G.E. Dinse. 2012. Minor changes to previously approved research: A study of IRB policies. IRB 34(4): 9–14.
  56. Resnik, D.B., E. Parasidis, K. Carroll, J.M. Evans, E.R. Pike, and G.E. Kissling. 2014. Research-related injury compensation policies of U.S. research institutions. IRB 36(1): 12–20.
  57. Rhodes, R. 2005. Justice in medicine and public health. Cambridge Quarterly of Healthcare Ethics 14(1): 13–26.
  58. Rhodes, R. 2010. Rethinking research ethics. American Journal of Bioethics 10(10): 19–36.
  59. Rhodes, R., M.P. Battin, and A. Silvers. 2002. Medicine and social justice. New York: Oxford University Press.
  60. Sandel, M.J. (ed.). 2007. Justice: A reader. New York: Oxford University Press.
  61. Sen, A. 2011. The idea of justice. Cambridge, MA: Harvard University Press.
  62. Shah, S., A. Whittle, B. Wilfond, G. Gensler, and D. Wendler. 2004. How do institutional review boards apply the federal risk and benefit standards for pediatric research? Journal of the American Medical Association 291(4): 476–482.
  63. Silberman, G., and K.L. Kahn. 2011. Burdens on research imposed by institutional review boards: The state of the evidence and its implications for regulatory reform. Milbank Quarterly 89: 599–627.
  64. Snyder, J., C.L. Miller, and G. Gray. 2011. Relative versus absolute standards for everyday risk in adolescent HIV prevention trials: Expanding the debate. American Journal of Bioethics 11(6): 5–13.
  65. Stark, A.R., J.E. Tyson, and P.L. Hibberd. 2010. Variation among institutional review boards in evaluating the design of a multicenter randomized trial. Journal of Perinatology 30(3): 163–169.
  66. Varmus, H., and D. Satcher. 1997. Ethical complexities of conducting research in developing countries. New England Journal of Medicine 337(12): 1000–1005.
  67. World Medical Association. 2013. Declaration of Helsinki: Ethical principles for medical research involving human subjects, 2013 revision. http://www.wma.net/en/30publications/10policies/b3/. Accessed 17 Dec 2013.

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  1. National Institute of Environmental Health Sciences, National Institutes of Health, Research Triangle Park, USA
