Journal of Consumer Policy, Volume 32, Issue 4, pp 355–379

Nanoethics: Old Wine, New Bottles?

Original Paper

Abstract

This paper reviews the question of whether “nanoethics” should be treated as a special essay in ethics, quite different to bioethics, cyberethics, or neuroethics. Whilst some believe that a fundamental rethinking of our ethics is needed, others conclude that ethics as applied to nanoproducts or to nanomedicine will prove to be largely a case of business as usual. The paper is in four principal parts. In the first part, the basic pattern of ethical argument is set out, a pattern that holds no matter which of the emerging technologies is under debate. In the second part, a sketch is offered of the way in which precautionary reasoning plays relative to three key ethical constituencies (utilitarian, rights-led, and dignitarian), including some reflections on how these constituencies would view the so-called precautionary principle (as a guide to regulators). In the third part, the essential features of a particular kind of ethical community (a “community of rights”) are outlined, such a community being put forward as the appropriate setting for debating matters of nanorisk and regulation. In such a community, the protection of rights is focal, ethics and regulation are viewed as deeply connected discourses, and the aspiration is to develop regimes of control and compensation that are fully integrated and coherent. Finally, there is a discussion of the way in which this kind of community, with its rights-led approach to precaution, would address questions concerning liability for nanoproducts and nanomedical services in a context of profound uncertainty.

Keywords

Nanoethics · Nanoproducts · Nanomedicine · Precaution · Product liability

Introduction

If some of the several hundred nanoproducts that are already in circulation1 or that in future are put into circulation should prove to be dangerous to consumers—if nanoparticles in cosmetics or nanofibres in clothes and sports equipment should prove to be a new kind of asbestos—then those consumers who are injured would not make their compensatory claims in either a regulatory or an ethical void. Similarly, if medical procedures that make use of nanomaterials or nanodevices should prove injurious to participants in research trials or to patients, then the injured parties would not make their compensatory claims in a normative void; once again, such claims would engage existing regulatory and ethical standards.

So far as current regulatory standards are concerned, we should distinguish between ex ante and ex post provision. Ex ante, regulators seek to ensure that consumer products and medical procedures are reasonably safe; ex post, they seek to ensure that consumers and patients who are injured by dangerous products or procedures are fairly compensated; and, in an ideal world, they try to bring together ex ante and ex post provision in a coherent scheme of protective and corrective regulation. It is quite likely that developments in nanotechnology will expose a number of weaknesses in existing provision, as a result of which some fine-tuning and clarification of the regulatory regime will be necessary.2 Quite how extensive this remedial work will need to be remains to be seen.

Turning to our stock of ethical standards, there is no shortage of principles and values that are advocated as representing the benchmark for legitimate action. There will be many views about the degree of precaution that should be exercised in relation to nanotechnologies, about whether it is right to put nanoproducts into circulation or to conduct research into nanomedical devices, and the like; as, indeed, there will be many views about whether it is right that consumers, research participants, or patients who are harmed should be compensated. Although we have an abundance of ethical resources, it is a moot point whether we should view “nanoethics” as a special essay in ethics quite different to bioethics, cyberethics, or neuroethics. Whilst some believe that we need to fundamentally rethink our ethics, others conclude that ethics as applied to nanoproducts or to nanomedicine will prove to be largely a case of business as usual.

Implicit in these remarks is the further question of how we should understand the connection between ethical debates concerning nanotechnology and the equivalent regulatory debates. For some, these are to be treated as parallel but disconnected debates: Whilst ethicists engage in academic reflection about the right way to deal with nanotechnologies, regulators actually set the ruling standards. Against this view, however, I will proceed on the basis that regulation is to be understood as an exercise in applied ethics. If regulators are to set legitimate standards, they need to engage in ethical reflection. Accordingly, although this paper will focus on the question of whether we need to reinvent ethics to deal with nanotechnologies, on my approach, this is an issue that concerns regulators as much as it concerns ethicists.

Turning to the focal question—the question of whether we need to reinvent ethics to deal with nanotechnologies—my position in this paper follows the line that I took in Rights, Regulation and the Technological Revolution.3 There, I suggested that the basic pattern of ethical deliberation and debate is the same whether we are assessing biotechnology, information technology, neurotechnology, or nanotechnology, or indeed convergent technologies. According to this view, “nanoethics” is not a new kind of ethics; it is simply the application of familiar ethical principles to a new set of questions that are presented by the development and application of nanotechnologies.4 Or, at any rate, I take it that this is the case so long as the overall context for ethical reflection—a context that presupposes, on the one hand, subjects who self-identify as reflective and responsible agents and, on the other hand, technological objects that are distinguishable from their judging subjects—is not fundamentally destabilized.5

Yet, is this not a view that is somewhat complacent and too sanguine? Actually, it is not at all complacent or sanguine; for even if nanoethics is nothing new, this does not mean that we have an ethical consensus. Indeed, the position is quite to the contrary. The basic pattern of ethical debate to which I have referred is deeply conflictual, and, so long as this is the case, regulators have an extraordinarily difficult challenge in defending the legitimacy of their preferred regulatory positions.

That said, it might still be thought that my position is too complacent in the sense that nanotechnologies will present us with particular ethical dilemmas that we have not contemplated and that we will have difficulty in relating to. Hence, it might be thought that, rather as the US Supreme Court in Reno6 posed the question, “What is the Internet like?”, we will find that nanotechnologies will prompt us to ask, “What is this nanotechnology and its application like?” Arguing in this way, the philosopher James Moor has suggested that we will find ourselves in one policy vacuum after another, conceptually muddled, and unable to get our ethical bearings (Moor 2008). For example, using the illustration of Wi-Fi “wardriving” (connecting wirelessly to another person’s network), Moor says:

As we consider possible policies on wardriving, we begin to realize there is a lack of conceptual clarity about the issue. Wardriving might be regarded as trespassing. After all, apparently the wardriver is invading someone’s computer system that is in a private location. Conceptually, this would seem to be a case of trespass. But the wardriver may understand it differently. The electronic transmission is in a public street and the wardriver remains on the public street. He is not entering the dwelling where the computer system is located. Indeed, he may be nowhere nearby. In searching for a new policy, we discover we have a conceptual muddle. We find ourselves torn among different conceptualisations, each of which has some plausibility (Moor 2008, p. 33).

Whilst there undoubtedly will be nanoethical puzzles of this kind (where there are problems of analogy, characterisation, or fact and degree, and the like), I do not see these as issues that call for a wholly new ethics. Rather, if there are distinctive ethical problems awaiting us in the early-stage rollout of nanotechnologies, I suspect that they lie in the deep uncertainty (even the intrinsic unpredictability)7 that is associated with this technology. This is an uncertainty that will need to be coherently factored into our ethical deliberations,8 and, indeed, James Moor (with John Weckert) has already offered some helpful reflections on the nature of such deliberations.9 So, to return to our original hypotheticals, imagine that the defence advanced in a nanoproduct or a nanomedicine claim is that, in the present state of knowledge, it is simply not possible to make the product safer or the procedure less risky. If this is so, we might have some sympathy with the defendants and think that, as a matter of both law and ethics, our judgments of the parties’ rights and responsibilities should take due account of the uncertain context in which both claimants and defendants are operating.

The paper is in four principal parts. In the first part, I set out what I see as the basic pattern of ethical argument. This, I repeat, is not a pattern of ethical consensus. However, it is to my mind the pattern that holds whichever of the emerging technologies we are debating. In the second part, drawing on the analysis in the first part, I sketch the way in which precautionary reasoning plays relative to three key ethical constituencies (utilitarian, rights-led, and dignitarian) as well as reflecting on how these constituencies would view the so-called precautionary principle (as a guide to regulators). In the third part, I outline the kind of ethical community, the “community of rights” as I term it, in which I believe we should be debating such matters of nanorisk and regulation. In such a community, the protection of rights is focal, ethics and regulation are viewed as deeply connected discourses, and the aspiration is to develop regimes of control and compensation that are fully integrated and coherent. Finally, I consider how a community of rights, with a rights-led approach to precaution, might address questions concerning liability for nanoproducts and nanomedical services in a context of profound uncertainty.

Ethics and Nanoethics: The Basic Frame

Henk ten Have has rightly remarked that the “current revolution in science and technology has led to the concern that unbridled scientific progress is not always ethically acceptable” (ten Have 2006). Responding to this concern, the International Bioethics Committee of the United Nations Educational, Scientific and Cultural Organization (UNESCO) has been in the vanguard of attempts to forge a worldwide bioethical consensus, publishing three major instruments, namely the Universal Declaration on the Human Genome and Human Rights in 1997, the International Declaration on Human Genetic Data in 2003, and, most significantly (or, at any rate, certainly most ambitiously), the Universal Declaration on Bioethics and Human Rights in 2005.10 Although the latest of these Declarations is addressed specifically to “ethical issues related to medicine, life sciences and associated technologies,”11 it presents us with a starting point for thinking about the legitimacy not only of regulation directed at biotechnology but also of the regulation of neurotechnology and nanotechnology (at any rate, in its biomedical applications). Moreover, even if information technology does not seem like quite such an obvious candidate for a bespoke Declaration on “cyberethics,”12 there are clearly ethical concerns relating to IT (especially, human rights concerns),13 and, as technologies converge and combine, the Declaration might offer a useful reference point for debating newly arising ethical concerns.

Against this background, I take it that the basic matrix—the matrix that sets the mould for ethical debates—involves three essential forms, namely goal-orientated (consequentialist), rights-based, and duty-based forms. It follows that the form of an ethical argument will either prioritize some end-state goals or start with a declaration of rights or a declaration of duties.

Each form within the matrix is a mould or a shell, open to substantive articulation in many different ways: different goals, different rights, and different duties may be specified. Nevertheless, in principle, the basic pattern of ethical debate, whatever the particular technological focus—whether it be biotechnology (and bioethics), information and communications technology (and cyberethics), nanotechnology (and nanoethics), or neurotechnology (and neuroethics)—is governed by this matrix.

Although, in principle, the matrix sets the pattern, in practice, it does not follow that the matrix is always fully expressed in debates about the ethics of new technologies. Often, we find only a two-sided debate with utilitarian cost/benefit calculations being set against human rights considerations. In general, unless there are major safety concerns, whilst utilitarians will assert the “green light” ethics of proceeding (a case of promotion of the technology modulated by a degree of precaution),14 human rights theorists will take an “amber light” approach, insisting that the technological traffic pauses (to ensure rights clearance) before proceeding.

By contrast, in relation to debates concerning the ethics of modern biotechnology, we have a three-way articulation of the matrix, the key substantive positions being utilitarian, human rights, and dignitarian. Here, in this distinctive bioethical triangle, we find the dignitarian alliance taking issue with both utilitarians and human rights advocates. Whilst utilitarians and human rights advocates can sometimes find a common position with one another, it is much more difficult to reach an accommodation with the dignitarians. For, according to the dignitarian ethic, some technological applications are, quite simply, categorically and non-negotiably unacceptable. In this sense, of the three ethical perspectives, it is only the dignitarian view that is genuinely “red light.”

Somewhat confusingly, the idea of human dignity underlies both the human rights and the dignitarian view. However, once this double take is identified, it is easier to see which version of human dignity is being contended for or presupposed. Moreover, with the bioethical triangle as our reference map, we can track and locate the positions taken on particular issues such as the use of human embryos as research tools or the recognition of proprietary rights over removed body parts and tissues. But, of course, none of this makes it any the easier for regulators who are trying, against the backcloth of this contested plurality, to strike regulatory positions that meet the demands of all three constituencies. Nor, it seems, do regulators always find it easy to articulate the details of regulatory positions in a way that is entirely consistent with those sections of the plurality that they are privileging.15

Before moving on to the next part of the discussion, let me respond briefly to those who might object to my account of the frame of ethical reasoning on the ground that it is either non-exhaustive or incomplete. First, I should emphasize that the particular substantive articulations that are represented in the bioethical triangle are by no means exhaustive of all substantive ethical possibilities. In other words, the bioethical triangle should be viewed as a particular conjunction of ethical form and substance that reflects the way in which certain positions have come to dominate modern bioethical discourse and debate. I do not doubt that, in another time and possibly in relation to other technologies, other voices will be heard.

Secondly, some may accuse my matrix of being incomplete. Most obviously, some might object that virtue ethics, an important branch of ethics on anyone’s view, does not feature in the matrix. This is a good point. However, the matrix is one of act ethics rather than agent ethics. As I understand it, virtue ethics is focussed on the rightness of an agent’s dispositions and character (agent ethics) rather than on the rightness of an agent’s acts. In the absence of act-ethic omniscience, virtue ethics is hugely important in practice. However, it can be triangulated in the same manner as act ethics: The virtues may be directed at promoting certain goals, at respecting rights, or at complying with duties; and, in the case of a community of rights, virtue ethics seeks to educate agents into an attitude of trying to do the right thing relative to the rights that they and others have in that community.

Three Approaches to Precaution

One of the most significant corollaries of my analysis of the basic pattern of ethical reasoning is that notions such as “harm to others,” “informed consent,” “precaution,” and “proportionality,” which commonly figure in ethical arguments, are not neutral. Rather, we have to read each of these ideas through the lens of the particular substantive articulation of the matrix. Given that so much of our concern in this paper will be with the uncertain hazards and risks associated with nanotechnologies, we need to be clear about the way in which this corollary works out when applied to the idea of precaution. Accordingly, in this part of the paper, I first sketch the different ways in which the principal ethical constituencies view the idea of precaution, the essential point being that precaution plays differently depending upon where we are coming from in the bioethical triangle, and then I consider how these constituencies would view the precautionary principle as a candidate for the guidance of regulators.

Utilitarian, Human Rights, and Dignitarian Precautionary Approaches

Formally, a precautionary approach implies that, in relation to some act or practice, some measure of precaution (whether regulatory prohibition, or a moratorium, or piloting and monitoring, or insurance, or liability, and so on)16 should be taken lest there be a negative and undesired impact on some valued principle or state of affairs. Stated shortly, if one is a utilitarian, the valued states of affairs will be those in which utility is maximized; if one is a human rights theorist, the valued principles will be ones relating to the protection or promotion of human rights; and, if one is a dignitarian, it will be the compromising of human dignity that is the negative impact to be avoided.

To put the same point in slightly different terms, if we understand precaution as an aspect of risk management, each of the leading ethical constituencies will have its own account of the values that are not to be put at risk. We can say a few words about the distinctive approach to precaution that is characteristic of each of these ethics.

First, for utilitarians, precaution speaks to a concern about safety (that is, safety relative to any of the conditions that are material to the maximization of pleasure and the avoidance of pain). However, unless there is a rule-utilitarian justification for adopting a strict precautionary approach in an identifiable class of situations, the precautionary calculation should not be carried out in a one-sided way.17 For the utilitarian, each option must be reviewed. If it is proposed that we should give up on certain applications of nanotechnology, it is not enough to plead precaution or risk aversion or distress avoidance. The utilitarian will want to know what the costs of precaution are (quite literally, how much it is costing to buy a bit more safety and what benefits are being foregone), and the anticipated net result of the precautionary option will have to be superior to that of any alternative option.

If one were to apply precaution also to the benefit side of a utilitarian calculation, it might dampen our enthusiasm for those technologies that promise a great deal, not in the short term but in the longer term. For example, in the Harvard Onco-Mouse case,18 the utilitarian-minded examiners thought it perfectly sensible to weigh in the moral balance (that they took to be required by Article 53(a) of the European Patent Convention) the distress occasioned to a genetically engineered mouse (the mouse being a test animal for cancer research). However, they were persuaded that the potential benefits to humans, relieved of a devastating human disease, outweighed the clear and present distress to the mice. If the potential benefits of the research could be equated with actual benefits, then that might make the outcome more plausible; but one thing that we do know about research is that there are no guarantees that it will deliver its prospective benefit within the projected timeframe, indeed if ever. Accordingly, where we encounter hype and hope, as we frequently do in relation to modern technologies, a balanced precautionary approach within utilitarianism would have to discount for uncertainty on both sides of the risk/benefit calculation.
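To make this double-sided discounting explicit, the utilitarian comparison can be given a simple expected-value gloss (a stylized sketch only; the symbols are introduced here for illustration and claim no more precision than the argument itself):

\[
E[\text{proceed}] = p_{B}\,B - p_{H}\,H, \qquad E[\text{precaution}] = -C,
\]

where \(B\) is the prospective benefit (realized only with probability \(p_{B}\)), \(H\) is the feared harm (materializing with probability \(p_{H}\)), and \(C\) is the cost of the precautionary option (its direct costs plus the benefits foregone). On this sketch, precaution is the superior option only if \(-C > p_{B}B - p_{H}H\); and the point about hype and hope is simply that \(B\) must be weighted by \(p_{B}\) in just the way that \(H\) is weighted by \(p_{H}\).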

Secondly, for proponents of human rights, precaution speaks to a concern about possible infringements of human rights. Classically, for example, there is the question of how much precaution (qua due process) we should incorporate into the design of the criminal justice system lest we cause innocent persons to be arrested and detained, prosecuted, convicted, or punished. Similarly, where we can predict that there is, say, a one in three chance that a convicted offender will re-offend on release, might we justify a precautionary and (by intent) protective extension of the tariff prison term?19 In the same way that the utilitarians want to consider what price (qua loss of utility) is being paid for a precautionary option, human rights theorists will want to know whether precautionary protection of rights comes at an acceptable price relative to competing or conflicting rights.

Human rights ethicists, too, will be sensitive to the argument that precaution should be exercised where there is a doubt about the moral status of a life form (human or otherwise) that is not a paradigmatic rights bearer. For example, it might be argued that precaution should be exercised when dealing with human neonates, human fetuses, human embryos (or human/non-human chimeras),20 and primates because we might be wrong in assuming that they are not bearers of rights,21 or, again, it might be suggested that we need to be careful that treating human embryos (or chimeras) as a research tool does not change our attitudes so that we suffer a loss of respect for fellow rights holders. How far we might run with these particular arguments is moot, but there is certainly a precautionary form of argument available and waiting to be fully articulated within human rights thinking.

Finally, what do the dignitarians make of precaution? Dignitarians regard the compromising of human dignity as a self-standing reason for restraint—precaution does not enter into such an ethic in quite the same way. Crucially, dignitarianism (unless it is being operated disingenuously) is not interested in consequential calculation, whether of the likely costs and benefits (in the way that a utilitarian impact statement might be drafted) or with regard to the effect on rights.22 Dignitarianism, it cannot be emphasized too strongly, is a red light not an amber light ethic; its credo is that we should not proceed at all (where activities are judged categorically to compromise human dignity) rather than that we should proceed only with care. For dignitarians, the proposition that we should exercise precaution against the risk that biotechnology or nanotechnology might go wrong misses the point; the point is that if biotechnology or nanotechnology goes right and if in doing so human dignity is compromised, then that is all that we need to know.23

The Precautionary Principle

Famously, Principle 15 of the Rio Declaration provides: “Where there are threats of serious or irreversible damage, lack of full scientific certainty should not be used as a reason for postponing such [precautionary] measures.” Although this is probably the best-known version of the precautionary principle, there are now so many articulations of the principle that it has become an easy target for its critics.24

The standard critique of the precautionary principle highlights two kinds of weakness. Whilst the first weakness, a problem to which we have just adverted, is that the key variables are open to many different interpretations, the second is that it takes a one-sided approach to risk management. Elaborating this latter line of critique, Cass Sunstein has convincingly argued that, where the taking of precautionary measures—or, at any rate, the taking of precautionary measures that involve giving up some activity—itself involves risk (sacrifice), then this must be brought into what is otherwise a one-sided narrow screen calculation (Sunstein 2005).

With regard to the first of these difficulties, the problem is not so much the result of loose drafting as a matter, as we have seen, of different ethical constituencies giving the principle their own particular interpretation. Consider, for example, the focal notion of “serious damage.” For a utilitarian, this connotes serious disutility; for a human rights theorist, it connotes an infringement of one of the higher order rights (or damage to the conditions that sustain the interests protected by these rights); and, for a dignitarian, serious damage equates to the compromising of human dignity.

As for the second difficulty, we have seen already that, for both utilitarians and human rights theorists, a one-eyed calculation will not suffice: Unless we think that precaution must be exercised whatever the cost, we need to consider what we are giving up. Granted, for the dignitarians, this objection to the precautionary principle does not seem so compelling, but, then, we might say that the dignitarians do not really espouse the principle in what we take to be the usual context of risk assessment and risk management. Alternatively, we might view the dignitarians as having an approach that is closely analogous to the precautionary principle: For irreversible damage to the environment, read irreversible damage to human dignity; for catastrophic climate change, read catastrophic compromising of human dignity; and, in both cases, some cause for concern warrants precautionary measures being taken.

To return to the many formulations of the precautionary principle, the one thing that they have in common is the idea, as the Nuffield Council on Bioethics puts it, that regulators may “impose restrictions on otherwise legitimate commercial activities, if there is a risk, even if not yet a scientifically demonstrated risk, of environmental damage” (Nuffield Council on Bioethics 1999).25 The gist of the principle, in other words, is that precautionary interventions may be justified where there is some evidence of the supposed hazard or risk (quite where this evidential threshold lies is a moot point) even though we cannot be sure beyond all reasonable doubt (or perhaps, even, all possible doubt) that the supposed hazard or risk either does or does not exist. What would the different ethical constituencies make of this?

It seems to me that the ethical constituencies would accept that there is an element of good sense in the precautionary principle insofar as it suggests an inverse relationship between the seriousness of the damage and the evidential threshold. As we have said, each ethical constituency has its own take on the kind of damage that is relevant. Nevertheless, each constituency would accept that there is sense in setting the threshold for precaution at the lowest level when the feared damage is at the highest level (and, presumably, vice versa). This does not tell us where the lowest threshold lies, but the idea of a see-saw relationship between evidential threshold and supposed damage has some appeal.
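The see-saw idea admits of a simple formal statement (again, merely an illustrative sketch; the function and symbols are my own shorthand, not part of any canonical formulation of the principle). Let \(D\) be the magnitude of the feared damage, as each constituency understands damage, let \(e\) be the strength of the available evidence, and let \(t(D)\) be the evidential threshold for precaution:

\[
\text{precaution is triggered if } e \geq t(D), \qquad \text{with } t \text{ decreasing in } D.
\]

The graver the feared damage, the weaker the evidence needed to trigger a precautionary response; but, as just noted, nothing in this fixes where the lowest threshold actually lies.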

Unfortunately, this does not mean that, once we agree the thresholds, the precautionary principle can be implemented in a reasonably straightforward way, for we still need to apply the principle in a way that is sensitive to what we might be giving up by exercising precaution. We need to ask, in other words, whether precaution that is triggered would also be a proportionate response.26 Suppose, for example, that a pharmaceutical company develops a nanoproduct that might have life-threatening side effects. The see-saw calculation suggests that regulators should take a precautionary approach even though the evidence of hazard and risk is only weak. However, if the benefits of the product would be considerable, regulators will question whether their precautionary strategy is proportionate. Some might argue that the risk is so serious that market authorization must be delayed pending further trials of the product, but others might believe that this response is disproportionate and that the product should be authorized subject to a package of ex post precautionary measures.27
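If the foregoing is right, the evidential trigger is only the first stage of the analysis. Continuing the notation of the previous sketch (and still speaking illustratively rather than canonically), precaution is justified only if

\[
e \geq t(D) \ \ \text{(the trigger)} \qquad \text{and} \qquad \Delta_{\text{averted}} \geq \Delta_{\text{foregone}} \ \ \text{(proportionality)},
\]

where \(\Delta_{\text{averted}}\) is the expected damage averted by the precautionary measure and \(\Delta_{\text{foregone}}\) is the expected benefit given up by taking it. In the pharmaceutical example, the trigger is satisfied because the feared damage is life-threatening (so \(t(D)\) is low even on weak evidence); the disagreement between the regulators is located entirely in the proportionality condition.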

A Community of Rights

Faced with the bioethical triangle and the basic pattern of ethical reasoning, where should we anchor our reflections? For reasons that I will not rehearse here, I will root my reflections in what I call a community of rights, this being my benchmark for ethical and regulatory reflections.28

A community of rights is a particular kind of moral community. Hence, it must systematically embed a moral standpoint (in the formal sense), and, because it is a community of rights, the substantive moral approach embedded is rights-led (rather than utility-maximizing or duty-driven). I also conceive of the community as a reflective and interpretive society, not so much a finished product as an ongoing process, the members of which are acutely aware that they are not morally omniscient. These defining characteristics call for some short elaboration, after which I will sketch how such a community might approach questions of precaution and proportion.

The Essential Characteristics of a Community of Rights

First, we are dealing with a moral community. No doubt there could be considerable debate about the precise specification of the generic characteristics of the “moral,” formally speaking, whether what is at issue is a moral standpoint or a moral community. However, I take it that a community of rights, as a moral community, must hold its commitments sincerely and in good faith, that it must treat its standards as categorically binding and universalizable, and that there must be an integrity and coherence about its commitments as a whole.

Secondly, as a community that is committed to the protection and promotion of individual rights, its moral approach is rights-led. In this respect, it distinguishes itself from two other rival instantiations of moral community, these being utilitarian and duty-driven communities. Crucially, this means that the interests of individuals will not be subordinated to the greater good, nor, on the other side, will producers and researchers be constrained by the kind of dignitarian duty-driven concerns that have become so influential in modern bioethics.29

A community of rights, so specified, might take quite a wide range of forms. Let me also stipulate, then, that, in a community of rights, a will (or choice) theory of rights, rather than an interest theory of rights, is adopted,30 and that the paradigmatic bearer of rights is one who has the developed capacity for exercising whatever rights are held, including making choices about whether to give or to refuse consent in relation to the rights that are held. Even with these further stipulations, the general concept of a community of rights might be articulated in a variety of particular conceptions (including, it should be said, a human rights conception)—that is to say, we might find conceptions with various epistemological bases (some more foundationalist than others), with different views about the status of non-paradigmatic rights holders, and especially with different arrays of recognized rights.31

I have also said that I conceive of a community of rights as a society that views itself as a process rather than a finished product. By this, I mean that it is a community that constantly keeps under review the question of whether the current interpretation of its commitments is the best interpretation. Members are also aware of the limits of their own knowledge and understanding; they do not regard themselves as morally omniscient; what seems like the best interpretation today might look less convincing tomorrow.

Finally, let me also underline a point that I made in my introductory remarks. In a community of rights, the discourse of ethics and regulation is regarded as both contiguous and continuous. Debates about the ethics of rights flow straight into the regulatory consciousness, and regulatory reflection on rights flows back into ethical debate. It is not enough that regulation is effective and fit for purpose; the first priority is that regulators should have the right purposes (rights-respecting purposes) and that the regulatory standards that are set are legitimate relative to the community’s rights values. In a community of rights, regulation, like ethics, is an enterprise that is dedicated to doing the right thing, and the thing that is right is the protection, preservation, and promotion of the community’s commitment to rights.

Questions for a Community of Rights

Without attempting to be exhaustive, some of the more pressing and recurring questions to be addressed, debated, and (at least, provisionally) resolved within a community of rights are the following: First, there is a large cluster of questions concerning which rights are to be recognized and what the scope of particular rights is.

Secondly, there are questions arising from conflicts between rights as well as from competition between rights holders. Sometimes the conflict might be between one kind of right and another—for example, between the right to privacy and the right to freedom of expression. At other times, there are competing rights—that is, cases where two rights holders present with the same general right. If relative need is the criterion, this might facilitate an easy resolution where, say, a seriously ill person claims a right to the one available hospital bed in competition with a less ill person.

Thirdly, because consent is an extremely important dynamic in a community of rights, the community needs to debate the terms on which a supposed “consent” will be recognized as valid and effective. In particular, how does the community interpret the requirement that consent should reflect an unforced and informed choice, and how is consent to be signalled (will an opt-out scheme suffice, for instance), and so on?32

Fourthly, a community of rights must debate whether there are limits to the transformative effect of the reception of rights.33 In particular, this invites reflection on the relationship between one community of rights and another.34

Finally, there is the vexed question of who has rights. Do young children, fetuses, or embryos have rights? What about the mentally incompetent or the senile? And, then, what about non-human higher animals, smart robots, and, in some future world, hybrids and chimeras of various kinds? Each community of rights must debate such matters, responding to the inclusionary question (who has rights?) as well as determining its approach to those life forms that are to be excluded.

Even though the members of a community of rights have a shared moral outlook, there is still plenty to debate. It is in this spirit that such a community would approach the question not only of how to do the right thing vis-à-vis nanotechnologies but also of how nanotechnologies, their research and development, and their application should be regulated.

Precaution and Proportion

How would a community of rights structure its thinking with regard to questions of precaution and proportion? Broadly speaking, I suggest that it would operate along the following lines: First, it would see some issues as relatively simple, one-dimensional, questions of precaution. The form of such questions is as follows: (1) where it is plausibly believed that there is a possibility that x (which, for example, might be a person, a product, a process, or a practice) might put at risk either (a) some right (or some condition that is supportive of that right) or (b) the infrastructural conditions that are presupposed by a viable community of rights and (2) if prohibition (or preventive confinement) of x would be a zero deficit to a regime of rights, then should x be prohibited? To which the answer, of course, is that x should be prohibited (or preventively confined)—or, at any rate, it should be prohibited (or preventively confined) unless the regulatory costs of prohibition (or preventive confinement) would have negative effects for a rights regime.

Secondly, it would see some questions as bringing together judgments of precaution and proportion. These, as it were, “procautionary” puzzles are necessarily more complex than one-dimensional questions of precaution, but, within the band of complexity, some procautionary calculations are more complex than others. At the less complex end of the spectrum, the question runs in the following form: (1) where it is plausibly believed that there is a possibility that x (which, for example, might be a person, a product, a process, or a practice) might put at risk either (a) some right (or some condition that is supportive of that right) or (b) the infrastructural conditions that are presupposed by a viable community of rights and (2) if prohibition (or preventive confinement) of x would involve a certain cost to a regime of rights, then should x be prohibited (or preventively confined)? The dilemma here is that prohibition (or preventive confinement) is not costless (relative to the values of a community of rights); on the other hand, depending upon whether the possibility of risk is an actuality of risk, the prohibition (or preventive confinement) might or might not be necessary.

Thirdly, there are the most complex types of precautionary (or procautionary) question where there is uncertainty on both sides of the calculation. The form of the question here is as follows: (1) where it is plausibly believed that there is a possibility that x (which, for example, might be a person, a product, a process, or a practice) might put at risk either (a) some right (or some condition that is supportive of that right) or (b) the infrastructural conditions that are presupposed by a viable community of rights and (2) where it is plausibly believed that prohibition (or preventive confinement) of x might involve a cost to a regime of rights, then should x be prohibited (or preventively confined)? The dilemma here is that, on the one side, prohibition (or preventive confinement) might or might not be costless (relative to the values of a community of rights), whilst, on the other side, depending upon whether or not the possibility of risk is an actuality, the prohibition (or preventive confinement) might or might not be necessary.
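The three forms of question can be compressed into a single schema (a restatement of the text above, with \(r\) standing for the uncertain risk that x poses to rights or to the infrastructural conditions, and \(c\) for the cost to the rights regime of prohibiting or preventively confining x):

\[
\text{response to } x =
\begin{cases}
\text{prohibit} & c = 0 \ \text{(one-dimensional precaution)},\\
\text{weigh uncertain } r \text{ against certain } c & c > 0,\ c \ \text{known (simpler procaution)},\\
\text{weigh uncertain } r \text{ against uncertain } c & c \ \text{uncertain (complex procaution)},
\end{cases}
\]

with \(r\) uncertain throughout. The schema resolves nothing; it merely displays where the uncertainty bites in each case.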

This is by no means the end of the matter. In the schematic thinking sketched above, there are a number of potential sources of hazard and risk (persons, products, processes, and practices) relative to recognized rights and the infrastructure of rights. In each case, the uncertainty is about the behaviour, characteristics, or contribution of these sources. However, the community’s uncertainty might not be about such sources and their causal or contributory effects but about the best interpretation of its rights commitments. Developing a coherent jurisprudence around these different expressions of uncertainty is a major task for a community of rights.

Nanoethics and Uncertainty

The uncertainty associated with nanotechnologies might give rise to a variety of ethical puzzles. For the sake of illustration, let me suggest two such puzzles. One puzzle is whether, for precautionary reasons, nanoproducts should not be put into circulation (or, at any rate, whether there should be a moratorium until safety has been assured)35 or, if nanoproducts are permissibly put into circulation, whether liability regimes should assume heightened levels of precautionary and protective thinking. The other puzzle is whether informed consent is meaningful (or workable) where the properties of nanomaterials or processes are unknown. We can discuss each puzzle in turn before gathering together the principal policy questions and (depending upon how these questions are answered) sketching the salient features of the resulting regulatory landscape.

Precaution and Nanoproducts

If there is an outcry when a product such as “Magic Nano,” a product that is nano only in name, causes injury,36 we can expect there to be a considerable panic if and when the alarm is not false. Products will be recalled, product liability lawyers will be hired, and there will be demands for precautionary regulation. If calm is urged on the grounds that we do not really know the extent of the hazard or the scale of the risk, the precautionary principle will be pressed into service, precisely because of the uncertainty surrounding product safety. Indeed, anticipating such a scenario, some might advocate proactively acting on the precautionary principle. Why wait, it might be asked, until after a nanodisaster has happened?

As we have seen, the precautionary principle has been seriously wounded by its critics. Nevertheless, this is not to say that a precautionary approach of some kind should not be adopted, and, in a community of rights, that approach will be sensitive to the nature of the rights that are judged to be at risk. Where, then, does this leave us in the face of the development of potentially hazardous nanoproducts?37

A plausible starting point is the thought that, whilst a community of rights might be willing to forego the benefits of a raft of inessential consumer products (nanocoated tennis balls, golf clubs, and the like), it would be reluctant to forego the benefits that nanotechnologies promise to generate in such areas as health care and environmental improvement.38 For, relative to the needs of agents and relative to their rights, products of the latter kind are a higher priority. This suggests a spectrum of agent needs with, at one pole, those needs that are essential for basic human flourishing and, at the other, those needs that are inessential. Nanoproducts would be assessed relative to this scale. Nanoproducts that might be utilized to provide clean water would lie at the essential end of the spectrum; cosmetics would lie towards the inessential end.

With regard to products that lie at or near the essential end of the spectrum of agent needs, the priority is to encourage research and development together with a responsible approach by producers so that knowledge as to the type, scale, and extent of possible hazards and risks is made known.39 In such a context of responsible product development, there would be two important gains for consumers: First, the risk of dangerous products being put into circulation would be reduced, and second, the scope for a development risks defence would also be reduced (because relevant knowledge would be shared and circulated).

As for products that lie at or near the inessential end of the spectrum of agent needs, members of a community of rights would want to know which kind of rights are at risk. If such products represent a threat to life, regulators might apply a prohibition, or, at the very least, they might discourage circulation by excluding a development risks defence. Where the risk that is believed to be associated with a product is less serious (relative to the priority of rights), regulators might take a more relaxed approach. As we have seen, judgments of precaution cannot be coherently acted on in isolation; comparative judgments of proportionality also need to be made. In sum, the ethics of rights would indicate two sets of considerations: first, the importance of the product relative to the needs of agents, and second, the nature of the supposed risks relative to the hierarchical importance of the rights of agents.

This way of putting the matter, however, might not do full justice to the depth of uncertainty that is associated with the early development of nanotechnologies. Suppose that an honest assessment of the state of scientific and technical knowledge runs as follows: (1) there is some evidence that this product, incorporating such and such nanomaterials, might be a hazard to human health and (2) we know so little about the characteristics of these nanomaterials that we cannot rule out the possibility that this product might be hazardous in ways that we have neither considered nor contemplated. Ex hypothesi, there is no evidence whatsoever of hazards that have not been considered or contemplated, from which it follows that we cannot express the damage that needs to be written into the (see-saw) precautionary calculation. What we have here is a case of speculative ignorance.40 Precaution (qua judgments as to the acceptability of activities that are suspected, but not yet conclusively proved, to be risky in a particular way) is not geared for this kind of case, and a constructive risk management strategy, whether backed by utilitarian or human rights ethics, surely would focus on putting some resource into providing regulators with a stronger evidence base for their actions41 coupled, perhaps, with the establishment of a contingency fund for the suspected or unknown unknowns.

Informed Consent and Nanomedicine

A number of ethical concerns have been expressed about nanomedicine (and nanohealth care). For example, it has been said that nanosensors and the like might give rise to privacy violations, and there is a worry that there will be a difficulty in drawing clear lines between illness and disease, health and enhancement.42 Such questions, however, are hardly new: There is, surely, little more that can be said about privacy,43 and although ethicists have not yet spilt quite so much ink debating the ethics of enhancement, they are rapidly catching up.44

In its Opinion on the Ethical Aspects of Nanomedicine (European Group on Ethics in Science and New Technologies to the European Commission 2007), the European Group on Ethics in Science and New Technologies (the EGE) casts its net widely, identifying the following ethical questions relating to the development of nanomedicine:

How should the dignity of people participating in nanomedicine research trials be respected? How can we protect the fundamental rights of citizens that may be exposed to free particles in the environment? How can we promote responsible use of nanomedicine which protects both human health and the environment? And what are the specific ethics issues, such as justice, solidarity and autonomy, that have to be considered in this scientific domain?45

The ethical backcloth against which the EGE identifies these issues—ranging from the European Charter of Fundamental Rights (the Nice Charter) through to the UNESCO Universal Declaration on Bioethics and Human Rights 2005—is not so different to what one might expect in a community of rights. But how would such rights considerations be applied?

We can focus on the particular question of how we can respect the dignity of participants in nanomedical research trials (or, similarly, patients in clinical settings). The novel point is whether we can design adequate protocols for informed consent where we are operating, as we are with nanotechnology, in a context of extreme uncertainty. As the EGE puts the matter:

The requirement for informed consent is of crucial importance in both medical research and health care. But both the lack of knowledge and the uncertainties that exist [with regard to the biomedical applications of nanotechnology] create problems for the attempts to provide adequate and understandable information and [to] obtain consent….46

To be blunt, the puzzle is this: How can we inform agents about risks of which we are entirely unaware? For utilitarians, there might be a short answer to this question, namely that we cannot give such information, that we should try to minimize our fields of ignorance and uncertainty, but that, in the meantime, we should not agonize about the ethical implications of our inability to disclose what we do not know. If nanomedicine has harmful effects of which we are unaware, so the utilitarian might reason, some agents will be harmed, but it simply compounds the distress if we advert to the fact that we know that there might be risks that we have not yet identified—let alone that there might be risks that, in Rumsfeldian speak, we do not know about (Wilson 2006). However, such a robust view will not be accepted by rights ethicists, this being the constituency in which, as we have seen, consent is taken most seriously.

Unfortunately, the hard question posed by the EGE comes at a time when there is a certain amount of soul searching about the principle of informed consent and its application. The problem is that the translation of the principle into practice is so routinized that it rarely results in better informed decision making and invariably tends towards ever longer information sheets. As Onora O’Neill has pointedly remarked, “[g]enuine consent is not a matter of overwhelming patients with information.”47 If the information sheets for nanomedical applications are to become even more extended, how is a rights-respecting community to respond?

Briefly, I suggest that we should start not with consent but with the framework regime of rights. There is an important difference between, on the one hand, the conditions that must be satisfied before a consent is to be treated as adequately informed and, on the other hand, an independent right to be informed.48 Generally, our focus should be on the latter and, this being so, we need to be quite clear about the purpose of the right to be informed, or the right to make an informed choice. Arguably, what matters here is not so much the quantity of information given but the quality of the information. If the right is crafted in the light of this distinction, in the general run of things, clinicians and researchers should be able to discharge their informational responsibilities by presenting the matter not only in a way that is intelligible to most agents but also in a way that adverts to the considerations that most agents would judge to be material. Exceptions might need to be made for agents who present with exceptional informational requirements (for example, who are exceptionally risk-averse), but, in the standard case, agents would know that there was some information that was not disclosed to them and that would not be disclosed unless they asked for it. The purpose of withholding such information would not be to trick patients or prospective participants into making a decision desired by the medical team or the researchers but, quite simply, to assist agents to make intelligent and defensible decisions. Moreover, if agents did not quite trust the standard arrangements, it would be open to them to ask for full and complete information.

If we apply this approach to the question posed by the EGE, the answer perhaps lies in distinguishing between two classes of case, namely (1) those cases where we have sufficient knowledge and experience to indicate to agents that we have a range of risks on the radar; we might not be sure about individual susceptibility to these risks, and the like, but we are reasonably confident that what we have on the radar is the extent of the possible adverse effects, and (2) those cases (for example, phase one clinical trials, where nanomedicines are for the first time applied to humans) where we have such limited knowledge and experience that we cannot be confident that the potential risks that we have on the radar reflect the extent of the possible adverse effects. Those who are doing their level best to discharge their informational obligations can never do more than exercise their good faith judgment as to the class of case they are dealing with. However, if they judge that a case falls within class 2, then the warning concerning the limits of our knowledge and experience needs to be the first, not the last, thing that the prospectively consenting agent is told. Moreover, even if an agent has opted for the standard information package, this should not apply to class 2 cases; in other words, the choice between a standard or a comprehensive information package should be restricted to class 1 cases.
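Stated as a disclosure protocol (an illustrative reconstruction of the two-class proposal above, not a formulation that the EGE itself offers), the scheme might run:

\[
\text{disclosure}(x) =
\begin{cases}
\text{agent's choice of standard or comprehensive package} & x \in \text{class 1},\\
\text{lead warning on the limits of knowledge, then comprehensive package} & x \in \text{class 2},
\end{cases}
\]

where the classification of \(x\) is itself a matter of the good faith judgment of the clinicians or researchers concerned.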

If the right to make an informed choice were to be regulated in this way, would there be a danger that an agent, having consented under the standard arrangements, might then legitimately complain that the consent was deficient as under-informed? We can be sure that, somewhere in the withheld information, there would be an item that the agent might seize on as subjectively material. If this were a risk, defensive practice would be adopted, with doctors and researchers insisting on full and complete disclosure in every case. However, regulation would close off such a risk by providing that, where a consenting agent elects to proceed without full disclosure, this implies an assumption of responsibility with the result that the agent is precluded from presenting the consent as under-informed. In other words, in such circumstances, the consent would protect the beneficiaries (whether doctors or researchers) who, in good faith, have acted in reliance on it.

The Principal Policy Questions and the Resulting Regulatory Landscape

Gathering together the strands of the discussion, we can identify a number of policy questions—some large and general, others more particular and at the level of detail—to be addressed by a community of rights. In no case, though, will regulators operate in a principle-free zone; whatever the scale of the policy decision, the aspiration is to put in place a rights-sensitive, integrated, and coherent regulatory regime. We can start with a couple of large policy questions before focussing on more particular issues relating to nanoproducts and nanomedicines and then on a cluster of difficulties concerning regulatory arbitrage, regulatory tourism, and Internet supply.

Two Large Policy Questions

Perhaps the largest and certainly the first policy question is whether regulators should impose a prohibition or a moratorium on the development or use of any kind of nanotechnologies. If a community believed that it fully comprehended the risk profile for nanotechnologies, the question would be whether, all things considered, such a prohibition or moratorium would be appropriate and proportionate in the light of the known positives and negatives. Where, however, the community assumes that it does not fully comprehend the risk profile (where the community concedes that “for all it knows,” nanotechnologies might be hazardous or risky in ways that are beyond the community’s contemplation or comprehension), some might argue for a precautionary prohibition or moratorium simply to cover this “for all we know” contingency. Here, the question is whether, for precautionary reasons, regulators should impose a prohibition or a moratorium for fear that nanotechnologies might be threatening to the interests of rights holders in ways or on a scale that has not even been contemplated. Having reviewed the arguments, regulators must make a choice between (1) the imposition of a regulatory prohibition or a moratorium and (2) some degree of permission or licence for nanotechnologies. If the former choice is made, then the regulatory landscape will be dominated by the prohibition or the moratorium. If, however, the latter choice is made, then there is a second large policy question to be addressed, this being a question about the terms of the permission or licence.

Let us assume that a community of rights, despite taking the “for all we know” risk seriously, rejects prohibition and favours permission. What regulatory provision should be made for any injury, damage, loss, or adverse event that arises under the “for all we know” proviso? In other words, what regulatory provision should be made for harms that are simply beyond the bounds of our contemplation prior to the (uncontemplated) risk eventuating? The basic policy choices here are between (1) accepting a collective responsibility to cover the harm by setting up a contingency fund, or insurance, or the like and (2) rejecting any such responsibility. In a community of rights, compensation for injury to an agent is not to be equated with non-occurrence of the injurious act—compensation is a second best; it does not cure. Nevertheless, a commitment to collective responsibility for uncontemplated injury or harm would surely be a corollary of deciding to permit or license nanotechnologies, especially so where the community is not confident that it fully comprehends the risk profile for the technology. The details of such a contingency fund or insurance scheme would need further consideration. Here, it suffices to assume that a compensatory fund of some sort would be in place to cover uncontemplated harm.

From this point on, we can bracket off any concerns about the “for all we know” proviso. The community has placed its policy bets on this issue. From here on, we are dealing only with those risks and benefits that are reasonably contemplated, and judgments about acceptable risk will be made within these bounds. To those who object that judgments of this kind cannot coherently be made when the full extent of the risks remains unknown, the answer is that the community has already made regulatory provision for the uncontemplated and it must now proceed in the light of what it reasonably believes it does know about nanotechnologies. Accordingly, it is on these terms that we can consider how the more particular policy decisions would be made in relation to nanoproducts and nanomedicine.

Nanoproducts

In relation to nanoproducts, the first policy question is whether such products should be subjected to some ex ante regulatory review or whether they should simply be allowed to circulate subject to whatever ex post regulatory provision (in contract law, tort law, and product liability law) there is. Given what I have said about the community of rights’ view of the relationship between injury and compensation, it is inconceivable that there should be no ex ante regulatory control. To repeat, compensation for injuries caused by dangerous or unsafe products is a second best; it is far preferable that such injuries are prevented. Accordingly, in a community of rights, there would be a raft of ex ante regulatory controls applied to nanoproducts. Without the requisite regulatory clearance, nanoproducts should not go to market.

The next policy question concerns the nature of those ex ante regulatory controls. In particular, would the regulatory judgment be confined to a body of scientific and technical experts or would it also involve lay judgment? In a community of rights, scientific and technical experts hold no special brief to speak for rights holders; so there needs to be an inclusive process. As David Bazelon pointed out some years before technology went nano, although “[s]cientists are uniquely competent to address scientific/factual issues,” and although “science is elitist,” it does not follow that scientists have a special competence in relation to values; indeed, when it comes to value choices, “the opinions of scientists are entitled to no greater weight than those of the rest of us” (Bazelon 1977). Hence, experts should advise on their best guess as to the nature and probability of the apprehended risks, but the public should be fully engaged in characterizing which risks are material, which risks are acceptable, and, where there are conflicting rights involved, which priorities should be set. Echoing Bazelon’s thinking, Ronald Sandler and W. D. Kay argue (Sandler and Kay 2006):

[S]cience and industry experts have an important role to play….They are well positioned to see what is possible, what is feasible, and what is required to achieve certain economic and technological ends. They thereby play a crucial informational role. But knowledge of what can and cannot be done, and of what is and is not required to do it, is quite different from knowledge of what ought and ought not to be done. What ends should be prioritised, how resources should be allocated in pursuit of those ends, and constraints on how those ends ought to be pursued are ethical and social questions to be addressed in the public sphere, not economic and technological ones to be worked out in boardrooms or laboratories….So while scientists and industry leaders may be “elite” in their knowledge of the science and business of nanotechnology, this status does not imply that they are “elite” with respect to the [social and ethical] issues associated with nanotechnology…. (Sandler and Kay 2006, p. 679)

These sentiments resonate with the thinking in a report prepared for the Washington-based Project on Emerging Nanotechnologies (Davies 2006), according to which the public needs to be involved in two capacities: first, as citizens, members of the public are stakeholders in assessing the larger social and ethical risks associated with nanotechnology; and, second, as potential consumers of nanotechnology products, members of the public need to be able to make informed choices.

How might the regulators make use of the advice offered by their expert scientific and technical committees in conjunction with the views formed by their citizen or consumer panels? From the experts, regulators would have the best available evidence concerning the hazards and risks presented by nanomaterials and their particular applications, on the basis of which it might be possible to characterize each nanoproduct as high, medium, or low risk.49 From the citizen and consumer panels, regulators would know how the community articulates its needs, so that each product could be classified as serving essential or non-essential needs, and regulators would also have information about how the community assesses an acceptable risk. A priori, we would expect the easiest case for regulatory clearance to be that in which a nanoproduct serving an essential need is classed as low risk and, conversely, the easiest case for denial of clearance to be that in which a nanoproduct serving an inessential need is classed as high risk. Cases falling between these two poles would be more difficult. But this is all highly schematic; in practice, whilst some decisions would be fairly mechanical (especially as the characteristics of the technology became better understood), others would be more complex. At all events, the outcome of the process would be that each nanoproduct was either granted or denied regulatory clearance.
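Purely by way of illustration, the decision logic just sketched might be rendered as follows; the three risk classes, the two need classes, and the treatment of the intermediate cases are my own schematic assumptions, not settled regulatory criteria:

```python
# A schematic sketch of the ex ante clearance decision described above.
# The risk classes, need classes, and the rule for intermediate cases
# are illustrative assumptions only.

RISK_LEVELS = ("low", "medium", "high")        # advised by expert committees
NEED_CLASSES = ("essential", "non-essential")  # articulated by citizen/consumer panels

def clearance_decision(risk: str, need: str) -> str:
    """Return 'grant', 'deny', or 'deliberate' for a given nanoproduct."""
    assert risk in RISK_LEVELS and need in NEED_CLASSES
    if risk == "low" and need == "essential":
        return "grant"   # the easiest case for clearance
    if risk == "high" and need == "non-essential":
        return "deny"    # the easiest case for denial
    # Cases between the two poles cannot be decided mechanically; they call
    # for fuller deliberation against the community's rights commitments.
    return "deliberate"

print(clearance_decision("low", "essential"))       # grant
print(clearance_decision("high", "non-essential"))  # deny
print(clearance_decision("medium", "essential"))    # deliberate
```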

Where a nanoproduct is granted ex ante regulatory clearance, there remains the question of the terms on which it is put into circulation.50 Let us suppose that the product is classified as low risk and that it serves essential needs. If a wholly uncontemplated risk eventuates, this brings into play the “for all we know” contingency fund. However, if the risk that eventuates is of a kind that is within contemplation, a policy choice needs to be made about whether or how this risk attracts liability or insurance. In a case of this kind, it might be decided that, whilst product liability is inappropriate (the producer, after all, is serving essential needs), there should be some kind of collective insurance. This still leaves a great deal of regulatory business incomplete: decisions need to be taken about the allocation of risk in relation to, say, inessential low- or medium-risk nanoproducts, about the kind (and effect) of warning notices to be placed on consumer products, and so on. However, that is as far as I can take the matter in this paper.

Nanomedicines

The ex ante regulatory review track for nanomedicines, nanodrugs, and nanomedical devices involves three key stages: first, basic research is undertaken and a nanomedical development begins; second, the nanomedicine needs to be trialled in humans; and, third, it needs to be cleared for clinical application. Although this signifies a more complex regulatory process than that applied to nanoproducts, in a community of rights, we would expect the process to be inclusive in the way that we have already indicated for nanoproducts.

Where researchers have a nanomedicine that they think might be effective and sufficiently safe to try in humans, they would need approval for the trial. Of course, in European law, there is no shortage of regulation covering the conduct of research trials, the development of drugs and medical devices, and so on,51 but, in this paper, I am not concerned to critique existing arrangements.52 The regulatory body that decides whether or not a trial should proceed should include both medical experts and lay members. Once again, the principal role for the lay members would be to articulate the community’s priorities relative to the needs of agents. Potentially life-saving nanomedicines would have priority over medicines that respond to less urgent agent needs.

Where clearance is given for a trial, there remain a number of detailed questions about the terms on which the trial is conducted. Even if we assume that, as at present, participation must be voluntary and on an informed consent basis, this leaves open questions about the scope and nature of liability if there are adverse incidents. For example, should the researchers, having been given regulatory clearance for the trial, ever be liable to compensate participants who are injured? Does it matter whether the particular nanomedicine is a high or a low priority? What if participants are paid for their participation (their fee including “danger money”)?

All being well, where a nanomedicine goes through the trial stage successfully, it will be presented for a further round of regulatory clearance, this time for application in clinical practice. Once again, we might expect the decision to be made by a regulatory body that takes full account of both expert medical and lay views. Where regulatory clearance is given, we can expect that these novel nanomedicines will not be administered to patients except on an informed consent basis. I have indicated above how the clinicians’ informational responsibilities might be discharged—bearing in mind that the information given would need to be sensitive to the limits of our knowledge and understanding of the risks involved. However, this still leaves many issues of regulatory detail to be settled—in particular, concerning the scope and nature of liability and the significance, if any, of the application being in a publicly funded or a privately financed health care facility. Again, these are issues that I cannot pursue any further in this paper.

Regulatory Arbitrage, Regulatory Tourism, and the Internet

Where a particular community of rights adopts a regulatory position that is more restrictive than that taken beyond its national boundaries, it might face difficulties in holding the line. This is not a problem uniquely created by the prospective development of nanotechnologies. Far from it: in a globalizing world, national regulators have to contend with regulatory tourism and regulatory arbitrage, as well as with the availability of goods and services on line.

Regulators have a problem with tourism to the extent that consumers, with the necessary resources and determination, travel to places where the goods or services that are restricted at home may be lawfully procured. For example, reproductive and end-of-life services that are not lawfully available in one jurisdiction may be available in another. In the same way, recreational drugs may be procured in Dutch cafes, experimental stem cell treatments in the Far East, and so on. So long as such tourism is a trickle rather than a flood, the local (restrictive) regulatory position is not literally breached and its credibility is not wholly undermined. Nevertheless, over time, regulatory tourism weakens the local position. Moreover, where a state engages self-consciously in regulatory arbitrage, its intention being positively to attract business, the weakening of local restrictions is likely to be much more significant. So, for example, if a state creates a permissive regulatory environment for nanotechnological research and development with a view to attracting businesses that find themselves constrained by their local regulatory environments and if such businesses relocate (or threaten to relocate), then this creates a major problem for local regulators and their restrictive policy.

If anything, regulators face even more serious problems in controlling incoming goods and services. Indeed, with the development of the Internet, the old problem of policing one’s borders seemed to be overshadowed by the new problem of regulating cyberspace. If the seminal debate between the cyberlibertarians and the cyberpaternalists was partly about the legitimacy of regulating the traffic that flowed through cyberspace, it was at least as much about the possibility of controlling that traffic. Famously, David Johnson and David Post (Johnson and Post 1996)53 argued that nation states would have to fundamentally rethink their regulatory approach if they were to exert any control over the goods and services supplied on line and over Internet content more generally. To state the obvious, for Westphalian nation state sovereignty to prosper in the physical world, we need to have clearly marked national boundaries and flags of jurisdictional authority. In the cyberworld, we have no such markers. To be sure, there is a boundary between on-line and off-line environments, but, as Johnson and Post highlighted, once we enter the virtual world, whatever lines we have to cross, they bear no relationship to the lines of the physical world. In cyberspace, there is scope for private governance—arguably, too much scope54—but the writ of local public regulation does not run so well in the virtual world.

A decade or so on from the opening shots, we can scarcely dismiss cyberlibertarianism as a spent force, for the “Internet separatists,” as Joel Reidenberg (2005) calls them, seem to think that the Rule of Law and national rules of law simply do not apply to their on-line activities. Hence:

The defenses for hate, lies, drugs, sex, gambling, and stolen music are in essence that technology justifies the denial of personal jurisdiction, the rejection of an assertion of applicable law by a sovereign state, and the denial of the enforcement of decisions….In the face of these claims, legal systems engage in a rather conventional struggle to adapt existing regulatory standards to new technologies and the Internet. Yet, the underlying fight is a profound struggle against the very right of sovereign states to establish rules for online activity (Reidenberg 2005, pp. 1953–1954).

On the other hand, as Reidenberg notes, we should not underrate the resourcefulness of local regulators, including their willingness to become more robust in claiming jurisdiction and choice of law over extra-territorial servers (particularly where there is a technological link with local equipment).

Yet, how precisely are local regulators to enforce their local restrictions against nanoproducts or nanomedicines that are available via the Internet? First, it should be said that the range of options available to local regulators depends upon whether the political culture is authoritarian or liberal. If the culture is authoritarian, the state will control the Internet at all key layers and, in this context, it will be possible for local regulators to establish Chinese walls and filters that enable the state to determine the content that is available to users.55 Such, however, is far removed from the culture of a community of rights. In a community of rights, where the culture is more liberal, direct control of this kind is not feasible. Essentially, there seem to be three options. First, regulators can rely in an ad hoc fashion on measures that target relevant regulatees, or their assets, where such persons or their assets are physically within the jurisdiction—in the way, for example, that the French were able to act against Yahoo by targeting Yahoo’s assets in France. Second, they might enter into systematic cooperative arrangements with other local regulators. One such scheme for mutual aid has been devised by Lawrence Lessig (2006). Thus:

The pact would look like this. Each state would promise to enforce on servers within its jurisdiction the regulations of other states for citizens from those other states, in exchange for having its own regulations enforced in other jurisdictions. New York would require that servers within New York keep Minnesotans away from New York gambling servers, in exchange for Minnesota keeping New York citizens away from privacy-exploiting servers. Utah would keep EU citizens away from privacy-exploiting servers, in exchange for Europe keeping Utah citizens away from European gambling sites (Lessig 2006, p. 308).

This is an elegant exercise in reciprocity and, in principle, it should work as well with nanoproducts or nanomedicines as with any other on-line goods or services. The third option is to combat technology with technology. In this vein, Reidenberg remarks:

Technology empowers sovereign states with very potent electronic tools to enforce their policies and decisions even in the absence of a wrongdoer’s physical presence or tangible assets. States can use filters and packet interceptors as well as hacker tools like viruses and worms to enforce decisions and sanction malfeasance (Reidenberg 2005, p. 1963).

Whilst such measures might be effective, there are rather obvious issues of comity and legitimacy in this particular technological turn.
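Before leaving these options, the reciprocal structure of Lessig’s pact (the second option) can be made a little more concrete. The following is a minimal sketch that uses the jurisdictions and categories from Lessig’s own example; the data structure and function are invented for illustration only:

```python
# A minimal sketch of Lessig's reciprocal enforcement pact (Lessig 2006).
# The mapping of jurisdictions to restricted categories is illustrative.

PACT = {
    # each pact member lists the categories it restricts for its own citizens
    "New York": {"privacy-exploiting services"},
    "Minnesota": {"gambling services"},
    "Utah": {"gambling services"},
    "EU": {"privacy-exploiting services"},
}

def must_block(server_state: str, user_state: str, category: str) -> bool:
    """Under the pact, a server's home state enforces the *user's* home-state
    restrictions against servers within its own jurisdiction, in exchange
    for reciprocal enforcement elsewhere."""
    both_in_pact = server_state in PACT and user_state in PACT
    return both_in_pact and category in PACT[user_state]

# New York servers must keep Minnesotans away from gambling services...
assert must_block("New York", "Minnesota", "gambling services")
# ...and, reciprocally, Minnesota servers must keep New Yorkers away from
# privacy-exploiting services.
assert must_block("Minnesota", "New York", "privacy-exploiting services")
```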

This is not the place to take the discussion further.56 Suffice it to say that it is not enough for a community of rights to make its own policy decisions about nanotechnologies and then to construct a regulatory architecture that serves these decisions. To implement its policies effectively, regulators also need to have a strategy for dealing with the kind of externalities discussed in this sub-section of the paper.

Conclusion

I can draw together the principal conclusions arising from this paper in the following seven short points: First, subject to my reservation concerning a fundamental destabilization of the context for ethical reflection, I remain to be persuaded that any new technology, including nanotechnology, demands a rethinking of the formal matrix of ethical deliberation. The basic shells of ethical thinking feature goals, rights, and duties, and our judgments about nanoethics will be shaped by this formal template.

Second, even though the formal matrix might not be disturbed by nanotechnology, each technology invites a distinctive articulation of substantive ethical principles. Modern biotechnology elicited the bioethical triangle, with its distinctive dignitarian viewpoint; in response to nanotechnology, it is likely to be the treatment of intense uncertainty that is highlighted in our substantive ethical articulations. Judgments of precaution need to be more explicitly connected to judgments of proportion.

Third, nothing that I have said in the first two of these concluding remarks implies a complacent attitude towards nanoethics. Not for one moment do I suppose that we have an unproblematic nanoethical script.

Fourth, in the ideal–typical setting of a community of rights, policy decisions are both rational and principled. In such a community, regulators will make judgments about precaution, proportion, liability, and the like, by reference to their rights commitments. Moreover, judgments about the ex ante and the ex post regulatory architecture would not be taken in isolation from one another. The aspiration would be to put in place a regulatory regime that displays both an overall coherence and a sensitivity to the community’s rights commitments.

Fifth, the first policy question for a community of rights would be to determine its approach to the possibility that its apprehension of the risk profile for nanotechnologies is significantly incomplete. It might decide that a regulatory prohibition or moratorium is the rational response. But if it takes a permissive approach, it will make compensatory provision for the uncontemplated risks (if any) that eventuate once the development and application of nanotechnologies has been licensed.

Sixth, in such a community, at one level, regulation would be designed to encourage a proactive and responsible approach with a view to becoming better informed about nanohazards and nanorisks and, at another level, it would be geared to encouraging the production of those nanocommodities (nanoproducts and nanomedicines) that serve the essential needs of citizens.

Finally, in a community of rights, there would need to be some clarification of the principle of informed consent. Crucially, duties to inform in relation to nanomaterials would need to be shaped by a more sensitive characterisation of uncertainty. In clinical and research settings, if the working assumption is that we simply (and possibly significantly) do not know what we do not know, the extent of our uncertainty should be the headline warning to patients and trial participants.

Footnotes

1. In general, see Brent Blackwelder (2007). According to Wardak and Gorman (2006), there are already more than 200 nanotechnology-based products in the consumer marketplace, and, more recently, the Royal Commission on Environmental Protection (2008) puts the figure at more than 600.
2. See Lin (2007; for the view that US regulatory provisions are inadequate), Giorgia Guerra (2008; for the view that European Commission (EC) regulation does not fit very well with potential nanomedical applications), and, generally, Phelps (2007).
3. Brownsword (2008a). See, too, Brownsword (2008b).
4. Compare Holm (2005).
5. Of the emerging technologies, it is neurotechnology rather than nanotechnology that seems most likely to destabilize our self-understanding as agents; but nanotechnology (particularly in conditions of convergence) might destabilize the division between agent subjects and technological objects.
6. Reno v ACLU 521 US 844 (1997). For regulatory purposes, is the Internet to be viewed as akin to “a library, a telephone, a public park, a local bar, a shopping mall, a broadcast medium, a print medium, a medical clinic, a private living room, [or] a public educational institution[?]” See Biegel (2001).
7. Compare Dupuy (2007).
8. Compare Brownsword (2009).
9. See Weckert and Moor (2007).
10. See Yusuf (2007).
11. UNESCO Universal Declaration on Bioethics and Human Rights (adopted by acclamation on 19 October 2005 by the 33rd session of the General Conference), Article 1.
12. Compare Spinello (2006). Spinello employs three ethical approaches (utilitarian, contractarian rights-based, and Kantian duty-based), in conjunction with a post-Lessig range of regulatory options, to review four key issues, namely freedom of on-line expression, intellectual property in cyberspace, Internet privacy, and security.
13. See Klang and Murray (2005).
14. Compare Department of Biotechnology, Ministry of Science and Technology, Government of India, National Biotechnology Development Strategy (2006) at 23: “A precautionary, yet promotional approach should be adopted in employing transgenic R & D activities based on technological feasibility, socio-economic considerations and promotion of trade.”
15. See, further, Brownsword (2007a).
16. For an excellent discussion, see Stewart (2002).
17. Moreover, in societies where the trajectory is pro-technology, the precautionary principle is unlikely to be interpreted in a way that does more than slow down the pace of change: see the insightful analysis by van den Daele (2007).
18. OJ EPO 10/1992, 590.
19. For discussion of this issue in the context of supposedly “dangerous” offenders, see Bottoms and Brownsword (1983) and Bottoms and Brownsword (1982).
20. Compare the interesting discussion in Baylis and Fenton (2007).
21. Compare the analysis in Beyleveld and Brownsword (2001), especially Chapter 6.
22. For cultural conservatism masquerading as risk analysis, see again Wolfgang van den Daele (2007).
23. See Brownsword (2005a).
24. See, e.g., Marchant and Sylvester (2006).
25. For a more specific elaboration, see Pfizer [2002] ECR II-3305, at para 143: “a preventive measure cannot properly be based on a purely hypothetical approach to risk, founded on mere conjecture which has not been scientifically verified.” So, mere conjecture and hypothesis will not suffice. But a precautionary measure may apply where the risk “has not yet been fully demonstrated” (para 146). The underlying science must be consistent with principles of “excellence, transparency and independence” (para 172).
26. See, further, Beyleveld and Brownsword (2009).
27. According to Weckert and Moor (2007, p. 144), a plausible version of the precautionary principle provides as follows: “If an action A poses a credible threat P of causing some serious harm E, then apply an appropriate remedy R to reduce the possibility of E.” This version still leaves scope for interpretation in relation to the requirements of credibility and seriousness, and the qualifier “appropriate” cues in judgments of proportionality.
28. The best deep defence of the benchmark involves teasing out the implications of viewing ourselves as agents (with the capacity to engage in practical reason). Seminally, see Gewirth (1978) and Gewirth (1996).
29. See, further, Brownsword (2005b, 2006).
30. On will and interest theories of rights, see Hart (1973) and MacCormick (1977).
31. Compare Beyleveld and Brownsword (2006).
32. See, further, Brownsword (2004) and Beyleveld and Brownsword (2007).
33. Compare Beyleveld and Pattinson (2002).
34. See, further, e.g., Brownsword (2007b).
35. Compare Blackwelder (2007).
36. See Farrelly (2007).
37. Compare the helpful discussion in Weckert and Moor (2007), especially at pp. 140–141.
38. Compare Farrelly (2007), especially at pp. 220–222.
39. Compare Mandel (2008).
40. Compare Dupuy (2007).
41. This is a strong theme in the Royal Commission on Environmental Protection (2008).
42. See McHale (2008).
43. But see van den Hoven (2007).
44. See, e.g., Harris (2007) and Sandel (2007).
45. European Group on Ethics in Science and New Technologies to the European Commission (2007) at para 4.1.
46. European Group on Ethics in Science and New Technologies to the European Commission (2007) at para 5.7.
47. O’Neill (2003)—a point that is repeated in Manson and O’Neill (2007).
48. See Brownsword (2008a, Chapter 3).
49. Compare Mandel (2008).
50. Compare Noah (2000).
51. The focal legal provision for the conduct of research trials is Directive 2001/20/EC (the Clinical Trials Directive), but this is part of a larger network of regulation, including Directive 93/42/EEC (on medical devices), Directive 2001/83/EC (on medicinal products for human use), and, most recently, the Advanced Therapies Medicinal Products Regulation (1394/2007/EC).
52. This, of course, should not be taken as implying that I think that such an evaluation of existing regulation (and the associated processes of review and approval) is unimportant. For relevant discussions, see, e.g., van Calster and Bowman (2008) and Dorbeck-Jung (2008).
53. But, for a different assessment, see, e.g., Goldsmith (1998).
54. Notably, see Lessig (2001, 2006) and Reidenberg (1998).
55. For a helpful review, see Deibert and Villeneuve (2005).
56. See, further, Brownsword (2008a, Chapter 3).

References

  1. Baylis, F., & Fenton, A. (2007). Chimera research and stem cell therapies for human neurodegenerative disorders. Cambridge Quarterly of Healthcare Ethics, 16, 195.
  2. Bazelon, D. L. (1977). Coping with technology through the legal process. Cornell Law Review, 62, 817, at 826–827.
  3. Beyleveld, D., & Brownsword, R. (2001). Human dignity in bioethics and biolaw. Oxford: Oxford University Press.
  4. Beyleveld, D., & Brownsword, R. (2006). Principle, proceduralism and precaution in a community of rights. Ratio Juris, 19, 141.
  5. Beyleveld, D., & Brownsword, R. (2007). Consent in the law. Oxford: Hart.
  6. Beyleveld, D., & Brownsword, R. (2009). Complex technology, complex calculations: Uses and abuses of precautionary reasoning in law. In M. Duwell & P. Sollie (Eds.), Evaluating new technologies: Methodological problems for the ethical assessment of technological developments (pp. 175–190). New York: Springer.
  7. Beyleveld, D., & Pattinson, S. (2002). Horizontal applicability and direct effect. Law Quarterly Review, 118, 623.
  8. Biegel, S. (2001). Beyond our control? (p. 28). Cambridge, MA: MIT.
  9. Blackwelder, B. (2007). Nanotechnology jumps the gun: Nanoparticles in consumer products. In N. M. de S. Cameron & M. E. Mitchell (Eds.), Nanoscale (p. 71). Hoboken, NJ: Wiley.
  10. Bottoms, A. E., & Brownsword, R. (1982). The dangerousness debate after the Floud report. British Journal of Criminology, 22, 229–254.
  11. Bottoms, A. E., & Brownsword, R. (1983). Dangerousness and rights. In J. Hinton (Ed.), Dangerousness: Problems of assessment and prediction (pp. 9–22). London: Allen and Unwin.
  12. Brownsword, R. (2004). The cult of consent: Fixation and fallacy. King’s College Law Journal, 15, 223.
  13. Brownsword, R. (2005a). Biotechnology and rights: Where are we coming from and where are we going? In M. Klang & A. Murray (Eds.), Human rights in the digital age (p. 219). London: Cavendish.
  14. Brownsword, R. (2005b). Making people better and making better people: Bioethics and the regulation of stem cell research. Journal of Academic Legal Studies, 1, 3.
  15. Brownsword, R. (2006). Cloning, zoning and the harm principle. In S. A. M. McLean (Ed.), First do no harm (p. 527). Aldershot: Ashgate.
  16. Brownsword, R. (2007a). Ethical pluralism and the regulation of modern biotechnology. In F. Francioni (Ed.), The impact of biotechnologies on human rights (p. 45). Oxford: Hart.
  17. Brownsword, R. (2007b). The ancillary care responsibilities of researchers: Reasonable but not great expectations. Journal of Law, Medicine and Ethics, 35, 679–691.
  18. Brownsword, R. (2008a). Rights, regulation and the technological revolution. Oxford: Oxford University Press.
  19. Brownsword, R. (2008b). Regulating nanomedicine—the smallest of our concerns? Nanoethics, 2, 73–86.
  20. Brownsword, R. (2009). Regulating nanotechnologies: A matter of some uncertainty. Politeia (in press).
  21. Davies, J. C. (2006). Managing the effects of nanotechnology. Washington: Woodrow Wilson International Center for Scholars. Available at http://www.wilsoncenter.org/events/docs/Effectsnanotechfinal.pdf (last visited August 30, 2007).
  22. Deibert, R. J., & Villeneuve, N. (2005). Firewalls and power: An overview of global state censorship of the Internet. In M. Klang & A. Murray (Eds.), Human rights in the digital age (p. 111). London: Cavendish.
  23. Dorbeck-Jung, B. (2008). How can hybrid nanomedical products regulation cope with wicked governability problems? Presented at the TILT “Tilting Perspectives on Regulating Technologies” conference, University of Tilburg, December 10 and 11, 2008 (on file with author).
  24. Dupuy, J.-P. (2007). Complexity and uncertainty: A prudential approach to nanotechnology. In F. Allhoff, P. Lin, J. Moor & J. Weckert (Eds.), Nanoethics, Ch. 9 (p. 119). Hoboken, NJ: Wiley.
  25. European Group on Ethics in Science and New Technologies to the European Commission (2007). Opinion on the ethical aspects of nanomedicine (Opinion No 21).
  26. Farrelly, C. (2007). Deliberative democracy and nanotechnology. In F. Allhoff, P. Lin, J. Moor & J. Weckert (Eds.), Nanoethics (pp. 215–216). Hoboken, NJ: Wiley.
  27. Gewirth, A. (1978). Reason and morality. Chicago: University of Chicago Press.
  28. Gewirth, A. (1996). Community of rights. Chicago: University of Chicago Press.
  29. Goldsmith, J. (1998). Against cyberanarchy. University of Chicago Law Review, 65, 1199.
  30. Guerra, G. (2008). European regulatory issues in nanomedicine. Nanoethics, 2, 87.
  31. Harris, J. (2007). Enhancing evolution. Princeton: Princeton University Press.
  32. Hart, H. L. A. (1973). Bentham on legal rights. In A. W. B. Simpson (Ed.), Oxford essays in jurisprudence (second series) (p. 171). Oxford: Clarendon.
  33. Holm, S. (2005). Does nanotechnology require a new ‘nanoethics’? Wales: CCELS.
  34. Johnson, D. R., & Post, D. (1996). Law and borders—the rise of law in cyberspace. Stanford Law Review, 48, 1367.
  35. Klang, M., & Murray, A. (Eds.) (2005). Human rights in the digital age. London: Cavendish.
  36. Lessig, L. (2001). The future of ideas. New York: Vintage Books.
  37. Lessig, L. (2006). Code Version 2.0. New York: Basic Books.
  38. Lin, A. C. (2007). Size matters: Regulating nanotechnology. Harvard Environmental Law Review, 31, 349, at 361–374.
  39. MacCormick, D. N. (1977). Rights in legislation. In P. M. S. Hacker & J. Raz (Eds.), Law, morality, and society (p. 189). Oxford: Clarendon.
  40. Mandel, G. N. (2008). Nanotechnology governance. Alabama Law Review, 59, 1.
  41. Manson, N. C., & O’Neill, O. (2007). Rethinking informed consent in bioethics. Cambridge: Cambridge University Press.
  42. Marchant, G. E., & Sylvester, D. J. (2006). Transnational models for regulation of nanotechnology. Journal of Law, Medicine and Ethics, 34, 714–725.
  43. McHale, J. V. (2008). Nanomedicine—small particles, big issues: A new regulatory dawn for health care law and bioethics? In M. Freeman (Ed.), Law and bioethics (p. 376). Oxford: Oxford University Press.
  44. Moor, J. H. (2008). Why we need better ethics for emerging technologies. In J. van den Hoven & J. Weckert (Eds.), Information technology and moral philosophy (pp. 26–39). Cambridge: Cambridge University Press.
  45. NCB. (1999). Genetically modified crops: The ethical and social issues (p. 162). London: Nuffield Council on Bioethics.
  46. Noah, L. (2000). Rewarding regulatory compliance: The pursuit of symmetry in products liability. Georgetown Law Journal, 88, 2147.
  47. O’Neill, O. (2003). Some limits of informed consent. Journal of Medical Ethics, 29, 6.
  48. Phelps, T. A. (2007). The European approach to nanoregulation. In N. M. de S. Cameron & M. E. Mitchell (Eds.), Nanoscale (p. 189). Hoboken, NJ: Wiley.
  49. RCEP. (2008). Novel materials in the environment: The case of nanotechnology. London: Royal Commission on Environmental Protection.
  50. Reidenberg, J. R. (1998). Lex Informatica: The formulation of information policy rules through technology. Texas Law Review, 76, 553.
  51. Reidenberg, J. R. (2005). Technology and Internet jurisdiction. University of Pennsylvania Law Review, 153, 1951.
  52. Sandel, M. (2007). The case against perfection. Cambridge, MA: Harvard University Press.
  53. Sandler, R., & Kay, W. D. (2006). The national nanotechnology initiative and the social good. Journal of Law, Medicine and Ethics, 34, 675.
  54. Spinello, R. A. (2006). Cyberethics: Morality and law in cyberspace (3rd ed.). Sudbury, MA: Jones and Bartlett.
  55. Stewart, R. B. (2002). Environmental regulatory decision making under uncertainty. In T. Swanson (Ed.), An introduction to the law and economics of environmental policy: Issues in institutional design (p. 71). Amsterdam: Elsevier.
  56. Sunstein, C. (2005). Laws of fear. Cambridge: Cambridge University Press.
  57. ten Have, H. (2006). UNESCO and ethics of science and technology. In UNESCO (Ed.), Ethics of science and technology: Explorations of the frontiers of science and ethics (pp. 5–16, at 6). Paris: UNESCO.
  58. van Calster, G., & Bowman, D. (2008). Sufficient or deficient? A review of the adequacy of current EU legislative instruments for regulating nanotechnologies across three industry sectors. Presented at the TILT “Tilting Perspectives on Regulating Technologies” conference, University of Tilburg, December 10 and 11, 2008 (on file with author).
  59. van den Daele, W. (2007). Legal framework and political strategy in dealing with the risks of new technology: The two faces of the precautionary principle. In H. Somsen (Ed.), The regulatory challenge of biotechnology (p. 118). Cheltenham: Elgar.
  60. van den Hoven, J. (2007). Nanotechnology and privacy: Instructive case of RFID. In F. Allhoff, P. Lin, J. Moor & J. Weckert (Eds.), Nanoethics (p. 253). Hoboken, NJ: Wiley.
  61. Wardak, A., & Gorman, M. E. (2006). Using trading zones and life cycle analysis to understand nanotechnology regulation. Journal of Law, Medicine and Ethics, 34, 695.
  62. Weckert, J., & Moor, J. (2007). The precautionary principle in nanotechnology. In F. Allhoff, P. Lin, J. Moor & J. Weckert (Eds.), Nanoethics, Ch. 10. Hoboken, NJ: Wiley.
  63. Wilson, R. F. (2006). Nanotechnology: The challenge of regulating known unknowns. Journal of Law, Medicine and Ethics, 34, 704–713.
  64. Yusuf, A. A. (2007). UNESCO standard-setting activities on bioethics: Speak softly and carry a big stick. In F. Francioni (Ed.), Biotechnologies and international human rights (p. 85). Oxford: Hart.

Copyright information

© Springer Science+Business Media, LLC. 2009

Authors and Affiliations

King’s College London, London, UK
