Design for the Value of Regulation

  • Karen Yeung
Living reference work entry


Design has long been employed for regulatory purposes: by ancient civilisations (such as the ancient Egyptian practice of filling in burial shafts to discourage looting) through to contemporary communities (such as the use by digital media providers of ‘digital rights management’ technology to prevent the unauthorised copying of digital data). Identifying what counts as a ‘regulatory’ purpose, however, is not entirely straightforward, largely due to the notorious lack of clarity concerning the meaning of the term ‘regulation’. Within regulatory studies literature, the use of design for regulatory purposes has not been the subject of extensive and comprehensive analysis, although particular kinds of design technologies have been the focus of considerable scholarly attention. Nevertheless, two important themes can be identified within regulatory scholarship that may be of considerable assistance in interrogating contemporary debates concerning design for regulation: first, analysis of the tools or instruments that may be employed to implement regulatory policy goals, and secondly, debates concerning the legitimacy of regulation in particular contexts, or the legitimacy of particular forms or facets of the regulatory enterprise. Both these themes will be explored in this paper through a discussion of the challenges associated with the effectiveness of design-based approaches to regulation and in the course of examining some of the controversies that have surrounded its use, with a particular focus on the implications of design for various dimensions of responsibility.

In so doing, I will make three arguments. First, I will argue that design can be usefully understood as an instrument for implementing regulatory goals. Secondly, I will suggest that a regulatory perspective provides an illuminating lens for critically examining the intentional use of design to promote specific social outcomes by showing how such a perspective casts considerable light on its implications for political, moral and professional accountability and responsibility. Thirdly, I will suggest that, because design can be employed for regulatory purposes (particularly in the case of harm-mitigation technologies) without any need for external behavioural change on the part of human actors, Julia Black’s definition of regulation as ‘a process involving the sustained and focused attempt to alter the behaviour of others according to defined standards or purposes with the intention of producing a broadly defined outcome or outcomes’ should be refined to enable all design-based instruments and techniques to fall within the sphere of regulatory inquiry, rather than being confined only to those design-based approaches that intentionally seek to alter the behaviour of others.


Keywords: Regulatory instruments · Tools of government · Accountability · Responsibility · Regulation


Design has long been employed for regulatory purposes: by ancient civilizations (such as the design of Egyptian pyramids blocking burial shafts in order to discourage looters) through to contemporary communities (such as the use by digital media providers of “digital rights management” technology to prevent the unauthorized copying of digital data).1 Identifying what counts as a “regulatory” purpose, however, is not entirely straightforward, largely due to the notorious lack of clarity concerning the meaning of the term “regulation.” Suggested definitions range from narrow understandings of regulation as the promulgation of legal rules by the state, enforced by a public agency, to extremely wide-ranging definitions which regard regulation as including all social mechanisms which control or influence behavior from whatever source, whether intentional or not.2 Nonetheless, many scholars have increasingly adopted the definition of regulation proposed by leading regulatory theorist Julia Black as “a process involving the sustained and focused attempt to alter the behaviour of others according to defined standards or purposes with the intention of producing a broadly defined outcome or outcomes.”3 This definition captures the essential quality of regulation as systematic control and avoids a state-centric approach. Hence, it encompasses attempts by non-state institutions to shape social outcomes for defined purposes but is not so broad as to embrace the entire field of social science, thereby rendering regulation a relatively meaningless category.4 At the same time, defining regulation in terms of intentional action aimed at affecting others provides the trigger for a plethora of concerns about its legitimacy, and it is this focus on intentionality which distinguishes regulatory scholarship from that of Science and Technology Studies (STS) scholarship, which has long identified the ways in which artifacts can have social and political effects.5

Given the importance and ubiquity of regulation as a permanent feature of the governance of contemporary democratic economies,6 it is hardly surprising that the field of “regulation” (or “regulatory governance” and its broader counterpart “governance”) has become an established focus of scholarly analysis, drawing from a wide range of disciplinary orientations, including law, economics, political science, criminology, sociology, organizational theory, management studies, and other related social sciences.7 Some scholars usefully portray and analyze regulation as a cybernetic process involving three core components that form the basis of any control system – i.e., ways of gathering information (“information-gathering”); ways of setting standards, goals, or targets (“standard-setting”); and ways of changing behavior to meet the standards or targets (“behavior modification”).8 Although design or technology can be employed at both the information-gathering (e.g., the use of CCTV cameras to monitor behavior) and behavior modification (e.g., offering candy to children to encourage them to act in desired ways) phases of the regulatory process, it is the embedding of standards into design at the standard-setting stage in order to foster social outcomes deemed desirable (such as the incorporation of seat belts into motor vehicles to reduce the risk of injury to vehicle occupants arising from accidents and collisions), or to prevent or inhibit conduct or social outcomes deemed undesirable, that distinguishes design-based regulation from the use of technology to facilitate regulatory purposes and processes more generally and which forms the focus of this paper.

Within regulatory studies literature, the use of design for regulatory purposes has not been the subject of extensive and comprehensive analysis, although particular kinds of design technologies have been the focus of considerable scholarly attention.9 Nevertheless, two important themes can be identified within regulatory scholarship that may be of considerable assistance in interrogating contemporary debates concerning design for regulation: first, analysis of the tools or instruments that may be employed to implement regulatory policy goals and, secondly, debates concerning the legitimacy of regulation in particular contexts or the legitimacy of particular forms or facets of the regulatory enterprise. Both these themes will be explored in this paper through a discussion of the challenges associated with the effectiveness of design-based approaches to regulation and in the course of examining some of the controversies that have surrounded its use, with a particular focus on the implications of design for various dimensions of responsibility. In so doing, I will make three arguments. First, I will argue that design can be usefully understood as an instrument for implementing regulatory goals. Secondly, I will suggest that a regulatory perspective provides an illuminating lens for critically examining the intentional use of design to promote specific social outcomes by showing how such a perspective casts considerable light on its implications for political, moral, and professional accountability and responsibility. 
Thirdly, I will suggest that, because design can be employed for regulatory purposes (particularly in the case of harm mitigation technologies) without any need for external behavioral change on the part of human actors, Black’s definition of regulation should be refined to bring all design-based instruments and techniques within the sphere of regulatory inquiry, rather than being confined only to those design-based approaches that intentionally seek to alter the behavior of others.

Understanding Design as a Regulatory Instrument

A well-established strand of regulatory literature is concerned with understanding the various techniques or instruments through which attempts might be made to promote social policy goals, primarily through the well-known policy instruments of command, competition, communication, and consensus – all of which seek to alter the external conditions that influence an individual’s decision to act.10 Consider the following strategies that a state might adopt in seeking to reduce obesity in the developed world, which is regarded by some as an urgent social problem.11 It could enact laws prohibiting the manufacture and sale of any food or beverage that exceeds a specified fat or sugar level (“command”)12; impose a tax on high-fat and high-sugar food products (“competition”)13; undertake public education campaigns to encourage healthy eating and regular exercise14 or attach obesity warning labels to high-fat and high-sugar foods (“communication”)15; or offer specified privileges or benefits to high-risk individuals who agree to participate in controlled diet and exercise programs (“consensus”).16 But in addition to all or any of the above strategies, a range of design-based (sometimes referred to as “code”-based or “architectural”) approaches might be adopted, some of which are discussed below by reference to the subject in which the design is embedded (the “design subject”).17

Design Subjects

It can be helpful to classify design-based approaches to regulation by reference to design subject. These categories are not watertight; many overlap, so that a given instrument might often be placed in more than one category. Design instruments might also readily be combined.

Designing Places and Spaces

When we think about design or architecture as a means for shaping behavior, we typically think of the ways in which places, spaces, and the external environment more generally may be designed to encourage certain behaviors while discouraging others. The crime prevention through environmental design (CPTED) approach to urban planning and design begins with the fundamental (and unsurprising) premise that our behavior is directly influenced by the environment we inhabit.18 Hence, speed bumps can be installed in roads to prompt drivers to slow down; city centers can be pedestrianized to encourage greater physical activity; dedicated cycle lanes can be created to encourage people to cycle, thereby promoting better health and reducing road congestion and air pollution from motor vehicles; and buildings can be designed with windows overlooking the street in order to increase the visibility of those passing along the street, discouraging crime and increasing the sense of security for street users and residents.

Designing Products and Processes

Design may also be embedded in products or domestic and/or industrial processes in order to alter user behavior or their social impact. Hence, cone-shaped paper cups provided at water coolers discourage users from leaving their empty cups lying around because they cannot stand up unsupported; automatic cutoff mechanisms can be installed in lawnmowers so that the motor runs only while pressure is applied to the switch, preventing the lawnmower from functioning unintentionally; and airbags can be fitted into motor vehicles to inflate on impact with another object in order to reduce the impact of the collision on vehicle occupants.19

Designing Biological Organisms

The examples referred to above involve designing artifacts and environments that we encounter in our daily lives. But design-based approaches can also be extended to the manipulation of biological organisms, from simple bacteria through to highly sophisticated life-forms, including plants, animals, and, of course, human beings. So, for example, in seeking to reduce obesity, artificial sweeteners (such as aspartame or saccharin) might be used instead of sugar in processed foods in order to reduce their calorific content; overweight individuals could be offered bariatric surgery in order to suppress their appetite and hence discourage food consumption; or anti-obesity medications (such as orlistat) might be provided to overweight individuals and others deemed to be at high risk of obesity.20

Designing plants: While genetically modifying crops bred for food production in order to reduce the risks of obesity for developed-world populations has not, to my knowledge, become a reality, the fortification of foods in order to enhance their nutritional value has a long pedigree. For example, niacin has been added to bread in the USA since the late 1930s, a practice credited with substantially reducing the incidence of pellagra (a vitamin deficiency disease which manifests in symptoms including skin lesions, diarrhea, hair loss, edema, and emotional and psychosensory disturbance and, over a period of years, is ultimately fatal).21 Plants can also be designed for a range of nonfood applications. For example, “pharming” involves genetically modifying plants and animals to produce substances which may be used as pharmaceuticals, generating what advocates claim is “an unprecedented opportunity to manufacture affordable modern medicines and make them available on a global scale.”22

Designing animals: Genetic engineering for food production is on the cusp of extending beyond the bioengineering of plants to more sophisticated life-forms, including genetically modified fish (notably, salmon) designed for accelerated growth.23 Several potential applications are also under investigation, including the introduction of genes to alter meat and milk composition to produce either leaner meat or enhanced antimicrobial properties of milk for newborn animals.24 Biological engineering also offers considerable potential for reducing the prevalence and spread of infectious diseases. For example, an Oxford-based firm (Oxitec) has developed a genetically modified mosquito which it hopes will significantly reduce the spread of mosquito-borne diseases such as dengue fever. Oxitec claims that these mosquitoes have already been released for testing in Brazil, Malaysia, and the Cayman Islands, with test results indicating that mosquito numbers can be greatly reduced in a few months.25 Similarly, genetically modified (transgenic) chickens that do not transmit avian influenza virus to other chickens with which they are in contact have been developed, offering the prospect of employing this technique to stop bird flu outbreaks spreading within poultry flocks and thereby reducing the risk of bird flu epidemics leading to new flu virus epidemics in the human population.26

Designing humans: Humans have a long history of seeking to interfere with their own biological processes and constitutions for a variety of social purposes. While the treatment of disease or its symptoms is clearly the primary motive for such interventions, there is no shortage of examples where technologies have been employed to alter human physiological function for various nonmedical purposes. Cosmetic surgery, now common in some developed economies, is perhaps the most well-known form of nontherapeutic surgical intervention through which individuals seek to “enhance” their physical appearance, including breast augmentation surgery, liposuction to remove fatty tissue, and skin tightening to reduce the appearance of wrinkles. Psychopharmacological approaches are also widely used to alter and lift mood and enhance mental cognition, particularly by students.27 Human bioengineering has also made significant advances: preimplantation genetic testing and diagnosis could potentially be used as the basis for selecting embryos which display predispositions towards specific behavioral traits, and gene therapy might potentially be used to alter behavior through the repair or replacement of genes or the placement of a working gene alongside a faulty one.28 Advances in mechanical and digital engineering technologies and techniques have enabled the development of bionic technologies through which organs or other body parts can be replaced by mechanical versions, either mimicking the original function very closely or even surpassing it. For example, the cochlear implant is already widely used, and the rapid development of nanotechnology opens up the possibility of using extraordinarily powerful yet exceptionally small computer chips to enhance organ functions, including certain kinds of brain function.29

Design Modalities

Design-based interventions can also be classified by reference to the mechanism through which they are intended to work. Consider again design-based mechanisms aimed at preventing and reducing obesity. First, the aim might be to change individual behavior by discouraging the purchasing and consumption of unhealthy foods and encouraging individuals to be more physically active. Thus product packaging could be designed to include ingredient lists and warning labels for foods with a high fat or salt content, and the installation of cycle lanes and pedestrianized city centers might seek to encourage greater physical activity. Product packaging can be understood as a form of “choice architecture,” referring to the layout and social context in which individuals are provided with choices concerning their behavior that can be deliberately designed to encourage individuals to prefer some choices over others. One particular form of choice architecture that has attracted widespread publicity is the so-called “nudge” technique advocated by Thaler and Sunstein.30 They define a nudge as “an aspect of choice architecture that alters people’s behaviour in a predictable way without forbidding any options or significantly changing their economic incentives.”31 An oft-cited example is the image of a fly etched into urinals at Schiphol Airport that is designed to “improve the aim” because users subconsciously tend to aim at the fly etching, reducing the risk of spillage and hence helping to maintain the cleanliness of the facilities. The effectiveness of nudge techniques is claimed to rest on laboratory findings in experimental psychology which demonstrate that individuals systematically fail to make rational decisions, resorting instead to intellectual shortcuts and other decision-making heuristics which often lead to suboptimal decisions. 
The idea underpinning nudge strategies is that these “cognitive defects” can be harnessed through the shaping of choice architecture in order to encourage behaviors and social outcomes deemed desirable by the architect. Default rules and standards are considered to be particularly effective strategies for shaping behavior, seeking to harness the human tendency to “do nothing” and opt for the status quo.32 For example, in the UK, Prime Minister David Cameron recently announced a policy initiative aimed at reducing the risks of children’s access to pornography by securing the agreement of the six major Internet service providers to activate Internet filters against selected categories of content (not just pornography) unless the account holder (who must be over 18 years of age) actively opts to change the default setting to unfiltered Internet provision.33

Alternatively, design-based approaches may operate primarily by seeking to prevent or reduce the probability of the occurrence of the undesired outcome; hence, it might in future be possible to use preimplantation genetic diagnosis and selection to exclude embryos that have genetic markers for low metabolism, thereby significantly reducing the risk of obesity faced by the resulting individual.34 Finally, design-based approaches might seek to mitigate the harm generated by the relevant activity. Hence, the use of low-calorie sweeteners as an alternative to sugar in manufactured food products enables consumers to continue consuming sweet-tasting carbonated drinks without the high sugar content of the sugar-sweetened variety. Some anti-obesity drugs work by blocking the breakdown of fat in the intestine, thereby reducing fat absorption, while others increase the body’s metabolism, thus theoretically generating weight reduction without the need for the individual to change his or her dietary habits.35

Each of these “modalities of design” (as I have termed them) employs different mechanical logics in attempting to elicit the intended regulatory outcome. Although design-based approaches which seek to alter individual behavior, or which seek to prevent undesired social outcomes, fit comfortably within Black’s definition of regulation, those which rely on harm mitigation do not because they need not generate any change to individuals’ behavior. It might nevertheless be possible to interpret Black’s definition in a way that would include such approaches. In particular, the relevant behavioral change which regulation seeks to elicit could extend beyond changes in the external behavior of individuals to include changes to the behavior and operation of an individual’s internal physiological functioning (such as diet pills which increase metabolism) or the behavior of material objects in relation to each other (such as the impact of moving objects on shatterproof glass) or between living organisms and material objects (such as a cycle helmet’s alteration of the impact of collision damage to the cyclist’s head). But a more intellectually honest, and hence in my view preferable, approach would involve refining Black’s original definition by removing the reference to behavioral change. 
Regulation would then be defined as “sustained and focused attempts intended to produce a broadly defined outcome or outcomes directed at a sphere of social activity according to defined standards or purposes that affect others.” This refined definition would manifest the three benefits identified by Black in support of her original definition by, first, allowing inclusion of the purposive activities undertaken by non-state actors to shape social outcomes; secondly, avoiding a definition that is so extraordinarily broad that it essentially encompasses the whole of social science scholarship; and thirdly, giving rise to the kinds of normative concerns about regulation and its legitimacy that have arisen in conjunction with the use of established techniques by those in authority to facilitate the achievement of regulatory goals, by focusing on the intentional use of authority to affect others.36 In this respect, it is worth emphasizing that the use of design-based techniques for self-regarding purposes by an individual (where, say, an overweight individual decides to embark on a course of diet pills) or by one individual for use by another (such as a doctor prescribing a course of diet pills to an individual patient in a clinical setting) does not amount to an attempt to regulate, because they aim to affect only one identifiable individual, rather than a group of individuals or organizations. However, if such measures were employed by, say, a public health agency in programmatic form (say by developing and implementing a nationwide program providing for the free supply and distribution of anti-obesity drugs to any person who met the criteria for eligibility), then this program would constitute a form of regulation. Defining regulation in terms of the targeting of groups or populations, rather than isolated individuals, is important because it is the exercise of authority over groups that lies at the foundation of concerns about regulatory legitimacy.37

Effectiveness, Rules, and Design

One of the most important issues raised within regulatory scholarship concerns the effectiveness of regulatory programs in achieving their intended outcome.38 This section discusses the quest for regulatory effectiveness through design-based approaches by drawing on insights arising from studies of the use of rules in their traditional linguistic form. In particular, a core challenge faced by regulators lies in seeking to devise appropriate standards or rules that will provide clear and useful guidance to those they seek to regulate.39 A rich and well-developed literature concerning the challenges associated with rules as guides for behavior demonstrates that traditional regulation in the form of a rule prohibiting specified activities backed by some kind of sanction for noncompliance (typically referred to as “command and control” regulation) can never be perfectly effective due to the inherent properties of rules.40 First, rules are generalized abstractions that group together particular instances or attributes of an object or occurrence to build up a definition or category that forms the operative basis of the rule. Because these generalizations are inevitably simplifications of complex events, objects, or behaviors and because they are selective, properties will sometimes be included in the rule that are irrelevant, and some relevant properties will be left out.41 Secondly, it is impossible to devise rules that cohere perfectly with their purpose; they will always be over-inclusive (catching situations that are irrelevant) or under-inclusive (failing to catch situations that ought to be included in order to secure the desired purpose). Even if there is a perfect causal match between the event and harm or regulatory goal, future events can develop in ways that the rule-maker has not, or could not have, anticipated, so that the rule ceases to be perfectly matched to its goal. 
Thirdly, in seeking to provide guidance to those subject to the rule, clarity and certainty in their content and application will be of considerable importance. Yet the clarity of a rule is not solely a product of the linguistic text. It is also dependent upon shared understandings among those applying the rule (regulators, regulatees, institutions responsible for resolving disputes about the application of those rules). In other words, rules will invariably have what the English jurist and legal philosopher H.L.A. Hart referred to as an “open texture,” recognizing that although there will be clear cases that fall inside or outside the scope of a given rule, there will inevitably be a “penumbra of uncertainty” concerning its application to particular cases.42

At first blush, the use of design-based regulatory approaches may offer the promise of avoiding many of the inherent limitations of linguistic rules that are in large part a product of the indeterminacy of language. Yet a moment’s reflection will soon reveal why design cannot deliver on this apparent promise. First, all design-based regulatory approaches rely on the embedding of standards into the fabric of the design target. Although the use of standards in linguistic form might be avoided through design-based approaches, the need for standards (and hence standard-setting) is not. Secondly, the problems of under- and over-inclusiveness remain whether rules take the form of linguistic constructs or of features of material objects, biological organisms, or the built environment. So, for example, public outdoor seating is increasingly designed to make it impossible or uncomfortable for users to lie horizontally, in order to discourage people from sleeping rough and thereby help promote equal access to and enjoyment of public parks and amenities. Hence, a parks authority might install individual outdoor chairs or stools in public parks, rather than traditional bench-style seating. Although the former kind of seating will successfully discourage individuals from lying down, there may well be situations in which the parks authority would have been willing to tolerate, or even encourage, an individual lying down in a particular instance, such as the injured jogger who sprains an ankle and seeks respite from her injury or a love-struck couple wishing to cozy up to each other to savor the park in full bloom, both of which might be regarded as activities that the parks authority, as regulator, would not wish to prevent.

Furthermore, standard-setting assumes even greater importance when design-based approaches to regulation are contemplated, particularly given the need to incorporate some kind of default standard into the design subject. The binary logic of technical design standards is not subject to the uncertainties arising from the inherent indeterminacy of language that plagues the use of linguistic rules. Nevertheless, some kind of default rule is needed to avoid operational failure or suspension in the event of unforeseen circumstances. For example, imagine that commercial passenger aircraft could be fitted with digital signal blocking devices to prevent passengers from using their mobile phones and other digital devices during aircraft takeoff and landing, in order to ensure that the aircraft’s communication systems are not interfered with during these crucial flight periods. Provision would need to be made for any unrecognized signal to be dealt with as either “permissible” (thereby allowing the signal to continue transmission) or as a “violation” (thereby automatically blocking transmission). Such a default standard avoids the need for human interpretation, thereby ensuring that a regulatory response will obtain for every situation. Yet it cannot ensure that the response will be aligned with the regulator’s underlying policy objectives in each and every case. If the default device is programmed to block any unrecognized signals, this might generate a minor inconvenience to those who find that their portable entertainment systems will not operate, but the consequences would be considerably more serious for a passenger suffering from Parkinson’s disease who relies upon deep brain stimulators for the treatment of his neuropathic pain and tremor control.43
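The force of this point can be made concrete with a short code sketch. This is a purely hypothetical illustration (the signal categories, the function, and its parameters are all invented for this example, not drawn from any real avionics system): it shows how, for every unrecognized case, the outcome is fixed entirely by the default the designer has embedded, with no room for human interpretation.

```python
# Hypothetical default-standard logic for a signal-blocking device.
# Signals the designer anticipated are classified explicitly; everything
# else falls through to the embedded default.

KNOWN_PERMITTED = {"aircraft_comms", "cabin_crew_intercom"}
KNOWN_BLOCKED = {"mobile_phone", "wifi_hotspot"}

def should_block(signal_type: str, default_block: bool = True) -> bool:
    """Return True if the device blocks this signal during takeoff/landing."""
    if signal_type in KNOWN_PERMITTED:
        return False
    if signal_type in KNOWN_BLOCKED:
        return True
    # Unforeseen signal: the embedded default standard decides the outcome.
    return default_block

# A default-block device silences a medical implant the designer never
# anticipated, illustrating the Parkinson's example in the text:
should_block("deep_brain_stimulator")  # True under the block-by-default setting
```

The design choice sits entirely in the `default_block` parameter: flipping it to permit-by-default would spare the implant but admit any genuinely interfering signal the designer failed to anticipate, which is precisely the trade-off between over- and under-inclusiveness discussed above.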

Unlike linguistic rules, design-based instruments can be self-executing so that once the standard embedded within the design object has been reached, the response is automatically administered, thereby forcing a particular action or set of actions to occur (hence, I refer to them as “action-forcing” designs).44 For example, digital locks automatically prevent the unauthorized copying of “locked” digital data – there is no need for an administrator or other official to administer the power of exclusion once the digital lock is in place. By contrast, the violation of linguistic rules (such as a “no parking” sign) cannot be sanctioned unless and until compliance with the rule is actively monitored and enforced. Not only does this require human personnel to undertake monitoring and enforcement action against suspected noncompliance but it also requires – at least in democratic societies – a set of enforcement institutions to oversee and administer the lawful and proper application of sanctions. Linguistic rules require interpretation, enforcement, and sanction through human interaction, in which a discrete set of factual circumstances must be interpreted and applied by human agents and an appropriate response identified and administered.
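The self-executing character of such “action-forcing” designs can be sketched in a few lines of code. This is a hypothetical illustration (the class, method, and key names are invented for this example and do not depict any real digital rights management scheme): the standard is embedded in the artifact itself, so an unauthorized copy is not detected and sanctioned after the fact but simply cannot occur.

```python
# Hypothetical sketch of an "action-forcing" digital lock: the rule
# enforces itself, with no inspector, court, or discretion involved.

class DigitalLock:
    def __init__(self, authorized_keys):
        self.authorized_keys = set(authorized_keys)

    def copy(self, data: bytes, key: str) -> bytes:
        # Violation is made impossible by design, not merely sanctionable.
        if key not in self.authorized_keys:
            raise PermissionError("copying blocked by design")
        return bytes(data)

lock = DigitalLock(authorized_keys={"licensed-user"})
lock.copy(b"track01", "licensed-user")   # an authorized copy succeeds
# lock.copy(b"track01", "someone-else")  # would raise PermissionError
```

Unlike a “no parking” sign, the lock also leaves no room for an official to waive enforcement in a deserving case, which is the cost of dispensing with human application of the rule.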

Because rule enforcement is a resource-intensive activity, many legal violations that might otherwise have been proved to the standards required by a court of law might nevertheless go unpunished, particularly those of a fairly minor nature including trivial road traffic violations. In this respect, design-based instruments that avoid the need for human enforcement (which are increasingly referred to as “autonomous technologies”) appear to offer a considerable advantage over their more traditional rule-based counterparts, obviating the need for human and institutional enforcement resources while offering consistent and immediate application. Yet socio-legal scholars have amply demonstrated that the sensitive and judicious exercise of discretion by enforcement officials serves a vital role, enabling regulatory rules to be applied in a manner that conforms with their underlying “spirit” or policy objective, rather than insisting on strict compliance where this is judged to be counterproductive.45 Hence, a parking inspector may exercise her discretion not to issue an infringement notice against a vehicle parked temporarily in a “no parking zone” to allow the driver to unload heavy items of furniture for the purpose of transferring them into the adjacent house when this does not seriously inhibit the free flow of traffic. In other words, within traditional rule-based regulatory regimes, inescapable problems of inclusiveness and determinacy that arise at the rule-setting stage can be addressed at the enforcement stage through sensitive interpretation and application. Although human involvement in the application of rules can be a source of inconsistency and error, it also provides the vehicle through which the limitations of rules can be overcome in concrete contexts.

Design-Based Regulation, Agency, and Responsibility

Another central theme arising in debates concerning the legitimacy of regulatory regimes focuses on the accountability of regulatory agencies and institutions. These concerns are rooted in the need for mechanisms through which those in positions of authority, whose decisions and activities have the power to affect others, can – at least within liberal democratic polities – be held appropriately accountable for their actions. Within the legal and political studies literature, the term “accountability” is often used as a synonym for many loosely defined political desiderata, such as good governance, transparency, equity, democracy, efficiency, responsiveness, responsibility, and integrity.46 Mark Bovens suggests that, broadly speaking, scholarly analyses of accountability adopt one of two rather different conceptions: accountability either as a virtue or virtuous behavior, or as a mechanism – a specific social relation that involves an obligation to explain and justify conduct owed by the actor (the accounter) to a forum, the account holder or accountee.47 It is this second sense of accountability that is typically the focus of discussion in debates about regulatory legitimacy. On this understanding, accountability usually involves not just the provision of information about performance but also the possibility of debate, of questions by the forum and answers by the actor, and eventually of judgment of the actor by the forum. Furthermore, judgment also implies the imposition of formal or informal sanctions on the actor in cases of poor or unacceptable performance, or rewards in cases of adequate or superior performance.48 So conceived, obligations of accountability can be understood as flowing from the position of responsibility which the accounter occupies in relation to the account holder.
These obligations extend beyond offering an explanation of one’s actions: they also require the accounter to take responsibility for the impact and effect of their decisions on others, including an obligation of responsiveness – to respond to the needs and demands of those to whom they are required to account, to make amends when things go wrong, and to make adjustments or changes to their proposed course of action when account holders so demand. Accountability can therefore be conceived as a product of the position of responsibility occupied by the accounter.49 Used in this sense, responsibility has a temporal element that looks in two directions.50 Notions of accountability and answerability look backwards to conduct and events of the past: what Peter Cane refers to as “historic responsibility.” In contrast, “prospective responsibilities” are future oriented, concerned with establishing obligations and duties, and are typically directed towards producing good outcomes (“productive responsibilities”) and preventing bad outcomes (“preventative responsibilities”).51 Regulators, like engineers, are typically understood as responsible in both senses, although the focus of much regulatory literature has been on the historic rather than the prospective responsibilities of regulatory officials. Moreover, notions of responsibility and accountability can be understood from a variety of perspectives, depending upon the particular dimension or kind of decision-making judgment required and considered salient: be it political, professional, financial, moral, scientific, administrative, or legal, to name but a few. In the following section, I shall consider some of the implications of design-based approaches to regulation for three different dimensions of responsibility: political, professional, and moral.
Although a varied range of concerns have been expressed in relation to each of these dimensions of responsibility, they are ultimately rooted in the account holders’ discretionary power to trade off competing considerations and the need to ensure that – at least in regulatory contexts – those to whom decision-making authority is entrusted should be held accountable and responsible for the consequences of their judgments.

Design-Based Regulation and Political Responsibility

The regulation of social and economic activity in the last three to four decades in many industrialized economies has been accompanied by the increasing popularity of the independent regulatory agency as an institutional form.52 Such agencies are typically established by statute and endowed with statutory powers, yet are expected to operate at arm’s length from the government rather than being subject to regular ministerial direction. Although this institutional form has a long history, it was the proliferation of utility regulators following the privatization of state-owned natural monopolies, and their subsequent regulation by independent regulatory agencies, that began to attract scholarly attention.53 A number of benefits are claimed for the use of independent agencies in carrying out regulatory functions: professionalism; operational autonomy; political insulation; flexibility to adapt to changing circumstances; continuity, and hence the capacity to adopt a long-term perspective rather than being subject to the vagaries of the electoral cycle; and policy expertise in highly complex spheres of activity.54 Yet they have also attracted considerable criticism, primarily on the basis that such agencies lack democratic legitimacy and are not adequately accountable for their decisions.55 Because their decisions have a differential impact on individual and group interests, and frequently require them to make trade-offs between competing values and principles, the decisions of regulatory agencies can be understood as having political dimensions, underlining the need for mechanisms to promote democratic accountability in regulatory decision-making.
Although regulatory agencies are typically subject to specific mechanisms of accountability, such as public reporting obligations to the parliament and accountability to the courts by way of judicial review of agency decision-making, those appointed to lead and manage such agencies are not elected, nor are they directly accountable to national legislatures or subject to direct ministerial control; hence, it is not surprising that complaints are frequently made that regulatory agency decisions lack democratic legitimacy.56

If political accountability is considerably weakened by the transfer of decision-making authority from democratically elected ministers to independent regulatory agencies when they employ traditional command-based approaches to regulation, there are reasons to believe that the use of design-based instruments exacerbates these weaknesses. Such concerns have been particularly potent in debates concerning the use of code-based approaches to regulating the Internet. Cyberscholar and constitutional lawyer Lawrence Lessig has famously claimed that, within cyberspace, “code is law,” observing how software code operates to restrict, channel, and otherwise control the behavior of Internet users.57 Two related sets of concerns have arisen, focused upon the potential for code-based regulation to undermine democratic accountability. First, it is claimed that, when employed by the state, code-based regulation may offend several constitutional principles: its operation may be opaque and difficult (if not impossible) to detect, thereby seriously undermining the transparency of regulatory policy; and this lack of transparency diminishes both the accountability of those responsible for installing and operating code-based controls and the extent to which affected individuals may participate in the setting of such controls before they are installed, or challenge or appeal against such policies after they have been imposed.58 As a result, authoritarian and libertarian governments alike can enforce their wills much more easily than they could through more traditional command-based approaches, yet without the knowledge, consent, or cooperation of those they govern.59 Secondly, when code-based approaches are employed by non-state actors in pursuit of private goals, particularly by extraordinarily powerful commercial entities such as Google, Amazon, and Facebook, this may subvert or override the legislatively authorized balance of values, profoundly altering the balance of power between governments and the governed.60

Similar kinds of concerns have been targeted at the use of “nudge” techniques to encourage individuals to behave in ways deemed desirable by the “nudger.” Although controversy over the legitimacy of such techniques has been wide-ranging, for present purposes it is concerns about the transparency of such techniques that are of particular salience.61 For example, open Internet campaigners have criticized (and ridiculed) the UK government’s default Internet filtering policy aimed at reducing children’s access to Internet porn, based on concerns about the consequent loss of transparency, accountability, and due process entailed by the policy, questioning whether the ISPs implementing the Internet filters would be responsible for incorrect blocks and financially liable to those suffering economic loss as a result.62 Even Thaler and Sunstein acknowledge that the nudge techniques they advocate can be likened to subliminal advertising, in that some nudges may be insidious, empowering governments to “manoeuvre people in its preferred directions, and at the same time provide officials with excellent tools by which to accomplish this task.”63 They therefore propose a form of John Rawls’s publicity principle as a limit on the legitimate use of nudges, prohibiting governments from adopting policies that they would not be able or willing to defend publicly on their own grounds.64 But judging from the response of the US and UK governments to the revelations of whistle-blower and former US intelligence contractor Edward Snowden, which disclosed the extent to which US and UK intelligence agencies have been monitoring digital and other forms of communications of their own citizens without their consent – both governments openly defending their actions with little or no apparent embarrassment – this proposed principle is unlikely to provide much of a safeguard.65

Design-Based Regulation and Professional Responsibility

Concerns that design-based approaches to regulation may entail the exercise of political judgment, involving trade-offs between conflicting values and interests, without effective mechanisms for ensuring that those exercising such judgments are held responsible and accountable for doing so, have direct parallels in professional contexts. In particular, debates about the appropriate use of design to avoid unintended errors by medical practitioners in the provision of healthcare highlight how using design to foster laudable goals, such as ensuring patient safety in the practice of medicine and public health provision, may have significant and potentially troubling implications for professional agency and responsibility.

Traditionally, professional and legal standards have been relied on to secure patient safety, based on an agent-centered approach to regulation. In particular, the regulation of the medical profession has historically taken the form of professional “self-regulation” in which an association of medical professionals established in corporate form seeks to exert control over individual doctors by controlling entry to the profession through a system of licensing for approved agents, largely leaving individual members to exercise their own agency in behaving well and complying with the association’s code of conduct.66 Such an approach relies heavily on individuals internalizing and cooperating with the collective norms of the professional group. The effectiveness of this mechanism has been the focus of powerful critiques. The most trenchant critiques express skepticism of the two major claims that underpin faith in professional agency: the expertise claim (doctors have specialist, distinctive knowledge and skills that are inscrutable to others) and the moral claim (doctors will reliably act in the interests of their patients and apply their expertise diligently to secure those interests).67 Thus, patient safety failures are seen as inevitable because the character, conscientiousness, competence, and good motives of individual agents cannot be satisfactorily relied on to ensure patient welfare, and the profession as a corporate body is incapable of ensuring that its members comply with the appropriate standards. Another, more sympathetic critique also regards reliance on individual agents as an ineffective means for ensuring patient safety but derives from rather different assumptions. 
It is based on recognition that humans inevitably make errors, so that not only is human agency ineffective as a means for avoiding or reducing error but relying on it is also unfair and unhelpful to doctors.68 It therefore advocates a focus on the conditions under which individuals work, emphasizing the systemic and environmental causes of error, with the aim of constructing defenses to avert or mitigate them.69 This gives rise to a “systems-based” approach, which seeks to prevent errors through careful system design rather than reliance on the competence and conscientiousness of individual agents.70 Hence, a systems-based approach to patient safety emphasizes the ways in which the architecture or design of healthcare settings can be “mistake-proofed,” thereby making it impossible, or considerably more difficult, for practitioners to cause harm.

One form of design-based mistake-proofing involves the use of action-forcing design. Wrong-route drug administration is a commonly used example of the kind of behavior that could be avoided by action-forcing design, and is often seen as an egregious patient safety error. Importantly, it typically occurs unintentionally: either through an error of planning (someone plans to give a patient a drug through the wrong route, without realizing that the route is wrong) or through an error of execution (someone does not plan to give the drug through the wrong route but accidentally does so). Both the action and the outcome are unintended – the results of lapses. A classic example is the administration of vincristine (a chemotherapy drug) via the intrathecal route (into the spine), which has very serious (usually fatal) consequences.71 Redesigning equipment so that a normal hypodermic syringe could no longer be connected to a spinal device would make it impossible inadvertently to administer the drug intrathecally. Such a solution is likely to be welcomed across various stakeholders and interests, in the same way as the redesign of anesthetic equipment to prevent nitrous oxide being administered instead of oxygen has enjoyed widespread legitimacy.
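The logic of such connector redesign can be illustrated with a small software analogy (all names here are hypothetical, and the sketch is mine rather than a description of any real device): by giving the intravenous and spinal routes mutually incompatible “connector” types, the wrong-route combination becomes unrepresentable, much as a redesigned spinal device would physically refuse a standard hypodermic syringe.

```python
class IVSyringe:
    """A standard hypodermic syringe intended for intravenous use."""


class SpinalSyringe:
    """A syringe whose connector fits only spinal (intrathecal) devices."""


class SpinalPort:
    """A spinal device that, by design, accepts only a SpinalSyringe."""

    def administer(self, syringe) -> str:
        if not isinstance(syringe, SpinalSyringe):
            # The "connectors" do not fit: the error is prevented by
            # design rather than by practitioner vigilance.
            raise TypeError("syringe does not fit the spinal port")
        return "administered intrathecally"


port = SpinalPort()
result = port.administer(SpinalSyringe())  # the intended route works

wrong_route_prevented = False
try:
    port.administer(IVSyringe())  # the lapse is simply impossible
except TypeError:
    wrong_route_prevented = True
```

Note that no competence or conscientiousness is demanded of the user: the design itself makes the planning error and the execution error equally impossible.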

But the simplicity, appeal, and likely effectiveness of an action-forcing design solution conceal underlying ethical controversy in the healthcare context, where definitions of risk, morality, and error are often highly contested and where professional agency has traditionally had an important role. Wrong-route drug administration is universally regarded as a serious medical error. Yet in other circumstances, where there is contestation over what actions constitute an error, who should own the definition, and the conditions in which such actions should be prevented, design-based regulation becomes considerably more problematic. Consider, for example, the use of design-based approaches to prevent the reuse of medical devices in order to reduce the infection risk arising from ineffective sterilization or damage to reusable equipment. In many countries, official policy is that any device designated as single-use (“SUD”) must not be reused.72 On one view, any reuse of a SUD would be an error. Yet the reuse of medical devices labeled “single-use only” by manufacturers is highly contentious; there is little consensus on how far reuse constitutes a genuine safety risk for many devices. Some argue that, when appropriate precautions are taken, reuse is often justifiable for devices carrying a very low associated risk, and that there are good environmental and economic reasons for doing so.73 Practitioners may perceive that manufacturers are overcautious and self-serving in their instructions cautioning against reuse. Hence, they may intentionally reuse equipment even though they intend no harm, especially if this prevents waste and allows more patients to benefit from limited resources.74

Yet manufacturers have increasingly been designing single-use devices in such a way as to render them auto-disabling, thereby preventing reuse (e.g., single-use needles, self-blunting needles). Seen in the light of the considerable contestation about the legitimacy of reusing medical devices designated as single-use only, these manufacturing innovations take on a much more problematic guise. Rather than being a neutral, value-free intervention, action-forcing design imposes an action that favors one particular judgment about what constitutes an “error” and what constitutes “safe” medical practice. Unlike wrong-route drug administration, the lack of consensus about the reuse of single-use devices means that action-forcing design that prevents reuse may encode a technical notion of risk that appears objective but serves to obscure normative and programmatic commitments on the part of designers. Not only does this crowd out doctors’ professional discretion and accountability for making value judgments about the appropriate balance between patient safety, economic prudence, and environmental sustainability, but it may also serve to exclude stakeholder participation in the setting of standards and allow the penetration of commercial and other interests for which there is little transparency or public accountability.75
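The auto-disabling strategy can be sketched in the same illustrative style (again with hypothetical names, not any real product): the device encodes the manufacturer’s judgment that any reuse is an error, leaving no room for the practitioner’s own assessment of the risk.

```python
class SingleUseNeedle:
    """An auto-disabling device: the manufacturer's standard ("never
    reuse") is self-executing, whatever the practitioner judges."""

    def __init__(self):
        self._used = False

    def use(self) -> str:
        if self._used:
            # The contested value judgment is enforced by design.
            raise RuntimeError("device auto-disabled after first use")
        self._used = True
        return "dose delivered"


needle = SingleUseNeedle()
first_use = needle.use()

reuse_prevented = False
try:
    needle.use()  # even a carefully sterilized reuse is impossible
except RuntimeError:
    reuse_prevented = True
```

Where the wrong-route example above enjoys consensus, here the design forecloses a genuinely contested trade-off between safety, cost, and environmental concerns.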

Design-Based Regulation and Moral Responsibility

The need to exercise individual judgment in trading off competing values and concerns also has important moral analogues. Just as the turn to design-based approaches to regulation has significant implications for political and professional judgment and responsibility, it also raises potentially more profound implications for our understanding and practice of moral judgment and responsibility. Concerns that design-based approaches used to shape or channel social behavior may erode or otherwise undermine moral responsibility are evident in a range of literatures across various disciplines. For example, leading criminologist David Garland refers to “situational crime prevention” as a “set of recipes for steering and channelling behaviour in ways that reduce the occurrence of criminal events. Its project is to use situational stimuli to guide conduct towards lawful outcomes, preferably in ways that are unobtrusive and invisible to those whose conduct is affected.”76 While Garland explains the political appeal of these strategies to governments in terms of their offering a more immediate form of security to potential victims, one that can increasingly be commercialized through the involvement of private sector providers, Duff and Marshall worry that such techniques may express a lack of respect for individuals, implying that individuals are incapable of responding to appeals to moral reasoning or of exercising self-control and restraint.77 In a different but related vein, applied ethicists have raised concerns about the use of technological approaches to enhancing individual traits and capabilities, particularly when deployed collectively to promote nonmedical goals.
So, for example, Allen Buchanan argues that the quest for economic growth is likely to result in state support for the use of human enhancement technologies that improve industrial productivity, while advances in neuroscientific knowledge have provoked a resurgence of interest in “biological approaches to crime control.”78 Although these controversies are of relatively recent origin, they can also be understood as contemporary applications of much longer-standing controversies about the moral and ethical legitimacy of using design ostensibly to shape human progress and flourishing, such as the concerns surrounding the legitimacy of water fluoridation to reduce population-level tooth decay,79 state-sponsored vaccination programs to prevent and limit the spread of infectious disease,80 and state-sponsored eugenic programs aimed at breeding a superior species.81

Taken together with concerns about the potential for design-based approaches to undermine democratic responsibility,82 these apparently disparate critiques reflect a common set of anxieties: that the use of design-based approaches for influencing human affairs could threaten the moral and social foundations in which individual freedom, autonomy, and responsibility are anchored. These foundational concerns have been alluded to by legal scholars Roger Brownsword and Ian Kerr, both of whom fear that, when used on a cumulative and systematic basis, such approaches may fatally jeopardize the social foundations upon which moral community rests. Hence, Brownsword fears that the use of action-forcing design (which he terms “techno-regulation”) entails not merely a loss of moral responsibility but, in a direct and unmediated way, its exclusion, because individuals who are forced by the designed environment that they inhabit to act in particular ways can no longer be regarded as morally responsible.83 Because action-forcing design deprives agents of the opportunity to choose how to behave, it deprives them of the opportunity to exercise moral judgment. Similarly, Ian Kerr foreshadows the potential social consequences of a more generalized strategy that relies upon action-forcing technologies (which he refers to as “digital locks”). For him, such approaches may stultify our moral development by eliminating the possibility of moral deliberation about certain kinds of action, yet leave “no room for forgiveness.”84 He therefore fears that “a successful, state-sanctioned, generalized deployment of digital locks actually impedes the development of moral character by impairing people’s ability to develop virtuous dispositions, thereby diminishing our well-being and ultimately undermining human flourishing.”85

Although Brownsword’s concerns that action-forcing technologies eliminate moral agency are, in my view, overstated – at least in circumstances where agents have an adequate range of alternatives to act in ways that they consider morally right or wrong – nevertheless both Brownsword and Kerr are rightly fearful of the implications for our moral foundations of a systematic shift in favor of such measures for implementing public policies. Such a shift is considerably more likely to arise incrementally and cumulatively, rather than through a single highly visible change in regulatory approach at a discrete point in time, and hence much more likely to escape public notice. Not only do we need to reflect carefully on the moral risks posed by particular design-based regulatory technologies considered in isolation, but particular vigilance is needed in attending to the systemic moral risks associated with design-based regulation, including the articulation of an analytical framework that can assist in conceptualizing, analyzing, and debating the collective shift towards design-based approaches to regulation.86


This paper has shown how design can be employed as an instrument of regulatory control, used intentionally by state and non-state actors in particular contexts for the purposes of producing broadly defined outcomes which affect others. Because design can be employed for regulatory purposes without necessarily seeking to elicit a change in the external behavior of others, particularly in the case of harm-mitigation technologies, I have suggested that Julia Black’s definition of regulation should be refined in a way that will allow such design-based approaches to be included within the regulatory scholar’s field of vision. Drawing upon two significant themes and literatures within regulatory scholarship, the first concerning regulatory tools and instruments and the second concerned with the accountability and legitimacy of regulatory agencies, I have demonstrated how a regulatory perspective can illuminate important ethical debates that may arise when design is employed for regulatory purposes. For regulatory authorities, the attractions of design lie in its self-enforcing capacity, which avoids both the expense of enforcement and the potential for the improper exercise of authority by individuals entrusted with the task of enforcing regulatory rules, while securing the swift and effective achievement of regulatory goals. Where there is strong consensus about the kinds of behaviors and activities considered undesirable, appropriately formulated design-based interventions may deliver considerable benefits and command widespread acceptance by regulators and those they regulate.

But even where such consensus exists, I have shown how difficulties associated with the setting of standards in traditional linguistic, rule-based form are likely to be exacerbated, rather than diminished, by the incorporation of regulatory standards into the fabric of design, at least where circumstances arise that designers have not contemplated. Nor are attempts to shape social outcomes through design, rather than through more traditional policy approaches, likely to overcome or avoid controversies associated with the accountability and responsibility of regulators. Rather, because design-based approaches to regulation seek to encode standards into the fabric of design, some mechanism may be needed to resolve the inevitable trade-offs between conflicting values and interests in concrete contexts. Design makes it possible to encourage or compel actions or outcomes deemed desirable by regulators in ways that both obscure and deepen concerns about their political accountability. When used to guide and shape professional judgment, including clinical decision-making by doctors, design-based approaches to promoting “good” medical practice can be controversial, at least where there is a lack of consensus about what constitutes “good” clinical practice yet the design operates to preclude certain kinds of activities (such as the reuse of single-use medical devices); such approaches may both complicate and erode the professional accountability of clinicians. Finally, and perhaps most worryingly, the use of design-based approaches to regulation has the potential to undermine moral responsibility and accountability, at least where the turn to such approaches becomes so systematic and routine that it significantly erodes the extent to which individual agents are left free to make their own moral judgments and act upon them accordingly.

A significant theme emerging in recent philosophy of technology literature focuses on the interface between responsibility and engineering, highlighting how engineering and technology increasingly shape the context of human actions, and therefore inform and influence how responsibility, in both its prospective and retrospective senses, is understood and distributed.87 In common with regulatory accountability scholarship, this literature reflects a shared concern that those who wield the power to trade off competing values in ways that may affect the rights, interests, and legitimate expectations of others should be held appropriately to account to those affected, enabling them to seek redress, appeal, or prompt reconsideration of past decisions or prospective policies in light of their feedback and experience. Just as scholars of engineering ethics have drawn attention to the trade-offs between values that may be involved in the engineering design process, so regulatory scholars have sought to identify and evaluate the extent to which, and the adequacy with which, regulators of all stripes, within a varied range of institutional and policy contexts, are held accountable and responsible for the way in which they have traded off conflicting values and interests in carrying out their regulatory duties. These debates are likely to intensify rather than subside as our technological knowledge and capacity continue to advance, opening up the possibility of more powerful, precise, and invasive regulatory design strategies than our forefathers could possibly have imagined.


  1. 1.

    Kerr and Bailey (2004), Ganley (2002)

  2. 2.

    Black (2001), Daintith (1997)

  3. 3.

    Black, ibid 142

  4. 4.

    Black, ibid

  5. 5.

    Winner (1980), Jelsma (2003), Akrich (1992)

  6. 6.

    There are few spheres of economic activity that are not subject to some form of regulatory oversight and control, and daily news programs rarely pass without some mention of a significant regulatory decision, proposed regulatory reform, or allegations of some regulatory failure or scandal. Instances of alleged regulatory failure have been prominent throughout the late twentieth and early twenty-first century, including food safety (BSE in the 1990s and early 2000s), oil platforms (Piper Alpha in 1990, Deepwater Horizon in 2010), nuclear safety (Fukushima in 2011), and financial markets (Barings in 1995, the financial crisis post 2008)

  7. 7.

    Baldwin et al. (2010)

  8. 8.

    Hood et al. (2001)

  9. 9.

    Some of these applications are referred to below

  10. 10.

    Morgan and Yeung (2007), Chapter 3

  11. 11.

    Wadden et al. (2002)

  12. 12.

    For example, New York City has banned the use of artificial trans fat in food service establishments in the city, with the aim of reducing the rate of heart disease (see Mello (2009))

  13.

    For example, Hungary has introduced a “fat tax” in an effort to combat obesity, and several US states have imposed an excise duty on sugar-sweetened beverages, partly motivated by a desire to combat obesity (see Cabrera Escobar et al. (2013))

  14.

    For example, in the UK, a national public awareness program, including a public education campaign exhorting people to eat at least five portions of fruit and vegetables a day (the “5 A Day” program), was launched in 2002 to raise awareness of the health benefits of fruit and vegetable consumption and to improve access to fruit and vegetables (see the program website (accessed on 25 Nov 2013))

  15.

    For example, mandatory food labelling requirements imposed by EU law are considered by the European Commission as a central plank of the EU’s obesity prevention strategy (see Garde (2007))

  16.

    For example, some US insurance companies and employers participate in wellness programs, pursuant to which employees are offered incentives in return for undertaking health-enhancing behaviors (see Mello and Rosenthal (2008))

  17.

    For a discussion of design-based approaches to regulation more generally, see Yeung (2008, 2016)

  18.

    Katyal (2002)

  19.

    Stier et al. (2007)

  20.

    Of course, if a state proposed to implement any of these strategies, it would raise serious concerns about their legitimacy, particularly in relation to individuals who did not consent to the intervention, but these issues are beyond the scope of this paper (see Yeung (2015) supra n 7)

  21.

    Sempos et al. (2000)

  22.

    Paul et al. (2011)

  23.

    Oke et al. (2013)

  24.

    The European Food Safety Authority Panel has issued guidance on the environmental risk assessment of genetically modified animals, which includes insects, birds, fish, farm animals, and pets (EFSA Panel on Genetically Modified Organisms (2013))

  25.

    See BBC News (2013)

  26.

    The Roslin Institute, University of Edinburgh (2013)

  27.

    See, for example, Harris (2012) and Farah et al. (2004)

  28.

    Nuffield Council on Bioethics (2002)

  29.

    Foster (2006)

  30.

    Thaler and Sunstein (2008)

  31.

    Ibid, 6

  32.

    Ibid, 93

  33.

    The Rt Honourable David Cameron MP (2013)

  34.

    The Independent (2013)

  35.

    Nanoscience is currently being developed with a view to understanding how nanostructures contribute to the properties of food, thereby enabling food producers to develop innovative ways of making similar products from different ingredients, for example, by removing most of the fat from ice cream without losing the smooth and creamy texture that consumers expect from that type of product (Ministerial Group on Nanotechnologies (2010))

  36.

    Black, above n 3

  37.

    Black (2008)

  38.

    Yeung (2004)

  39.

    Baldwin et al. (2012), Chapter 14

  40.

    See, for example, Baldwin (1995), Black (1997), Diver (1999), Schauer (1991)

  41.

    Black, ibid

  42.

    Hart (1961)

  43.

    Lyons (2011)

  44.

    For a discussion and critique of self-enforcement in the context of “tethered” digital appliances, see Zittrain (2007)

  45.

    See, for example, Hawkins (1984, 2002), Hutter (1997), Grabosky and Braithwaite (1985)

  46.

    Bovens (2010) and the literature cited therein

  47.

    Ibid, 949–951

  48.

  49.

    Gardner (2006)

  50.

    Cane (2002)

  51.

  52.

    Levi-Faur (2005)

  53.

    Black (2007)

  54.

    Levi-Faur, above n 51; Levy and Spiller (1996)

  55.

    See, for example, Graham (1998), Baldwin (1996), Yeung (2011a), Scott (2000)

  56.

  57.

    Lessig (1999), drawing on the insight provided by Joel Reidenberg (Reidenberg (1998))

  58.

    Citron (2008)

  59.

    Lessig, ibid

  60.

  61.

    See, for example, Bovens (2008), White (2010), Yeung (2012), Rizzo and Whitman (2009), Schlag (2010)

  62.

    The UK Open Rights Group, for example, argues that because this measure has been introduced without any legislative authority, the public appears to have no recourse when things go wrong, and there will be no one to pressurize (see the Open Rights Group Campaign (2013))

  63.

    Thaler and Sunstein, above n 30, 244

  64.

    Ibid, 244–245

  65.

    The disclosures by Edward Snowden and the responses of various heads of government have received very extensive media coverage. On the response of the US administration, see, for example, K Connolly “Barack Obama: NSA is not rifling through ordinary people’s emails,” The Guardian, London, 19 June 2013; on the response of the UK administration, see, for example, S Jenkins “Britain’s response to the surveillance scandal should ring every alarm bell,” The Guardian, 4 November 2013

  66.

    Rostain (2010)

  67.

    Friedson (1973)

  68.

    Merry and McCall Smith (2001)

  69.

    Reason (2000)

  70.

    Department of Health (2000), Kohn et al. (2000)

  71.

    For example, British teenager Wayne Jowett died in Nottingham, England, in 2001 following intrathecal administration of vincristine (Toft (2001))

  72.

    For example, Medicines and Healthcare Devices Regulatory Authority (2006)

  73.

    Kwayke et al. (2010)

  74.

    Smith et al. (2006); Dickson (1999)

  75.

    For a fuller analysis, see Yeung and Dixon-Woods (2010)

  76.

    Garland (2000)

  77.

    Duff and Marshall (2000)

  78.

    Raine (2013)

  79.

    Connett et al. (2010), Peckham (2012)

  80.

    Colgrove (2006)

  81.

    Romero-Bosch (2007)

  82.

  83.

    Brownsword (2006)

  84.

    Kerr (2010)

  85.

  86.

    For one suggested approach, drawing on common pool resource theory and the “tragedy of the commons,” see Yeung (2011b)

  87.

    Doorn and van de Poel (2012)


  1. Akrich M (1992) The description of technical objects. In: Bijker W, Law J (eds) Shaping technology. MIT Press, Cambridge, MA
  2. Baldwin R (1995) Rules and government, Oxford socio-legal studies. Clarendon, Oxford
  3. Baldwin R, Cave M, Lodge M (2010) Introduction: regulation – the field and the developing agenda. In: Baldwin R, Cave M, Lodge M (eds) The Oxford handbook of regulation. Oxford University Press, Oxford
  4. Baldwin R, Cave M, Lodge M (2012) Understanding regulation, 2nd edn. Oxford University Press, New York
  5. Baldwin R (1995) Rules and government. Oxford University Press, New York
  6. Black J (1997) Rules and regulators. Clarendon, Oxford
  7. Black J (2001) Decentring regulation: understanding the role of regulation and self-regulation in a ‘post-regulatory’ world. Curr Leg Probl 54:103
  8. Black J (2007) Tensions in the regulatory state. Public Law 58
  9. Black J (2008) Constructing and contesting legitimacy and accountability in polycentric regulatory regimes. Regul Govern 2(2):137–164
  10. Bovens L (2008) The ethics of Nudge. In: Grune-Yanoff T, Hansson SO (eds) Preference change: approaches from philosophy, economics and psychology. Springer, Dordrecht
  11. Bovens M (2010) Two concepts of accountability: accountability as a virtue and as a mechanism. West Eur Polit 33:946
  12. Brownsword R (2006) Code, control, and choice: why east is east and west is west. Legal Stud 25:1
  13. Cabrera Escobar MA et al (2013) Evidence that a tax on sugar sweetened beverages reduces the obesity rate: a meta-analysis. BMC Public Health 13:1072
  14. Cane P (2002) Responsibility in law and morality. Hart Publishing, Oxford and Portland, Oregon, 31
  15. Citron DK (2008) Technological due process. Wash Univ Law Rev 85:1249
  16. Colgrove J (2006) State of immunity: the politics of vaccination in twentieth century America. University of California Press, Berkeley
  17. Connett P, Beck J, Spedding Micklem H (2010) The case against fluoride: how hazardous waste ended up in our drinking water and the bad science and powerful politics that kept it there. Chelsea Green Publishing, White River Junction, VT
  18. Daintith T (1997) Regulation. In: Buxbaum R, Madl F (eds) International encyclopedia of comparative law, vol XVII, State and economy. JCB Mohr (Paul Siebeck), Tübingen
  19. Department of Health (2000) An organisation with a memory. Department of Health, London
  20. Dickson DE (1999) Rapid response to: controversy erupts over reuse of ‘single use’ medical devices
  21. Diver CS (1999) The optimal precision of administrative rules. In: Baldwin R, Hood C, Scott C (eds) A reader on regulation. Oxford University Press, New York
  22. Doorn N, van de Poel I (2012) Editor’s overview: moral responsibility in technology and engineering. Sci Eng Ethics 18:1
  23. Duff R, Marshall S (2000) Benefits, burdens and responsibilities: some ethical dimensions of situational crime prevention. In: von Hirsch A, Garland D, Wakefield A (eds) Ethical and social perspectives on situational crime prevention. Hart Publishing, Oxford, p 17
  24. EFSA Panel on Genetically Modified Organisms (2013) Guidance on the environmental risk assessment of genetically modified animals. EFSA J 11:3200
  25. Farah MJ et al (2004) Neurocognitive enhancement: what can we do and what should we do? Nat Rev Neurosci 5:421
  26. Foster KR (2006) Engineering the brain. In: Illes J (ed) Neuroethics. Oxford University Press, New York
  27. Friedson E (1973) Profession of medicine. The University of Chicago Press, Chicago
  28. Ganley P (2002) Access to the individual: digital rights management systems and the intersection of informational and decisional privacy interests. Int J Law Inf Technol 10:241
  29. Garde A (2007) The contribution of food labelling to the EU’s obesity prevention strategy. Eur Food Feed Law Rev 6:378
  30. Gardner J (2006) The mark of responsibility (with a postscript on accountability). In: Dowdle MW (ed) Public accountability. Cambridge University Press, New York
  31. Garland D (2000) Ideas, institutions and situational crime prevention. In: Garland D (ed) Ethical and social perspectives on situational crime prevention. Hart Publishing, Portland, Oregon, p 1
  32. Grabosky P, Braithwaite J (1985) Of manners gentle – enforcement strategies of Australian business regulatory agencies. Oxford University Press, Melbourne
  33. Graham C (1998) Is there a crisis in regulatory accountability? In: Baldwin R, Scott C, Hood C (eds) A reader on regulation. Oxford University Press, New York
  34. Harris J (2012) Chemical cognitive enhancement: is it unfair, unjust, discriminatory, or cheating for healthy adults to use smart drugs? In: Illes J, Sakakian BJ (eds) The Oxford handbook of neuroethics. Oxford University Press, New York
  35. Hart HLA (1961) The concept of law. Oxford University Press, New York, 128
  36. Hawkins K (1984) Environment and enforcement. Clarendon, New York
  37. Hawkins K (2002) Law as last resort. Oxford University Press, New York
  38. Medicines and Healthcare Devices Regulatory Authority (2006) Single use devices: implications and consequences of re-use. MHRA Device Bull DB2006(04)
  39. Hood C, Baldwin R, Rothstein H (2001) The government of risk. Oxford University Press, Oxford, 21
  40. Hutter B (1997) Compliance: regulation and environment, Oxford socio-legal studies. Clarendon Press, Oxford
  41. Jelsma J (2003) Innovating for sustainability: involving users, politics and technology. Innovation 16:103
  42. Katyal NK (2002) Architecture as crime control. Yale Law J 111:1039
  43. Kerr I (2010) Digital locks and the automation of virtue. In: Geist M (ed) From “radical extremism” to “balanced copyright”: Canadian copyright and the digital agenda. Irwin Law, Toronto
  44. Kerr I, Bailey J (2004) The implications of digital rights management for privacy and freedom of expression. J Inf Commun Ethics Soc 2:87
  45. Kohn LT et al (2000) To err is human: building a safer health system. National Academy Press, Washington, DC
  46. Kwayke G, Pronovost P, Makary M (2010) A call to go green in health care by reprocessing medical equipment. Acad Med 85:398
  47. Lessig L (1999) Code and other laws of cyberspace. Basic Books, New York
  48. Levi-Faur D (2005) The global diffusion of regulatory capitalism. Ann Am Acad Pol Soc Sci 598:12
  49. Levy B, Spiller PT (1996) Regulators, institutions and commitment. Cambridge University Press, Cambridge
  50. Lyons MK (2011) Deep brain stimulation: current and future clinical applications. Mayo Clin Proc 86:662
  51. Mello M (2009) New York City’s war on fat. N Engl J Med 360:2015
  52. Mello MM, Rosenthal MB (2008) Wellness programs and lifestyle discrimination – the legal limits. N Engl J Med 359:192
  53. Merry A, McCall Smith RA (2001) Errors, medicine and the law. Cambridge University Press, Cambridge
  54. Ministerial Group on Nanotechnologies (2010) UK nanotechnologies strategy: small technologies, great opportunities. London, 28
  55. Morgan B, Yeung K (2007) An introduction to law and regulation. Cambridge University Press, Cambridge
  56. BBC News (2013) Can genetically modified mosquitoes prevent disease in the US?
  57. Nuffield Council on Bioethics (2002) Genetics and human behaviour: the ethical context. Nuffield Council on Bioethics, London
  58. Oke KB et al (2013) Hybridization between genetically modified Atlantic salmon and wild brown trout reveals novel ecological interactions. Proc R Soc 280:20131047
  59. Open Rights Group (2013) Campaign ‘stop opt-out’ “adult” filtering. Accessed 25 Nov 2013
  60. Paul M et al (2011) Molecular pharming: future targets and aspirations. Hum Vaccin 7:375
  61. Peckham S (2012) Slaying sacred cows: is it time to pull the plug on water fluoridation? Crit Publ Health 22:159
  62. Raine A (2013) The anatomy of violence. Allen Lane, London, pp 329–373
  63. Reason J (2000) Human error: models and management. Brit Med J 320:768
  64. Reidenberg JR (1998) Lex informatica: the formulation of information policy rules through technology. Texas Law Rev 76:553
  65. Rizzo MJ, Whitman DG (2009) Little brother is watching you: new paternalism on the slippery slopes. Ariz Law Rev 51:685
  66. Romero-Bosch A (2007) Lessons in legal history – eugenics and genetics. Mich St J Med Law 1:89
  67. Rostain T (2010) Self-regulatory authority, markets, and the ideology of professionalism. In: Baldwin R, Lodge M, Cave M (eds) The Oxford handbook of regulation. Oxford University Press, New York
  68. Schauer FS (1991) Playing by the rules. Clarendon, Oxford
  69. Schlag P (2010) Nudge, choice architecture and libertarian paternalism. Mich Law Rev 108:913
  70. Scott C (2000) Accountability in the regulatory state. J Law Soc 27:38
  71. Sempos CT, Park YK, Barton CN, Vanderveen JE, Yetley EA (2000) Effectiveness of food fortification in the United States: the case of pellagra. Am J Publ Health 90:727
  72. Smith AF et al (2006) Adverse events in anaesthetic practice: qualitative study of definition, discussion and reporting. Brit J Anaesth 96:715
  73. Stier DD, Mercy JA, Kohn M (2007) Injury prevention. In: Goodman RA et al (eds) Law in public health practice. Oxford University Press, New York
  74. Thaler R, Sunstein C (2009) Nudge. Penguin Books, London
  75. The Independent (2013) It is a slow metabolism after all: Scientists discover obesity gene. Accessed 13 Nov 2013
  76. The Roslin Institute, University of Edinburgh (2013) ‘GM chickens that don’t transmit bird flu’. Accessed 13 Nov 2013
  77. The Rt Honourable David Cameron MP (2013) The internet and pornography: prime minister calls for action. Accessed 25 Nov 2013
  78. Toft B (2001) External inquiry into the adverse incident that occurred at Queen’s Medical Centre, Nottingham. Department of Health, London
  79. Wadden TA, Brownell KD, Foster GD (2002) Obesity: responding to the global epidemic. J Consult Clin Psychol 70:510
  80. White MD (2010) Behavioural law and economics: the assault on consent, will and dignity. In: Gaus G, Favor C, Lamont J (eds) Essays on philosophy, politics & economics: integration and common research projects. Stanford University Press, Stanford, California
  81. Winner L (1980) Do artifacts have politics? Daedalus 109:121
  82. Yeung K (2004) Securing compliance. Hart Publishing, Oxford
  83. Yeung K (2008) Towards an understanding of regulation by design. In: Brownsword R, Yeung K (eds) Regulating technologies. Hart Publishing, Oxford
  84. Yeung K (2011a) The regulatory state. In: Baldwin R, Cave M, Lodge M (eds) The Oxford handbook of regulation. Oxford University Press, Oxford
  85. Yeung K (2011b) Can we employ design-based regulation while avoiding brave new world? Law Innov Technol 3:1
  86. Yeung K (2012) Nudge as fudge. Mod Law Rev 75:122
  87. Yeung K (2016) Is design-based regulation legitimate? In: Brownsword R, Scotford E, Yeung K (eds) The Oxford handbook on the law and regulation of technology. Oxford University Press, Oxford, forthcoming
  88. Yeung K, Dixon-Woods M (2010) Design-based regulation and patient safety: a regulatory studies perspective. Soc Sci Med 71:502
  89. Zittrain J (2007) Tethered appliances, software as service, and perfect enforcement. In: Brownsword R, Yeung K (eds) Regulating technologies. Hart Publishing, Portland, Oregon

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  1. The Centre for Technology, Ethics, Law and Society (TELOS), The Dickson Poon School of Law, King’s College London, London, UK
