Whenever a novel technology is introduced, the stakeholders involved promise huge benefits for the future, but sometimes they get nervous. Will the public see it the same way? While many technologies have become an appreciated part of our daily lives, others, such as agricultural biotechnology, have met reluctance or rejection among the public. With a new technologyFootnote 1 such as synthetic biology,Footnote 2 the question many ask themselves is whether history will repeat itself, i.e. whether there will be a public controversy. Can we learn from past experiences in order to avoid a controversy in the future? Rather than assessing whether the comparison with past debates over biotechnology is substantiated, in this discussion paper I will argue that while comparisons may provide insights, the instrumental focus on ‘learning’ in order to ease technology introduction is misplaced and points to a skewed perception of the role of social scientists. To this end, I will briefly address (i) new converging technologies and their possible public perception; (ii) how nanotechnology has fared in comparison; (iii) some elements influencing public debate; (iv) the case of synthetic biology and (v) some possible topics of a future controversy. In the last part (vi), the role of social scientists will be addressed.

Converging technology perceptions

Over the last 50 years, a series of so-called key technologies such as nuclear power, information technology or biotechnology have been the focus of policy makers. Gaining a competitive advantage in these fields was said to be a precondition for any industrialised nation to stay on top. Today, a number of new ones such as nano- and cognitive technology have been added. Rather than replacing each other, they are said to converge and give rise to unforeseen novel technologies that may enable developments in various fields and deeply influence the way we live (Roco and Bainbridge 2002; Nordmann 2004). Synthetic biology has been considered to be such a ‘converging’ technology (de Vriend 2006).Footnote 3 It is part of modern biology, but other disciplines such as chemistry, computer science and engineering have added to its genesis and development. Apart from interdisciplinary research, the term convergence emphasises unprecedented progress in creating the next wave of key technologies. It is often associated with the idea of a race for competitive advantages involving several technologies at the same time.

Such a technological race does not always go undisputed. In the past, several key technologies such as nuclear power and some aspects of biotechnology have met criticism. The question for many stakeholders is whether new buzzwords such as ‘nano’ today and, possibly, ‘synbio’ in the future will be perceived as indicating something new or as denoting an extension of previous technologies (IRGC 2008), and which of the ‘mother’ technologies will determine their public perception. In fact, ‘convergence’ may have an additional meaning: European technology developers seem to converge in their fear that the public might react negatively. Concern over public acceptance is one of the few common features of these highly diverse fields. Since technology developers have a fundamental interest in preventing non-acceptance, and since there is obviously ample experience to learn from, social scientists have been asked (mostly under the umbrella of ELSI research) to investigate the societal consequences of and discourses over technologies, and thus to find out what went wrong with biotechnology in the past and what should be done in the future to avoid similar developments.

Predictions of the consequences of technologies are social constructs by their very nature and thus subject to debate. The history of such technology debates shows that there is no universal trigger for discontent (Bauer 1995); rather, some issues might render a technology more prone to criticism. Various types of risk carry different potentials to influence public perceptions (Slovic 1987). A particularly important source of concern is a potential health risk. A risk is most frightening, for example, if its source is both difficult to contain and invisible, as with radiation or ‘genes’, and if people cannot avoid it because the cause cannot be smelled, seen or heard. Particularly disturbing are differing expert opinions on the magnitude, impact or comparator of a risk, and on whether or not it is entirely new. These different accounts often go together with alleged interests of the experts involved in the assessments, or of those they speak on behalf of. Another factor is benefit distribution: if it is perceived as skewed, the technology gets scrutinised. With agricultural biotechnology, for example, consumer risks were attributed to modes of production that only benefited the producers, while economic arguments emphasising increases in competitiveness turned out not to be persuasive (Torgersen et al. 2002). If the prospects were portrayed as extremely promising, any suspicion of a hidden risk for human health and the environment was taken up with particular scrutiny (Bauer and Gaskell 2002).

Despite providing some insights into the mechanisms at work, experience so far has shown that controversies and their political consequences arise from local contingencies (Bernauer and Meins 2003) and thus remain hard to predict. Consequently, they can be considered unavoidable, which means that any attempt at preventing them pro-actively may be futile.

Nanotechnology, for example

Assessing the possibility of a future conflict over a novel technology is nevertheless tempting. One of the first questions is what to compare synthetic biology with. Agricultural biotechnology suggests itself as the proverbial bone of contention, but its single-issue character and its close link to food render it quite different. Nanotechnology, in contrast, is even broader in its technological basis and range of applications than synthetic biology. In fact, the term only provides a rhetorical umbrella over a bundle of technologies that deliberately handle matter on a very small scale (Schmid 2008). Potential applications are so variegated that any generalised statement on risks or benefits seems untenable. Despite technical links, comparing nanotechnology to synthetic biology on the basis of their intrinsic properties is therefore not very sensible. However, both belong to the set of converging technologies in the understanding outlined above: they are novel, assumed to become key enabling technologies, and expected to provoke concerns regarding public acceptance.

Nanotechnology is a more common term than synthetic biology, but it has not yet acquired a clear status. Grunwald and Fleischer (2007) identified four areas of possible discourses: apart from ‘classical’ risks to human health and the environment from materials (e.g. nano-particles), there are more speculative debates over the potential for ‘disruptive’ innovations (e.g. nanobots), a number of generic issues from enabling applications in different fields (e.g. privacy and RFID), and broader governance issues (e.g. trust and accountability) arising because nanotechnology might be considered a ‘risky’ technology. In public debateFootnote 4 so far, nano-particles have rhetorically stood in for the entire technology. As with biotechnology, health risks and governance issues gained most prominence here.

Several of the ‘contentious’ characteristics identified above can also be attributed to nano-particles. Experts assert that there may be risks not yet investigated, but their significance remains unclear. Apart from uncertainty over risks to human health, there is even more uncertainty over environmental impacts in the long run (Colvin 2003). As a consequence, insurance companies initially denied coverage. Part of their problem was that it was unclear what to compare nano-particles with, and which measures would be adequate to contain potential risks (Swiss Re 2004). Although some progress has been made, there is still no conclusive assessment. With respect to the distribution of benefits, consumers may take advantage of some materials, while others mainly offer opportunities for streamlining production processes without any benefit to the consumer. In addition, future benefits were oversold (Schmid 2008).

After 2000, some CSOsFootnote 5 began to address nano-particles. The Canadian ETC Group (mostly dealing with agricultural biotechnology issues) started a campaign on uncertain environmental and health effects. Considering the experience with biotechnology, technology developers imagined public opposition, particularly if ‘something happened’, i.e. a major incident occurred that could be attributed to artificial nano-particles. Consequently, nanotechnology became a playground for attempts to address future public opposition. Under the header of ‘what can we learn’, a main conclusion was to advocate research on health risks from nano-particles (European Commission 2005; Maynard 2006) and on their environmental properties. This should contribute to credible risk assessment and management, not only to prevent harm but also to contain outrage in case ‘something happened’: developers and authorities would be able to claim that they had acted responsibly. Apart from protection against harm, this responsibility argument was a main reason for research into risks from nano-particles (DEFRA 2005).

Irrespective of the (ir)reality of a health risk,Footnote 6 the feared public hostility towards nanotechnology does not seem imminent, though. Technology developers have been using ‘nano’ as a marketing asset even for products without any nano-content, which shows that the term conveys a positive image indicating the latest technological achievements in very different products. This image is not subject to a rational debate over pros and cons; rather, it emerges from, and addresses, fragmented perceptions in the public. The positive image is quite robust: in spring 2006, a German company ran into trouble with a household cleaning spray called ‘magic nano’ (which did not contain nano-particles). Consumers who accidentally inhaled the spray had to be hospitalised (Giftinformationszentrum Nord 2006). This was the sort of incident technology developers feared, regardless of the cause. However, the German media were less interested than those in the US and UK, and CSOs did not take up the issue, even before it was clear that no nano-particles were involved. If genetically modified organisms had been (said to be) involved, the outcome might have been quite different. Obviously, Germans did not easily take fright at nanotechnology, but this was not the result of a particularly precautionary way of introducing it. Consumer products containing nano-particles had been put on the market without any precautionary measures. As in many other cases, the technology had been deployed through the back door, and nobody had cared.

Factors influencing the debate

This puzzled some observers, but upon closer inspection a number of reasons emerge why nanotechnology, or nano-particles in this case, might have fared better in the public's mind than agricultural biotechnology. Compared to the 1990s, a shift in problem attention could have led to a general decline in the salience of environmental and technology issues over recent years (Eurobarometer 2005). One explanation frequently given is that pressure on individuals towards higher performance made people worry about other things. Another, more convincing, argument is that interest in environmental issues has been redirected to the more pressing issue of climate change. Although general attitudes towards contested technologies such as genetically modified food have not substantially changed over the years (Gaskell et al. 2006), extending these attitudes to a new item would require re-igniting past discourses on technological risk while other issues were to the fore.

Secondly, the technology sector might be more careful in marketing novel food products when it feels that acceptance is uncertain. For non-food products from nanotechnology already on the market, a lack of acceptance had obviously not been considered, in the light of the then positive image of ‘nano’. It is indicative that, in the meantime, companies have become very reluctant to say, upon request, whether some of their products contain nano-particles (A. Gazsó, pers. comm.). Apart from commercial secrecy over formulations, this can be interpreted as an indication that they have become nervous.

Thirdly, decision makers in many European countries might have reacted to the experiences with food controversies. They adopted new ways of reconciling demands from different actors in the presence of uncertainty over risks. Under the header of ‘governance’, they devised measures (rhetorically) incorporating stakeholders in the decision-making process and rendering them co-responsible for the outcome. The EU strategy on science and society (European Commission 2001) showed that at least talking about governance is considered important. In the same vein, an increasing number of scientists seem to embrace the need to consider the ethical, legal and social issues linked to the subject of their research.

Fourthly, since top-down PR approaches and ‘rational’ exercises in the public understanding of science and technology (PUS) have had little effect on the acceptance of contested technologies (Dierkes and von Grote 2000), more open, two-way public debates have officially been recommended as a prerequisite for enhancing the social embedding of a technology (European Commission 2004). Consequently, a frequently heard proposition was to enhance public debate over novel technologies such as nanotechnology (Meili 2006).

A public debate, however, is not easily elicited over something that is hard to understand and has brought few products to the market. Experiments have shown that in debates, people are interested in risks and benefits, even potential ones, if these appear salient to them (Wagner and Kronberger 2006). To induce a fruitful discussion, a debate must therefore be free to address whatever the participants think is relevant, including risks but also the interests or responsibilities of actors. This may have little to do with a risk being scientifically plausible or not. Triggering a ‘rational’ public debate on scientifically implausible risks is an oxymoron: what is salient and worth debating from a public point of view is often held to be implausible, and hence irrelevant, from a scientific standpoint. In addition, if any negative aspects came to light, a public debate could stain an initially positive image of a technology. With nanotechnology, there are more concerns about nano-particles among scientists and technology developers than among the public (Scheufele et al. 2007), and they realistically fear that a public debate would attach blame to the technology even if ‘nothing happens’.

Synthetic biology: the next wave?

According to the Synthetic Biology Community homepage, synthetic biology aims at “the design and construction of new biological parts, devices, and systems, and the re-design of existing, natural biological systems for useful purposes”.Footnote 7 This leaves traditional biotechnology far behind in scope; genetic engineering appears as mere handicraft in comparison. Synthetic biology promises to lay the foundation of a new industry, not unlike microelectronics decades ago (Endy 2005), or, at least, to become a significant part of the bio-economy to come (OECD 2009). Hence, the promises are no less ambitious than those made for nanotechnology.

Although few lay people have heard about it (Hart 2008), aims such as the construction of entire new genomes, new types of organisms or artificial forms of life with new genetic elements could trigger the lay public's suspicion that scientists have gone mad. Early on, the ETC Group took up the issue. Its first report on synthetic biology called the new approach ‘extreme genetic engineering’ or ‘GMOs on steroids’ (ETC Group 2007). The slogan alluded to old controversies over GM food and hormone (mis)use.

The scientific community dealt with this challenge by emulating the approaches taken decades earlier to mitigate the risks of genetic engineering. The allusion to the Asilomar conferences and the NIH guidelines of the 1970s was no coincidence; the motto was self-governance by scientists rather than state action. This was a foreseeable trigger for critics. In 2006, 38 CSOs signed the ETC Group's open letter demanding a societal debate on the socioeconomic, security, health, environmental and human rights implications. The second annual conference on synthetic biology, held in 2006 in California, addressed possible societal implications of synthetic biology more prominently, issuing a resolution on biosecurity and biosafety. Scientists called for more prudence and for anticipating potential risks and public unease (Maurer et al. 2006), but they abstained from addressing broader political and socioeconomic issues. In the following years, CSOs repeatedly attempted to broaden the view, while scientists successfully kept the focus on a restricted range of issues around biohazards.

Unlike at Asilomar, and much in line with contemporary issues in US mainstream discourses, most concerns related to biosecurity. Participants focussed on measures to prevent the potential intentional misuse of research results for sinister aims, especially terrorism. Adequate measures, accordingly, were seen in the self-control of the scientists and engineers involved as well as in the surveillance of research laboratories and of companies supplying DNA building blocks. Apart from screening for ‘dangerous’ DNA sequences and watch-lists for companies and individuals, the recommendations included a professional obligation to confidentially report the ‘dangerous behaviour’ of colleagues, a clearinghouse and more security research. The move towards self-regulation to prevent terrorist attacks was intended to pre-empt US Government intervention (Check 2006) and inevitably entailed secrecy and suspicion among colleagues, extending practices from biological warfare research to civilian issues. In a way, the resolution appeared to be a brainchild of mid-decade US preoccupations.

Initially, synthetic biology and its implications elicited rather little interest at the national level in most European member states, while EU research policy took up the issue (NEST 2005) and launched several projects not only on scientific but also on ethical, legal and social issues.Footnote 8 In contrast to the US view, many European scientists considered the prevention of risks from unanticipated consequences to be equally relevant (Schmidt 2006). With notable exceptions (Church 2005), leading US scientists had attributed such concerns to the ‘usual European scare-mongering’ (Schmidt, pers. comm.), while some Europeans had diagnosed ‘terrorism paranoia’ in the US.

At the third annual conference in Zurich in 2007,Footnote 9 societal aspects of synthetic biology, including intellectual property rights and ethics, gained more prominence, and the conference provided a (limited) stage for CSO views. The next conference, held in 2008 in Hong Kong,Footnote 10 followed along these lines, with the ETC Group organising a session on global societal impacts and inviting speakers from outside the scientific community to voice their concerns. Despite their primary dedication to scientific and technical issues, the SB 3.0 and 4.0 conferences thus provided some opportunities to address issues broader than safety and security, such as distributional equity and differing views of a desirable future.

In the meantime, a number of institutions dealing with policy analysis and ELSI research, such as The Woodrow Wilson InstituteFootnote 11 in the US or the Rathenau InstituutFootnote 12 in the Netherlands, had taken up the issue. Over time, national (Balmer and Martin 2008) and international research organisations (NEST 2005) and other scientific bodies (IRGC 2008) joined. The ‘Human Practices’ Thrust of SynBERC in the US tried to integrate research on societal aspects into a scientific-technical project in a novel way.Footnote 13 The Synthetic Society Working Group considers itself “a group of individuals who are working to directly address societal issues embedded and surrounding the emerging field of synthetic biology”.Footnote 14 By 2006, synthetic biology had arrived on the radar screen of technology assessment and the social studies of science and technology as a proverbial example of converging technologies, with all the implications of a new technology race. Immediately, the task was set to gauge its potential for raising concerns among the general public.

Possible topics of debate

For those reminded of the biotechnology controversy, synthetic biology offered several potential sources of public concern. The Rathenau Instituut (de Vriend 2006) highlighted a number of arguments in an effort to identify future issues of debate early on. Most of them refer to problems to be dealt with at an expert level, such as biosafety, biosecurity, intellectual property rights or particular ethical aspects. A pertinent question, depending on the definition, is whether synthetic biology is something new or a mere extension of genetic engineering with more powerful tools (IRGC 2008), the latter implying that existing regulation and methods of risk assessment with conventional criteria (properties of ‘donor’ and ‘acceptor’ organisms) are sufficient. Some voices warned that this might fail to properly establish safety, given the greater possibilities of synthetic biology (Rodemeyer 2009; Schmidt 2009). Currently, most members of the scientific community seem to consider existing rules still adequate and assessment criteria applicable (M. Schmidt, based on a series of interviews).Footnote 15 However, as with any rapidly evolving technology, the question is how long the current regulatory toolbox will remain applicable and sufficient. Regulatory amendments will probably become necessary, but when (in five, ten or more years) remains a matter of dispute.Footnote 16 While the technical problems of criteria and methodology will have to be discussed at an expert level, the implications of uncertainty over risks (alleged or not) may have repercussions with a critical public.

Viewing synthetic biology as a mere extension of genetic engineering could provide a hackneyed but easy anchor point for public attitudes. Preliminary results from media analysis (Seiringer and Cserer, this issue) and focus group research (Kronberger et al. 2009) in Austria, where the public has been, and still is, rather hostile to agricultural biotechnology, show that both journalists and lay people tend to perceive synthetic biology as fulfilling promises they had already ascribed to conventional genetic engineering. In other words, the exciting possibilities that researchers in synthetic biology keep stressing are somehow already in the public's mind, and the new technology only sets out to fill in existing beliefs. This may indicate a prolongation of the old debate on biotechnology; however, it could also open up another dimension: if the public considered, falsely or not, the achievements of synthetic biology not to be new, novel risks and points of criticism would go little noticed because they would be subsumed under the old paradigm. Synthetic biology would appear to be old wine in new bottles. Ironically, this may be a reason why a new controversy is less likely to arise: everything has already been said about genetically modified organisms, and there would be little interest in a new debate. For CSOs, campaigning on it would not raise additional interest beyond general biotech issues. And if attitudes in Europe turn out to grow slightly more positive, as the last Eurobarometer survey gives some indication (Gaskell et al. 2006), this would probably also pertain to synthetic biology.

If, in contrast, synthetic biology comes to be viewed as novel, two sets of problem framings enter the picture (Schmidt et al. 2008). On the one hand, supported by work such as the successful reconstruction of an ancient flu virus (Sharp 2005), its potential to cause harm might be considered much higher than that of ‘old’ biotechnology. The consequence would not only be that we needed more surveillance of, and awareness among, scientists in order to ensure biosecurity (Kelle 2007). It could also trigger a novel framing of synthetic biology as an issue of future warfare and terrorism and, hence, as an inherently evil technology. Whether such an image could be outweighed by the advantages of beneficial applications in medicine and energy production remains questionable. On the other hand, the opportunity to ‘create artificial life’ or a ‘second genesis’ (as a newspaper interview with leading scientists in synthetic biology put it)Footnote 17 may trigger ethical objections. The example of stem cell research has shown that ethical objections are by no means only an academic issue; rather, if they tap into strong religious convictions, they can generate societal dynamics that halt a technology.

In addition, differences between North American and Continental European understandings of the role of science in society may affect attitudes towards synthetic biology.Footnote 18 Since US scientists dominate the field, practices and attitudes as they emerged, for example, from the 2006 conference in California might sound alarming to European ears. The deliberate restriction to self-regulation as the acceptable way of dealing with potential problems may be normal in the US; in Europe, it may be taken as a concretisation of a ‘keep-it-secret-and-leave-it-to-the-experts’ approach. In previous technology debates, secretiveness and expert dominance have been suspected of enhancing existing public suspicion (Wynne 2001). Furthermore, the propensity of some US scientists to neglect possible unintended consequences may puzzle those who hold the precautionary principle dear. The argument that no risks could be demonstrated with genetic engineering has turned out to be less convincing for a European public than for its North American counterpart. Finally, while funding for (bio)defense research is normal in the US, it is highly contentious in many European countries. Discussions over nanotechnology have shown that military or ‘dual’ use is a veritable minefield in Europe (Norwegian National Research Council 2005). The problem of basic science being ‘embedded’ in military research has since been critically addressed in the context of the NSF report on converging technologies (Nordmann 2004).

Taken together, there are openings for a broader public controversy over synthetic biology than anything we have seen so far. However, this does not mean that a controversy is actually pending. As outlined above, synthetic biology may instead go largely unnoticed as an extension of genetic engineering that does not entail a particular debate of its own, and its future perception might not differ much from that of biotechnology in general.

This does not leave the scientific community without responsibility. Many of its members have acknowledged that dealing with societal issues, anticipating potential problems and reacting to CSO activity is necessary.Footnote 19 Especially among younger researchers, the societal implications of science and technology are part of what they have to deal with, not unlike performing administrative work, engaging in business activities and relating to the media: doing science has developed into a multi-task endeavour (Jasanoff 2004). With the development of novel converging technologies, today's researchers, on average, might be more aware of possible problems than their elder peers were when the biotechnology controversy took off, irrespective of differing opinions on concrete issues.Footnote 20 This was shown in a recent e-conference set up by the Synbiosafe project, which revealed that many scientists share a similar view of the set of problems while putting forward rather different proposals on how to deal with them (Schmidt et al. 2008).Footnote 21

The role of social scientists

Over recent years, social scientists have experienced a boost in opportunities for investigating the societal consequences of science and technology, accompanying major scientific endeavours such as the Human Genome Project under the header of ELSI research. The social sciences, often said to be on the verge of marginalisation, could regain importance and funding. In the beginning, programs mostly conceptualised results from science as a black-boxed input and the impact on society as the subject of investigation. Apart from addressing societal impacts, the rationale was often seen in identifying possible obstacles to the practical implementation of scientific results. Consequently, applicants had to make it clear that the utility of their presumed results for technology development warranted the effort and the money. Over recent years, funding applications have often contained the magic phrase of ‘learning from past experiences’. This is not so different, after all, from the problem biomedical research is confronted with, where applications frequently have to emphasise, substantiated or not, utility in terms of possible new therapies in order to acquire funds.

In the case of past ELSI programs, this mission orientation had some side effects. When called upon to help deliver practical solutions that would pre-emptively mitigate social controversies, social scientists were confronted with the implicit claim of helping engineers ‘to make biotechnology happen’ (Jasanoff 1995). For some of them, this entailed being ‘embedded’ in technology development with a clear role in the fabric of innovation. At worst, they met naïve demands from some stakeholders to render technologies accepted that other stakeholders would not deem acceptable. In other words, they were expected to take sides with those whose interest it was to introduce a technology smoothly and to overcome obstacles that these actors would trace back to negative public perceptions.

‘Being embedded’ also meant applying participatory methods for PR purposes. Such methods had been developed to provide an opportunity to convey the opinions of informed lay people on technological issues to the political system (Joss 1995). In some instances, however, participatory events tended to get caught up in serving more sophisticated, two-way public relations purposes designed to replace useless advertising activities. Often the distinction was blurred, and even those in charge of such events might not have been fully clear about their purpose (Bogner and Menz 2005). The methodological set-up was similar; in the end, however, it was the aim of promoting the technology that determined the activity.

Attempts at instrumentalising social science met criticism, and some more recent reports on societal aspects of synthetic biology, such as the paper for the BBSRC, seemed to propose turning around the relation between attitudes and scientific developments. Accordingly, “scientific research must not get too far ahead of public attitudes”, and public consultation should help in “negotiating the boundaries of what is socially acceptable science” (Balmer and Martin 2008, p. 5). Scientific research appeared as an endeavour independent of society, producing a stream of bitter pills society might be expected to swallow up to the point of non-acceptance. This left science and society as detached as ever.

In more recent ELSI programs, it has been acknowledged that science and society are interdependent. ‘Learning’ no longer means avoiding conflicts; rather, the new understanding combines an acknowledgement of past mistakes with devising measures to counter negative attitudes. The emphasis has moved ‘upstream’: the results of scientific investigations and technology development are no longer taken as an invariant input; rather, it is their generation that is the focus of interest. Thus, the interaction of natural and social scientists as well as stakeholders in identifying topics that go beyond scientific problems has become a mainstream activity. Tackling issues at a very early stage in the evolution of a technology, in collaboration between technology developers, presumptive users, stakeholders and social scientists, draws on Constructive Technology Assessment (Rip et al. 1995) and related approaches. Such endeavours have to be built upon better insights into the mutual relation of science and the rest of society. In the US, for example, a renewed interest in investigating science-society interfaces focuses on the trajectories of research in their institutional contexts. Rather than trying to establish the ‘consequences’ of scientific research results in different sectors of society, the contingent inputs into various streams of research are being analysed under the header of ‘human practices’ (Rabinow and Bennett 2008).

The linear model of technology development has often proved to be at odds with reality. Being involved in the process of shaping a technology entails a different role for social scientists compared with past claims to make technology happen. No longer are they ‘embedded’ in the linear trajectory of implementing a technology taken as given; rather, they take an active role in defining it. Thus, they are not just providing helping hands; nor are they confined to the role of passive observers. In becoming active players, they have to take on their own responsibility for the emerging technology.