Introduction

Public participation in techno-scientific issues has recently gained mainstream support in Europe, in response to greater conflict around innovation and regulation of controversial technologies. STS scholars have played key roles in stimulating or organising such participation. The exercises have attracted diverse views regarding their appropriate design, roles and consequences, as well as various criticisms–e.g. that participants were not representative of the public, that the government made no prior commitment to follow the views expressed there, or that technical aspects were separated from other issues.

Those criticisms may be descriptively accurate but imply particular benchmarks, even simplistic models of direct democracy. Together they imply that participants truly representing the public could guide government decisions–as if the government had no agenda of its own, nor a wider accountability to representative democracy. Amidst proposals for participation, there are diverse models of what would count as a democratic assessment of technology (Joss 1998: 4). According to a survey of participatory technology assessment (TA) exercises, these rarely have a demonstrable impact on political decision-making (Bütschi and Nentwich 2002). Perhaps such exercises matter in more subtle ways, which therefore call for different analytical questions about democratic accountability.

For some analysts of participatory TA, at issue is ‘how to make those in charge accountable’ and thus ‘how to organise effective accountability’ (Hagendijk and Irwin 2006: 56–57). Some analysts have echoed concerns that participatory methods may not be ‘fit for purpose’, or that ‘they subvert broader democratic political processes’ (Burgess and Chilvers 2006). Participatory TA has been seen as supplementing older political forms of accountability with broader social forms. But this aim leaves open some difficult questions: ‘who is holding whom accountable, and by which means?’ (Abels 2007: 111).

As a case study for those issues, this paper focuses on agricultural biotechnology (henceforth called ‘agbiotech’), a sector which has faced extraordinary public protest and controversy in Europe. In response, many state bodies have sponsored formal participatory exercises, beyond simply access to regulatory procedures. So agbiotech provides a rich, multi-country case study.

This paper discusses the following questions:

  • How and why did state bodies sponsor participatory TA of agbiotech?

  • What aims arose in designing, managing and using those exercises?

  • As a remedy for what problems?

  • How did the process govern societal conflicts?

  • How did participants question agbiotech and government policy?

  • How did the overall process relate to the democratic accountability of representative democracy?

Overall, this paper analyses the tensions involved and their implications for state accountability. Approximately the first half surveys analytical perspectives, especially STS literature which links technological choices, ‘risk’ controversy and deliberative democracy. The survey emphasises theoretical and normative disagreements about the appropriate roles for participatory exercises.

Approximately the second half analyses national cases of public participation in assessing agbiotech, with a more detailed account of the 2003 UK case. The five national cases have been chosen on two main grounds: their high-profile political importance and the availability of documentary sources about them. In particular, critical analyses have highlighted tensions and power relationships that might otherwise be ignored by a putatively ‘neutral’ account. As much as possible here, critical analyses are juxtaposed with the organisers’ own accounts, statements by participants and other relevant documents. Together these provide a basis to analyse conflicting agendas–of organisers, participants and perhaps analysts too.

My analysis remains largely dependent upon those sources for empirical information. I attended only parts of two relevant events: the public conclusion of the 1994 UK conference and one public meeting of the 2003 UK science review (Levidow 2003). In addition, I read original reports of the French event (OPECST 1998b), though not the German or Danish ones. Overall, the available information sources should be adequate for answering the analytical questions posed above. By contrast, more information would be needed for a comparative typology of cases (as done by Hansen 2006).

Generating Legitimacy Problems

New technologies often become contentious because they embody or promote normative assumptions about the societal problems to be solved, and thus limit what would count as a solution. Such assumptions drove an EU legitimacy crisis over agbiotech in particular and generated a search for ‘governance’ solutions.

Biotechnological Determinism

Since the 1980s technological innovation has been driven increasingly by pressures for global economic competition. Public-sector institutes have been made more dependent upon profit motives and private finance, thus blurring the distinction between private and public interests. In European agri-food research, R&D priorities have shifted towards knowledge for profitable commodities and royalties on patents. This shift has undermined the capacity to serve the public good through innovation and independent expert advice (Levidow et al. 2002).

Such warnings came from leading members of EU-level scientific committees (James et al. 1999). When appointing scientific advisory committees, governments have encountered greater difficulty in recruiting members independent of private interests, in at least two senses. First, even if scientists are not employed by industry, their careers and institutes are often dependent upon industry contracts. Second, their individual standing depends upon gaining such contracts through competitive tenders, conducting prestigious research, publishing high-quality papers, etc. Consequently, few are willing to serve on advisory committees. Given these pressures, ‘it will prove increasingly difficult to recruit top-flight scientists’, and members tend ‘to consider themselves operating in a consultancy mode’, therefore requiring substantial remuneration (ibid: 66).

Moreover, EU innovation policies have invoked market-technological imperatives for specific technological trajectories. Since the early 1990s, the European Commission has promoted infotech and agbiotech as essential means to enhance efficiency and thus create wealth, even to ensure economic survival. In its 1993 White Paper on ‘Growth, Competitiveness, Employment’, the Commission characterised the entire agri-food industry as ‘dependent’ upon genetic modification techniques (CEC 1993).

Technological determinism, presented as an objective imperative, was linked with inexorable globalisation (cf. Barben 1998). This framework avoided accountability for its own normative choices and commitments. Under ‘risk-based regulation’, product safety according to ‘sound science’ would be the only criterion for approval decisions. More generally, ‘technological progress’ was equated with the common societal good (Levidow and Marris 2001). With such language, a neoliberal policy foreclosed alternative problem-definitions.

That policy framework was undermined by various food scandals, especially BSE (bovine spongiform encephalopathy), better known as the mad cow crisis. Critics questioned the optimistic assumptions which underlie agri-industrial efficiency, safety claims and ‘risk-based regulation’. Moreover, activists targeted agbiotech as an ominous symbol of ‘globalisation’, threats to democratic sovereignty and hazards of industrialised agriculture (Heller 2002; Levidow 2000; Murphy and Levidow 2006). Public controversy raised the stakes for ‘science’ and generated suspicion towards expert claims.

Governing Which ‘Common Problems’?

As the EU crisis illustrates, neoliberal policies leave governments vulnerable to legitimacy problems. According to a neo-Gramscian analysis of ideological hegemony, ‘economic globalisation and political change have created a crisis of the old hegemonic structures and forms of political consent, which are now coming apart…’ (Gill 1993b: 32–33). Global governance ‘can be seen as a product of two phenomena: the pursuit of neoliberal forms of globalisation, and the resistance to such centralisation of power’ (Paterson et al. 2003: 149). Governance can mean citizen involvement which enhances the accountability of representative democracy, but the term can have contrary meanings, to be elaborated next.

In mainstream policy language, governance is often understood as co-operative means to deal with common problems. It denotes ‘a continuing process through which conflicting or diverse interests may be accommodated and co-operative action may be taken’ (CGG 1995: 2). According to political scientists, governance involves social institutions ‘capable of resolving conflicts, facilitating cooperation, or, more generally, alleviating collective-action problems in a world of interdependent actors’ (Young 1994: 15).

But which ‘common problems’? Policy issues involve contending ways to define societal problems. Tensions arise between resolving a problem, on the one hand, and containing conflicts around the problem-definition, on the other. ‘Process management’ addresses this tension through wider participation, sometimes called governance (Young 1997).

Governance involves the premise ‘that a problem is “common”, in the sense that stakeholder advantage cannot be obtained–nor, often, defined–independently from collective reasoning’. Yet such advantages are often foreseen, as a basis for some stakeholders to pursue antagonistic agendas (Pellizoni 2003). Rather than accept a dominant agenda, they promote uncommon problems, i.e. contending accounts of the common good.

According to critical perspectives, governance strategies help to contain or marginalise antagonistic agendas, while undermining representative democracy. Management and ‘governance’ presuppose pacified worlds in which common aims could be defined for the good of all (Pestre 2004: 364).

As Moreau Défarges (2001) and others have suggested, the vocabulary of governance conveys the idea that the world of politics, as it was invented and has been practiced for more than two centuries, is de facto obsolete. Not only because it is based on an overly conflictual understanding of the social, but also because it relies too much on the State and the formal procedures of representative democracy…. (Pestre 2008).

In that vein, governance aims at establishing common values for the management of a collective and ultimately reconciled future: ‘The only remaining questions are procedural and managerial in nature’ (ibid.).

According to another critic, governance strategies provide a ‘discursive de-politicisation’, effectively removing societal choices from the political agenda.

The democratic public is dislodged from its position as (in principle) the ultimate judge and arbiter in the realm of “governing”; with governance, it is at best one among many stakeholders–it [the public] merits no privileged position (Goven 2006: 104).

In this strategic sense of governance, fundamental conflict can be displaced onto supposedly collective problems and solutions. Choices about societal futures can elude the formal accountability of representative democracy, even through participatory exercises.

This tension has been theorised as ‘two contrasting advocacy coalitions’. Neoliberal governance invokes ‘sound science’ for approving safe products, as a basis for consumer choices; it puts the burden of dialogue on the private sector. By contrast, participative governance accommodates cultural diversity, social fairness, and more equitable power relations; it downplays the formal electoral process, in favour of civil society (Walls et al. 2005).

Such a contrast is misleading. Yes, public participation can downplay electoral politics and thus displace representative democracy–but perhaps in ways that make decisions less democratic. This displacement is integral to neoliberal governance. Neoliberal and participatory elements exist in tension within the same ‘governance’ process, not simply as antagonistic coalitions; such coalitions provide at most the wider context for governance strategies.

Governing Technology as a ‘Risk’ Issue

In the European Union, ‘governance’ has become a mainstream policy term since the late 1990s. This concept intersected with general debate about a ‘legitimation gap’ or ‘democratic deficit’. According to many critics, EU policymaking eclipsed or concealed the role of national governments, while favouring influence by industry. In response, the Commission’s White Paper on European Governance set out principles of ‘good governance’, e.g. more openness and wider participation throughout all stages of the policy process (CEC 2001).

This debate responded to a legitimacy crisis of risk regulation, featuring public scandals over food and medical safety. A widespread criticism was that politicians often used expert advice to avoid responsibility for decisions. The EC White Paper acknowledged this problem:

It is often unclear who is actually deciding–experts or those with political authority. At the same time, a better-informed public increasingly questions the content and independence of the expert advice that is given (CEC 2001: 19).

Consequently, special prominence was given to the problem of ‘Science and Governance’. As these discussions recognised, official expert advice was often challenged by ‘counter-experts’ and so could not straightforwardly legitimise policy decisions:

While being increasingly relied upon, however, expertise is also increasingly contested…. ‘Traditional’ science is confronted with the ethical, environmental, health, economic and social implications of its technological applications. Scientific expertise must therefore interact and at times conflict with other types of expertise… (Liberatore 2001: 6).

As a way forward, there were proposals to democratise expertise–as in the title of the above report.

In a similar way, legitimacy problems have been attributed to governmental over-dependence upon expertise. ‘At stake here is the Enlightenment project, where objective science and representative democracy are combined to provide a new legitimation of the State’, argue De Marchi and Ravetz (1999: 754). That project was undermined by several risk crises, in their view. Even speculative hazards could undermine public trust in risk regulation. ‘Here it is the uncertainties which dominate, and which require the reference to explicit values’ (ibid: 755). From that problem-diagnosis, they advocate wider public participation as an ‘extended peer review’ of official expert judgements. However, non-governmental organization (NGO) involvement requires a somewhat ‘self-contradictory balance between their functions as critics and as stakeholders’ (ibid: 756).

Specific forms of public involvement relate to diagnoses of policy problems. A legitimacy crisis has been widely attributed to deficiencies in public attitudes. In the mid-1990s the problem was initially diagnosed as public irrationality or ignorance. A later diagnosis was ‘public distrust’, attributed to institutional deficits–e.g. of risk communication about technologies, or of transparency about regulatory criteria and procedures. With this shift in deficit models, later diagnoses and remedies supplemented earlier ones, rather than simply replacing them (Levidow and Marris 2001).

The need to gain or restore trust has served to justify various remedies–to educate the public, to make advisory expertise more independent, and even to make institutions more trustworthy. For the latter task, remedies have included greater transparency, broader expertise, consultation and even participation, sometimes called ‘governance’.

Such an incorporation process was foreseen in Governing Molecules, an early analysis of European conflict over biotech. As issue-framings became polarised in national debates, policymakers sought to maintain or gain hegemony ‘by re-absorbing discourses of polarity into a system of “legitimate differences” and by defining the locations where differences can be articulated’. In this way, they might absorb critics’ demands through broader expertise. The boundary between experts and non-experts became more permeable and negotiable. Controversial socio-economic issues were transformed into problems of ecological risk assessment; the former could not be legitimately debated, as industry was still embraced as a location of progress (Gottweis 1998: 319–21).

Within efforts at wider participation, different deficit models remain in competition. In parallel with ‘governance’ remedies, expert-regulatory institutions still regard public attitudes as the problem and an obstacle to technological progress. New practices encompass old assumptions. Participation is often still designed to address ‘risk’ perceptions; it aims to achieve trust and social consensus through engagement, while demonstrating objectivity through openness and transparency (Irwin 2006).

Such processes tend to reify both ‘risk’ and ‘citizens’. In participatory exercises, ‘risk’ is reified twice: by defining the universal public meaning of technological controversy as ‘risk’ issues; and by selecting particular ‘risk’ definitions as natural, objective and universal. Citizens are modelled according to specific ‘risk’ issues and definitions, while excluding others. Some participatory processes frame issues and model citizens in such ways, which are not democratically accountable (Wynne 2005). Governments tend to communicate about technology through concepts of risk and safety. Various participatory procedures can be compared according to how they address those concepts (Hansen 2006: 574). Yet the procedures may channel societal conflict into ‘risk’ language.

A recent report critically analysed efforts at public participation in risk issues. Such efforts ‘reflect a consistent and persistent under-emphasis of the ways in which risk assessment inevitably rests on normative commitments’. In so doing, they avoid questions about how risk assessment constructs society, especially in a policy context promoting ‘competitiveness of the European knowledge-economy’, and how different norms could guide a future Europe. As an alternative to risk governance, the authors propose a move to a more ‘upstream’ innovation-governance (EGSG 2007: 30, 39).

Some limitations of risk governance have been recognised by Trustnet, a European network of researcher-practitioners, including individuals from industry and government. According to the Trustnet diagnosis of past problems, risk controversies had been mis-handled in top-down ways. By simply seeking or expecting public trust, government had intensified the sources of distrust. Official emphasis on ‘science’ had led critics to promote their own experts for an advocacy science: ‘Stakeholder relations degenerate into public conflicts and endless scientific controversies’ (Trustnet Secretariat 2000).

Those prevalent approaches had reinforced adversarial processes, often focusing disagreements on risk acceptability. When there is no broadly accepted social justification of a hazardous technology, e.g. for its general societal benefits, public concerns focus upon ‘unacceptable risk’. As a remedy, therefore, ‘New patterns of risk governance are needed to provide legitimacy and promote trust’, argued the project (ibid; see also Table 1).

Table 1 Problem-diagnoses and governance roles

Funded by private donors and the European Commission, Trustnet action-research projects have sought to create more deliberative, participatory approaches to public controversies. Towards such alternatives, risk governance should involve stakeholders in defining issues. The Trustnet projects have sought ‘to re-interpret collectively the specific stakes and concerns expressed by each category of participant in order to build common goals tolerable [acceptable] for the participants as a whole’ (Dubreuil et al. 2002: 91–92).

Stakeholder involvement makes explicit the conflicting goals, scientific uncertainties and expert judgements involved in decisions. Unlike a stereotypical Top-Down approach, a ‘Mutual Trust’ approach can help participants to reach a common understanding of such issues, thus building trust. Stakeholders are involved in ‘authenticating or rebuilding the common values which nurture social trust and social cohesion’. This deliberation could answer the question, ‘Why should society take a controlled risk?’, according to Trustnet (ibid.).

In all those ways, ‘risk governance’ diagnoses deficiencies in institutional procedures, not simply in public attitudes. According to this perspective, narrow issue-framings generate societal conflicts and governmental dilemmas (see again Table 1). Towards a remedy, a governance process needs to address the fundamental sources of conflict. Later the Table will serve as a heuristic tool to analyse participatory exercises–tensions among contending ideal-normative aims, as well as tensions between them and actual practices. Let us next survey perspectives on those tensions and their sources in policy conflicts.

Democratising Technology–or Managing Conflict?

Public participation in technological issues has involved diverse agendas. According to Lars Klüver, a long-time promoter at the Danish Board of Technology, public participation has recently become mainstream, along with changes in its policy role. Originally it was promoted as a vehicle for democratisation and citizen empowerment, so that people could challenge policy assumptions and influence decisions. Now public participation goes hand-in-hand with liberalism: politics is seen as a market of opinions, so citizens should be invited into the open market (Klüver 2006, cf. 1995; cf. Popper 1962).

Participation now becomes yet another governance tool among others, e.g. for adjusting, supplementing or enhancing the policy process. Aware that they often lack public confidence, policymakers seek methods of upstream conflict-management. These professional reasons have recently driven interest by mainstream institutions in public participation and will continue to do so (Klüver 2006).

Upstream conflict-management restricts the role of participants. In the UK, for example, there have been various proposals for ‘upstream public engagement’ between the public and scientists at an early stage of technological development (e.g. HM Treasury 2004: 105). Such engagement has been advocated as a means to deliberate possible innovation choices and to make them more accountable (e.g. Wilsdon and Willis 2004). By contrast to those ambitious aims:

[public engagement] is sometimes portrayed as a way of addressing the impacts of technology–be they health, social, environmental or ethical–rather than helping to shape the trajectory of technological development. The hope is that engagement can be used to head off controversy… (Wilsdon et al. 2005: 33).

Indeed, conflict-avoidance or conflict-management may be built into the design of public engagement.

Such aims conflict with agendas to democratise technology, e.g. by enhancing the public accountability of innovation trajectories. To do so, participatory design should acknowledge that science and innovation are social, cultural and institutional activities.

As such, public engagement offers a way to be more accountable for the particular values and interests, which underpin both the governance of science and the general use of science in governance... Public engagement holds greatest value when it occurs ‘upstream’–at the earliest stages in the process of research or science-informed policy making… In practice, the relationship between representative democracy and participatory methods becomes most clear and complementary, when engagement is approached as a means to ‘open up’ the range of possible decisions, rather than as a way to close this down. Choice among the options thereby identified then becomes a clearer matter of democratic accountability (Stirling 2006: 5; cf. Stirling 2005).

Achievement of such accountability depends upon the aims, design and management of the process.

In some accounts, citizen deliberation offers a consensus-seeking process. This provides an advantage over pluralist interest-group bargaining, which abandons any sense of the common good or (by default) regards it simply as the outcome of a bargaining process. By contrast, broader deliberation of stances requires publicly defensible reasons, which are subjected to scrutiny and thus drawn more from common interests than from special interests or manipulative language. In such an interactive process, lay participants may go beyond their prior assumptions or preferences. This practice serves ‘the goal of reaching a consensual resolution about how policymakers should manage the issue’. Moreover, ‘Deliberative democratic theory deconstructs the assumption of “given” preferences by looking for practices through which preferences are formed and how they might be changed in a consensual, democratic direction’ (Hamlett 2003: 120–121). This idealised, prescriptive account associates consensual policy advice with democracy. Such an association evades important questions: how consensus-seeking may restrict the range of ‘common’ problems, and how the process deals with divergent accounts or suppresses them.

As a more modest rationale for public engagement, it can be helpful for exploring sources of conflict: ‘the main purpose of a public debate is not to eliminate the conflict, but possibly to clarify what [the] conflict is really about’ (de Marchi 2003). As a key source of conflict, expertise implicitly pre-defines societal problems in narrow ways. So a participatory entry point is to open up those normative definitions, which do not depend upon technical knowledge (Fischer 2000: 185). At the same time, normative and empirical aspects readily become mixed, so the mixture warrants scrutiny (ibid: 19).

In technological controversies, moreover, ‘technical’ information is often disputed along several lines. There are contested boundaries between technical/social issues, expert/lay roles, etc. In participatory exercises, ‘The organisers and facilitators take on the role of “translating” the often inaccessible technical data underlying a particular issue for the nonexpert participants, in effect constructing the issue and problem [that] the deliberators are to address’ (Hamlett 2003: 127). Thus the organisers’ role can open up or close down the deliberation of expert judgements.

In technological controversies, counter-expertise has helped to demystify scientific expertise, but citizens still remain an audience. Lay people are left wondering which experts or counter-experts should be believed. Expert procedures involve normative judgements, so these can be opened up to public scrutiny, as a basis for a more democratic approach to technologies (Fischer 1999: 297–98). ‘Lay expertise’ denotes the many cognitive capacities of lay people to evaluate expert claims and assumptions (e.g. Wakeford 1998; Kerr et al. 1998).

In technoscientific debates, distinctions are drawn between technical and non-technical aspects of an issue; such distinctions often serve as weapons in power struggles (ibid: 129). Research could investigate these questions: ‘On what occasions and for what purposes is the technical distinguished from the non-technical? To what extent does this distinction perform different communities?’, e.g. as expert versus merely lay actors (Grint and Woolgar 1997: 67). Conversely, how do deliberative procedures perform such distinctions? And how do the performances impose or blur lay/expert boundaries?

These performative aspects depend upon the specific setting and staging of a participatory exercise. The setting may limit what can be said with influence on the process. Participants are thereby constructed in specific roles–e.g. as protestors or as collaborators–thus performing different roles in relation to expertise. Performative interactions produce understandings of the policy problem at hand (Hajer 2005).

Considered together, the above perspectives feature theoretical and normative disagreements about the appropriate aims for participatory exercises. More subtly, practical tensions may arise in designing and participating in such exercises. Indeed, participants may perform various meanings of technology, the public, democracy and their relationship. In participatory TA exercises, there may be considerable differences between normative objectives and actual roles–which depend on their design, policy contexts and wider societal resonances (Joss 2005a: 209). Although participation could influence policy frameworks, a particular exercise may instead reflect and reinforce a policy (Sperling 2007).

As another tension or limitation, any TA exercise operates within a specific national culture. In each national context,

Democratic engagement with biotechnology was shaped and constrained by national approaches to representation, participation, and deliberation that selectively delimited who spoke for people and issues, how those issues were framed, and how far they were actively reflected upon in official processes of policymaking (Jasanoff 2005a: 287).

These national approaches bear upon the potential role of participatory TA. According to John Dryzek, consensus conferences generate ‘mini-publics’, whose democratic potential differs in each political system. In actively inclusive Denmark, mini-publics are deployed in integrative fashion, as means to incorporate broader views into policy; in exclusive France, in managerial fashion to reinforce a top-down TA; in the passively inclusive United States, as an advocacy tool, sponsored by public interest foundations. ‘If mini-publics are to contribute to deliberative democratization they need supportive structures and processes in government and the broader public sphere’, he argues (Dryzek 2006).

All the above perspectives complicate arguments for broader citizen participation, while providing a critical basis to analyse particular cases. Rather than evaluate participatory TA according to an ideal model, each case should be seen within a strategy for how to represent agbiotech, the public and the relevant expertise. The rest of this paper surveys European participatory exercises which have assessed agbiotech. They will be analysed as contradictory governance practices–constructing some societal problems as common ones, constructing the public in relation to expertise, and thus generating tensions. Through a discursive depoliticisation, contentious issues are potentially reduced or channelled into manageable ones for official expertise and regulatory procedures.

Biotechnologising Democracy: National Examples

Since the 1980s state bodies have sponsored deliberative exercises for evaluating expert claims about agbiotech in several European countries. These exercises differed greatly in several respects–their policy contexts, links to policymaking, basis for participant selection and prevalent problem-definitions. In the Danish and French cases, for example, a Parliamentary body hosted the participatory initiative in a crisis period; Parliament sought a more authoritative role in agbiotech policy at a time when government decisions were expected soon. The German and UK cases were relatively more distant from government decisions or policy debates. In most cases the participants were chosen as ordinary citizens, by adapting the Danish consensus conference model, while in the German case they were quasi-expert representatives of stakeholder groups. In all four cases the thematic focus was agbiotech–GM crops in the UK and French cases, and more specifically herbicide-tolerant crops in the German case. In all cases, some participants wanted the procedure to consider alternative options, leading to overt conflict in the German case.

Alongside those differences in context and structure, the exercises had some common features. Namely, they marginalised alternative problem-definitions that would suggest different evaluation criteria and alternative innovation trajectories. Through a search for participatory consensus, such conflicts were channelled into regulatory issues. In many ways, wider issues were reduced to characteristics of GM products, their expert evaluation and regulation. In some cases, e.g. Denmark and France, public participation helped to enhance the state’s accountability for regulatory decisions. Some advocates of participation have sought to democratise technological choices and designs, yet instead the exercises biotechnologised democracy; they internalised and reinforced assumptions about agbiotech as societal progress, while marginalising alternative options (Levidow 1998).

In such ways, the process complemented dominant policies. In a neoliberal risk–benefit framework, experts determine product safety so that consumers have a free choice to buy safe genetic fixes for agricultural problems; judgements on product benefits or relative advantages are left to market competition. Participatory exercises also complemented a Europeanisation of that policy framework. The EC Deliberate Release Directive required a risk assessment and authorisation for the commercial use of any GM product (EEC 1990, 2001). Its ‘risk’ criteria could be interpreted flexibly and eventually became more rigorous, but the EU-wide statutory procedure cannot officially consider relative benefits or alternative solutions as grounds to block a GM product. Participatory TA exercises have elaborated and internalised such ‘risk’ frameworks for agbiotech evaluation. Here follow examples from four countries in chronological order.

Denmark 1987: Sustainable Agriculture?

The ‘consensus conference’ has become well known in Denmark and beyond. Denmark’s Parliamentary technology-assessment agency, the Danish Board of Technology, was established to broaden debate and expertise on policy options. Accordingly, the Board has often timed consensus conferences to coincide with Parliamentary debates on the same issue. This linkage has helped to stimulate wider public involvement, to broaden the issues, and thus to influence the overall policy debate (Klüver 1995; Joss 1998).

The consensus conference centres on a lay (non-expert) panel of ‘interested citizens’; they are selected to represent diverse views, though not necessarily to represent the overall population. The panel questions and evaluates experts, including scientists, opinion-formers and anyone whose knowledge goes beyond general knowledge (Grundahl 1995: 24). Through such scrutiny, the panel seeks to reach a consensus on practical recommendations. Minority views are reported only in rare cases where important differences remain unresolved (ibid.; Klüver 1995: 47).

For all those reasons, the Danish consensus conference has been advocated as a ‘counter-technocracy’–a means to challenge expert claims through a deliberative process. The lay panel has no vested interest distinct from that of the general public, and its report helps to promote TA as a broad societal process. It extends a Danish tradition of folkeoplysning–people’s enlightenment through an adult education network, which builds a reflective, informed citizenry (Joss 1998: 20; DFS 2006).

As its guiding principle, ‘a well-functioning democracy requires a well-educated and engaged population’. Successful participation is understood in those terms: as a participant commented, for example, ‘We initiated a really good assessment process among the public’ (cited in Klüver 1995: 41, 43). In the Danish consensus conference, then, ‘interested citizens’ personify a political culture in which technological decisions are held accountable to public debate, mediated by Parliament.

Denmark’s debate on agricultural biotechnology was initiated in the mid-1980s by environmental NGOs. Several ‘debate booklets’ were issued by NOAH, the Danish affiliate of Friends of the Earth, proposing new legislation to regulate GMO releases. In response to public concerns, a Parliamentary ‘green’ majority imposed a statutory ban in the 1986 Gene Technology Act; releases would not be permitted unless there was sufficient knowledge about the ecological consequences (Toft 1996). With this wording, the government could be held accountable to demonstrate such knowledge for risk assessment; this burden of evidence meant a de facto ban for several years.

Parliament also mandated funds for an information campaign on biotechnology. Some funds were specially earmarked for NGOs, especially NOAH and some trade unions, in order to stimulate further debate on advantages and disadvantages of biotechnology. In these ways, environmental NGOs gained extra resources and political opportunities to frame the issues for further public debate. NOAH organised ten public conferences on the wider environmental consequences, on sustainable agriculture including organic agriculture, on food labelling, on animal welfare and ethics, on the Third World, on seed diversity (including patents), and on biological warfare. These debates were reported through a series of publications and statements from NOAH.

In that context the Danish Board of Technology held its first consensus conference in 1987 on ‘Gene Technology in Industry and Agriculture’, timed to coincide with Parliamentary debate on the issue (Hansen et al. 1992; Klüver 1995: 44). In its report the lay panel took up risk issues as well as ethical ones (Teknologinævnet 1987). Accepting a key recommendation, Parliament voted to exclude animals from the 1987–90 national R&D programme for gene technology. The conference eventually had more profound effects on the Danish regulatory regime through wider public debate.

A further information campaign was coordinated by the Board of Technology and the Danish Adult Education Association, thus extending the folkeoplysning tradition. During 1987–1990 the Association supported more than 500 local meetings all over the country in order to stimulate debate on human and non-human uses of biotechnology, including concerns about risk and ethics. Environmental NGOs were often invited to speak, as the most visible critical actors on the scene.

The government also funded a subsequent programme, organised by trade unions, to stimulate further debate on advantages and disadvantages of agbiotech. Their educational materials posed questions about sustainable agriculture: For example, would genetically modified crops alleviate or aggravate the existing problems of crop monocultures? (Elert et al. 1991: 12). Through that wider debate, the consensus conference indirectly influenced Parliament and thus regulatory policy.

In the EU-wide regulatory procedure, dominant member states implicitly took for granted eco-efficiency benefits of herbicide-tolerant crops, while disregarding the herbicide implications or assuming them to be benign (Levidow et al. 1996, 2000). By contrast to those EU-level assumptions, Danish regulators were held accountable for assessing the broad implications of GM crops for agricultural strategy, herbicide usage and the environment. Such judgements were scrutinised by the Parliament’s Environment Committee, often by drawing upon specific questions from NGOs. Under such domestic pressures, Danish representatives in turn proposed that risk assessments evaluate those implications at the EU level (Toft 1996, 2000).

Thus citizen participation enhanced government accountability for regulatory criteria, going beyond optimistic assumptions about environmental benefits. GM crops were subjected to criteria of sustainable agriculture, which in turn were opened up to the lay expertise of agbiotech critics. Environmental NGOs found greater scope to influence regulatory procedures and expertise.

Agri-innovation choices became more contentious in the late 1990s, however; NGOs demanded alternatives to agbiotech and to intensive agricultural methods. In a 1999 consensus conference, the lay panel asserted the need for extra measures–not only for product safety, but also to prevent GM products ‘becoming controlled by monopolistic companies’, as well as measures to evaluate ethical aspects (Einsiedel et al. 2001). As the conference organisers emphasised, those proposals expressed citizens’ viewpoints, thus providing a basis for dialogue with decision-makers (Teknologinævnet 1999). The panel’s proposals challenged some assumptions and limitations of the EU legislative framework. Yet such demands for greater accountability were being channelled into more stringent measures to regulate biophysical risks. This pervasive tension between societal demands and their incorporation recurs in later TA exercises.

Germany 1991–93: Participation Trap

In Germany various NGOs had opposed all biotechnology applications since the mid-1980s, through means ranging from protests to court cases. To address such conflicts, the government sponsored a TA exercise on GM herbicide-resistant (HR) crops in the early 1990s. Funding came from the Ministry of Industry and Research, which was strongly promoting biotechnology. The exercise was initiated and coordinated by the Berlin Wissenschaftszentrum (WZB, Science Centre) as an experiment in environmental conflict management. The 50-odd participants had quasi-expert roles; they included overt proponents and opponents of HR crops, as well as representatives of regulatory authorities, agricultural associations, consumer organisations, etc. From the start, conflict erupted over how to define the relevant scientific issues and the expertise needed to evaluate them.

According to the organisers, broad participation was needed to deliberate the arguments arising in the polarised public debate on agbiotech. The TA was designed to evaluate those arguments for and against GM herbicide-resistance technology, especially its possible consequences–but not alternative options for weed control in agriculture. Thus the procedure was ‘a technology-induced TA, not a problem-induced TA’ (van den Daele 1995: 74).

Environmental NGOs counterposed the latter approach. They wanted the TA to compare biotechnology products with other potential weed-control methods, as alternative solutions to agricultural problems. However, the NGOs’ proposal was rejected by the organisers (Gill 1993a). Consequently, the narrow remit set difficult terms for participation by the broadly representative individuals from NGOs–indeed, terms for their expert status.

As the organisers acknowledged, ‘The TA implicitly accepted the matter-of-course development of technology as the starting point’, as well as possible risks as the main grounds for state restrictions: ‘If critics fail to provide evidence of relevant risks, the technology cannot be banned.’ So critics held the burden of evidence for any risks. Advocates held the burden to demonstrate benefits, though failure to do so would have no bearing upon regulatory decisions (van den Daele 1995: 75). This framework marginalised alternative agronomic solutions, while reinforcing the dominant system: ‘intensive farming as the reference system’. Within that framework, participants themselves defined their controversies as debates about empirical evidence, e.g. regarding the possibility of environmental damage–not about values and goals (ibid: 76, 77).

The organisers aimed to include and deliberate all viewpoints on the risk–benefit issues. By subjecting expert views to scrutiny, the TA could reach conclusions about empirical claims, rather than political or ethical ones. ‘This procedure placed participants under massive pressure either to admit consensus or justify dissent’, especially through detailed empirical evidence (ibid: 80).

From NGOs’ standpoint, the technology-induced TA framework effectively favoured experts in specialised technical areas, e.g. gene flow and herbicide effects. In practice, the TA exercise set a lower burden of evidence for demonstrating benefits than for demonstrating risks, in a period before much empirical research had been done on risk scenarios. Consequently, the discussion emphasised environmental benefits, especially the prospects for farmers to use less harmful herbicides and/or lower quantities of them (Gill 1993a).

On the basis of the expert reports, the TA symbolically normalised any risks. According to agbiotech proponents, echoing the government’s advisory body, any risks from GM herbicide-tolerant crops were similar to those from conventional crop plants and herbicide usage. ‘In many areas it was argued that there was no need for political action because the identifiable problems could be dealt with in the established registration procedures…. if one agreed to the “normalisation” of the risks’ (van den Daele 1995: 82). In this way, the exercise undermined NGO claims about novel or unknown risks; once normalised, any risks would be manageable through regulatory procedures, even contemporary ones.

The technology-induced TA framework posed a dilemma for participation by agbiotech critics. Once inside such an exercise, ‘They have to criticize a technology which promises to satisfy some needs which may even be produced by the technology itself…’ (Gill 1993a: 74). That is, putative benefits satisfy ‘needs’ which are predefined by biotechnological solutions for intensive monoculture. Thus a technology-induced TA tends to accept and reproduce the social vision built into the technology.

Environmental NGOs and their associated research institutes faced a difficult choice: either play a quasi-expert role within that framework and thus help legitimise it, or else abandon that role and be treated as merely lay voices. After much conflict, they withdrew before the TA exercise could report its conclusions. They gave several reasons for withdrawal, e.g. that their voluntary participation was occupying too much time, especially the task of commenting on long expert reports (van den Daele 1995: 81). According to an NGO expert, ‘I had not imagined that you could destroy participation by throwing paper on top of people’ (cited in Charles 2001: 107). By withdrawing from the TA, they could devote greater resources to public protest and preserve their credibility with NGO members and activists (Gill 1993a: 81–82).

After this withdrawal decision, they were criticised by the WZB coordinator:

One cannot present one’s position in public as scientifically substantiated and then cast fundamental doubt on science as neutral… Participation in the procedure implies the readiness to submit oneself on the empirical issues to the judgement of science (van den Daele 1995: 84; also 1994).

As the WZB coordinator told the story many years later, he had been sceptical of claims that herbicide-tolerant crops had special risks or special benefits, so he saw NGO arguments about risks as a proxy for political ones:

… the idea of special risks is not a good argument. We should turn to the issues of democracy and who’s going to decide how society develops… Apparently it would have been difficult for them [NGOs] to declare explicitly that the conflict was not about risks, but about social goals and political reforms… (van den Daele, cited in Charles 2001: 107).

However, that distinction was not so clearly drawn by the organisers beforehand; it became more explicit in later retelling the story. According to a social scientist who attended the TA exercise, some NGO participants saw it as analogous to a parliament which could evaluate agbiotech in terms of societal goals. However, van den Daele retrospectively portrayed it as a science court, whose remit the NGOs did not understand or accept; this portrayal offers a post hoc legitimation for the failure to integrate them (personal communication, Gill 2006).

Moreover, the distinction between a science court and parliament is not so straightforward; neither is the distinction between risk assessment and socio-political goals. At issue was the range of questions to be answered by science, their normative assumptions, and the alternative technological options to be considered as comparators for agri-environmental assessments. Some questions from participants were pre-empted or marginalised by the TA exercise, especially by constructing particular boundaries between expert and lay voices.

Societal futures were reduced to scientific issues, readily assessable by experts in ‘the state of the art’. Civil society representatives found themselves in a ‘participation trap’; they could either participate within the government’s risk–benefit framework for GM crops per se, or else be marginalised. Overall the exercise reinforced the government’s policy framework and its public unaccountability. In a similar way, societal conflict over agri-innovation issues was channelled into risk assessment through regulatory procedures.

UK 1994: Risk–Benefit Analysis

By contrast to Denmark and Germany, the UK had only a low-profile public debate on agbiotech until the late 1990s. Under the Environment Ministry, UK regulatory policy was based on a precautionary approach for protecting ‘the natural environment’. This meant ignoring or accepting so-called ‘agronomic effects’ and herbicide implications in the agricultural environment–issues which drew criticism from NGOs (Levidow and Carr 1996). Such narrow precaution remained in tension with a neoliberal risk–benefit framework: risk assessment would objectively determine product safety, as a basis for consumers to judge the benefits of approved products.

In that national policy context, before any significant public debate on agbiotech, the UK’s National Consensus Conference on Plant Biotechnology was held in 1994. Proposed by staff at London’s Science Museum, it was funded by the Biotechnology and Biological Sciences Research Council (BBSRC). Initially reluctant to sponsor the event, the BBSRC was persuaded by the focus on GM crops as ‘the least contentious’ area of biotechnology, especially as compared with animal biotech. Yet civil servants criticised that focus because agbiotech was not being considered in policy debate at that time (Joss 2005a: 211).

The exercise was coordinated by the Science Museum, whose staff implicitly diagnosed the problem as public misunderstanding or anxiety. The coordinators had previously obtained funds in the name of diagnosing and overcoming public unease about biotechnology. At the beginning and end of the Consensus Conference, the funders made clear their aim to enhance ‘public understanding’ of biotechnology and thus support for it. Underlying the exercise was a presumed cognitive deficit of the public.

The Consensus Conference centred upon a lay panel of relative newcomers to the biotechnology debate; they would question and learn from designated experts–whose selection was contested within the Steering Committee. Two members attempted to exclude representatives of ‘extreme’ anti-biotech groups from expert status–and thus from a list prepared by the organisers–though this effort did not prevail (Joss 2005b: 211). The organisers portrayed themselves as neutral mediators between experts and the public. However, the exercise demarcated a boundary between ‘expert’ and ‘public-interest’ views, thus demoting the latter (Purdue 1995, 1996).

A particular lay/expert boundary was performed by expert witnesses, in the process of being questioned by the lay panel. The panel expressed views about economic, political, legal and ethical issues of agbiotech.

Yet the key questions–and the experts’ responses–were largely framed within the technocratic discourses of specialist expert knowledge… It was largely taken for granted that the task of technology assessment depended primarily upon the technical and professional skills of research scientists (Barns 1995: 203).

The structure implied that experts are needed to help overcome the deficient understanding of the public, though the lay panel often challenged the supposed neutrality of official expertise (ibid.).

[This] set up a functional division of labour: ‘lay’ people ask questions, while ‘experts’ provide the answers. Indeed to play out their ‘lay’ role properly, the ‘lay’ panel was obliged... to show appropriate deference to the ‘experts’ and the organisers. The ‘lay’ panel was thus encouraged to take on the challenge of investigating biotechnology, but from an exaggerated position of innocence and ignorance (Purdue 1995).

The whole construction of their layness induced an undue deference to the experts, irrespective of the expert’s actual level and area of competence (Purdue 1996: 533).

The lay/expert boundary was reinforced in the final, public stage of the process. There the chairman tended to give pro-biotechnology speakers the status of ‘mobile experts’, knowledgeable on diverse aspects. By contrast, NGO activists were put on the defensive to demonstrate their expertise (ibid.).

The process raised wide-ranging questions and disagreements, even within the Panel. Nevertheless, the organisers instructed the panellists to present a single report, permitting no minority views (Purdue 1996: 537). Consequently, some critical views were marginalised in the panel’s report, as if there were consensus on how to define risks and benefits.

Particularly marginalised were concerns about who would legitimately direct biotechnological innovation. Among themselves, panel members raised issues about who was ‘in control’–e.g. concerns about R&D priorities, environmental monitoring and accountability (Joss and Durant 1995: 82). In the panel’s report, these issues were largely reduced to safety controls and patent issues.

Having listed potential benefits and risks, the report concluded: ‘Biotechnology could change the world, but in order for it to be used effectively–maximising benefits and minimising risks–we also need to adapt economic and social structures to take account of the changes it might produce’. By contrast to government policy, the panel opposed any extension of patent rights; it also advocated mandatory labelling of GM food for the public right to choose. In particular: ‘Regulatory control in the UK is among the most stringent; however, there is still room for improvement’ (Science Museum/BBSRC 1994: 7, 14). Although questioning some pro-biotech arguments, the report reinforced a common societal problem of product safety, while adding the principle of consumer choice.

After the panel presented its final report, the document was interpreted in divergent ways. According to the organisers, ‘the lay panel has given the field of plant biotechnology its qualified support’ (Science Museum 1994: 2). However, the report could just as well be read as sceptical; it emphasised not only risks, but also predictable disadvantages of agbiotech. It also criticised inadequacies of government regulation, along lines similar to criticisms by NGOs. One NGO excerpted the report as campaign material, entitled ‘Whose consensus?’, emphasising differences between the panel’s report and government policy (Genetics Forum 1994).

According to the conference organisers, the UK exercise sought mainly to explore ‘the public understanding of science’ in Britain (Joss and Durant 1995: 76, 96, 104 n14). They claimed ‘to adopt the Danish model of the consensus conference’, yet that model aims to generate a wider societal debate that could influence Parliament and government. The UK exercise anyway had little potential for such influence: Parliament faced no relevant policy decision at that time (ibid: 99), and there was little public debate on agbiotech.

In any case, the lay panel had little means to challenge the UK risk–benefit framework, even if it had presented minority views. A more significant policy challenge was coming from the opposite direction. UK regulatory procedures then were facing deregulatory pressure from the agbiotech industry and other Ministries, amidst a Europe-wide campaign against ‘over-regulation’ (Levidow 1994). Environment Ministry officials saw the lay panel’s report as helpful for protecting their regulatory procedures and expertise from such pressure.

In all those ways, the UK Consensus Conference reinforced an expert/lay boundary within the UK’s risk–benefit policy framework. The Panel recommended regulatory adaptations to ensure that agbiotech would be kept beneficial and safe. Although individual panel members raised issues about corporate-biotechnological control over the agri-food chain, these were reduced to regulatory control measures, e.g. safety regulation and product labelling. This framework implied little scope for public participation in definitions of risk or benefit, much less in innovation priorities. Policy issues could be implicitly delegated to expert bodies through normative assumptions in their advice.

France 1998: The Benign Technocratic State

French political culture has been theorised as an elite, centralised technocracy. Claiming a Parliamentary mandate to represent the general good, state bodies readily exclude or marginalise dissent through regulatory procedures. This pattern was exemplified by the agbiotech sector. Molecular biologists dominated the expert advisory body, the Commission du Génie Biomoléculaire (CGB), hosted by the Ministry of Agriculture. This arrangement was complemented by confidentiality: regulatory procedures disclosed little information prior to decisions, despite EC requirements to do so.

By 1997 French regulatory policy faced a legitimacy crisis. France had led efforts to gain EU-wide approval for GM crops, yet these were now opposed by a broad range of organisations. The Confederation Paysanne, representing farmers who elaborated a peasant identity, opposed agbiotech while counterposing ‘quality’ alternatives to industrialised agriculture (Heller 2002). An oppositional petition was signed by many prominent scientists, not necessarily anti-agbiotech, but all of them concerned about regulatory failures to develop appropriate ecological expertise and risk research (Marris 2001).

In February 1997 the Prime Minister decided not to authorise commercial cultivation of Ciba-Geigy’s Bt 176 GM maize in France, even though French regulators had led EU authorisation of the same product. This unstable policy indicated a crisis of official expertise within an elite-technocratic political culture. According to some critics, an official ‘objectivity’ too narrowly defined the relevant expertise. As an alternative approach, expert procedures would open up a scientific critique of possible options; this space would provide the expertise necessary for decisions (Roqueplo 1996: 67, my paraphrase). By incorporating counter-expertise, regulatory procedures would develop an expertise contradictoire (contradictory expertise), which would enhance democratic debate and state accountability for decisions.

Perhaps illustrating those concepts, in November 1997 the government announced a set of new measures. It would finally approve cultivation of the blocked Bt maize, while establishing a Biovigilance Committee including overt opponents of GM crops. This Committee would oversee efforts to monitor Bt insecticidal maize, including pollen flow and effects of the Bt toxin (Roy and Joly 2000). This plan was not operationalised, partly because few commercial fields were ever planted with Bt maize.

The November 1997 measures also included a plan to sponsor a consensus conference on GMOs, by reference to the Danish Model. This event was later officially called a Citizens’ Conference. As an official rationale, this event would provide ‘a new way of elaborating decisions’ and a means to implement ‘participatory democracy’, according to the Ministry of Agriculture. Yet the government never clarified the relation between the citizens’ conference and its own decision-making procedure (Marris and Joly 1999). This relation was subtly played out within the conference process, especially by defining expert roles.

From the start, the conference was designed to re-assert the benign expertise of the state, especially the Parliament, which saw itself as the only legitimate representative of the Nation. Organisation of the citizens’ conference was delegated to a Parliamentary unit, Office Parlementaire d’Évaluation des Choix Scientifiques et Technologiques (OPECST), which symbolised a political neutrality separate from the government. OPECST appointed the steering committee, which in turn decided that the panel membership should represent diverse views of ordinary citizens–rather than stakeholders in the debate. It also decided which ‘experts’–all of them scientists–would give briefings or testimony to the panel, thus framing the issues in advance (Marris and Joly 1999). The organisers saw those arrangements as necessary ‘to prepare a public debate which is not taken over by one side or the other’, i.e. to correct or avoid biases in the existing public debate (OPECST 1998a). Implicitly, such biases included anti-agbiotech NGOs on one side and Monsanto on the other side, especially from the perspective of the Left-Green Parliamentary majority.

Held in 1998, the conference included different framings of the policy problem. At the public hearings, the citizens’ panel often challenged claims by experts about risks and benefits of GM crops. According to the panel’s report, control by multinational companies could threaten farmers’ independence; genetically altered species pose a risk of standardisation; and GM rapeseed poses known risks of uncontrolled proliferation, through both pollen and seeds. Nevertheless, GM crops could bring economic benefits to European agriculture (OPECST 1998b; Boy et al. 1998). Together these arguments implied the need for national public-sector expertise in agbiotech innovation.

The panel’s recommendations focused on institutional arrangements for better managing agricultural biotechnology. Such measures included the following: greater social participation in scientific advice; public-sector research on ecological risks and agbiotech innovation; a system to ensure traceability of food derived from GM crops; and adequate labelling to inform consumer choice. ‘Until these conditions are satisfied, part of the panel believes that a moratorium would be advisable’ (ibid.). By advocating state funds for agbiotech innovation, the panel accepted the government’s problem-definition of a national technological gap whose solution requires publicly funded science, presumed to be benign. The panel’s concerns about rapeseed complemented the French government’s decision to oppose approval of GM herbicide-tolerant rape, on grounds that gene flow could complicate weed control (Marris and Joly 1999).

The panel’s conclusions were translated into policy advice by the Parliamentary organisers, as if they were neutral experts in the public good. Moreover, having attended the proceedings, the OPECST President presumed to speak for the panel:

Taking all these views into account he then himself adopted a position on a number of topics… He has identified the issues and looked into peoples’ fears and concerns (OPECST 1998b).

This translation can be illustrated by the strategic issue of how to structure expert advice. The panel had proposed that a citizens’ commission should be part of the scientific advisory committee (Boy et al. 1998). Yet OPECST recommended instead that it be kept separate; this proposal could better perpetuate a neutral image of scientific advice, thus reinforcing a boundary between expert/lay roles.

The panel’s advice anticipated the general direction of government policy: more stringent regulatory criteria, risk assessment by a broader scientific expertise, and ‘independent’ risk research, which was equated with public-sector institutes. It helped to legitimise and reinforce such initiatives, which had not been universally accepted within the government beforehand. In June 1998 the government announced measures along those lines (Marris and Joly 1999). Institutional reforms emphasised expert procedures to minimise the risks and enhance the benefits of a controversial technology.

Despite its limitations, the citizens’ conference initiated a new form of active public representation and knowledge-production. Panel members explored techno-scientific and social aspects together from the perspective of ordinary citizens. They sought to inform decision-makers about the views of those who do not normally speak out–and who do not feel represented by political parties, trade unions, or environmental and consumer NGOs. This potential for participatory evaluation, especially for considering alternative options, was limited by the overall structure, especially the small opportunity to interact with designated experts (Joly et al. 2003).

Overall the citizens’ conference was used to legitimise state claims to represent the public good, especially through expert roles. OPECST selectively promoted some accounts of agbiotech and its regulation as the expert ones, while explicitly speaking on behalf of citizens. The Agriculture Ministry had claimed to implement ‘participatory democracy’, yet the exercise extended the French tradition of technocratic governance (Marris and Joly 1999).

Within this framework, expert roles remained the exclusive realm of the state authorities and their officially designated advisors. Ordinary people could question experts and recommend institutional reforms, but Parliamentary experts would officially speak for them. Thus the process reinforced lay/expert boundaries, in the face of public challenges to the official expertise for agbiotech.

EU-level NGO Participation 2001–02: Alternatives Sought

Alongside the rise of anti-biotech protest in the late 1990s, many civil society groups debated agbiotech as just one option–which could be entirely rejected. Likewise, some participatory exercises now went far beyond regulatory issues, towards alternative options for agriculture. But this broad scope conflicted with state sponsorship of participatory exercises.

Unlike the previous exercises, the 1998 UK Citizen Foresight panel had mainly academic funding and a low public profile. The organisers explicitly sought to develop ‘lay expertise’, thus going beyond previous models of non-experts questioning experts; for example, participants would analyse how a technology may shape their lives. Asked to evaluate agbiotech, the lay panel decided to draw comparisons with alternative agricultures. Ultimately they saw no benefits or need for agbiotech in the UK, especially given those available alternatives (Wakeford 1998). Although reflecting widespread public views, this broad-scope evaluation had no link with any policy process or state body.

Since the late 1990s EU-level bodies have sponsored stakeholder consultations on agbiotech. Participants defined the problem in divergent ways, especially regarding the criteria for agricultural research priorities and risk-assessment procedures. In 2001 a stakeholder dialogue was convened by the European Federation of Biotechnology (EFB), with funding from the European Commission. The convenors aimed to focus on ‘environmental risks and safety of GM plants’, but the discussion could not be contained within those terms. A key point from the plenary sessions was that participatory experiments need to involve the public in defining scientific research priorities; public concerns must be taken into account ‘so as to democratise decision-making processes’ (SBC 2001).

Participants disagreed over how GM crops relate to sustainable agriculture. According to some industry and government representatives, agbiotech products could facilitate Integrated Crop Management systems, but NGOs regarded them as incompatible. NGOs sought a greater knowledge-base for adequate regulation and for a different innovation trajectory:

… generation of data on the impacts of different agricultural systems would provide a context for evaluation of the impacts of GM crops, and would make it easier to judge their significance. What are the relevant agricultural systems to compare? … Options are organic, extensive/integrated, or intensive/conventional agriculture… (SBC 2001, summary of Working Group III)

In a later EFB workshop, on ‘Public Information and Public Participation’ in agbiotech regulatory issues, sharp disagreements again arose over the expert basis for regulatory procedures. Participants disagreed about whether scientific risk assessments are value-laden, and likewise whether environmental and health issues are also ethical issues (SBC 2002: 9). NGO views converged on these issues, while collectively disagreeing with industry representatives, whose arguments relegated risk issues to objective expertise. NGO arguments implied greater potential scope for public participation in regulatory procedures.

Looking beyond those procedures, moreover, NGOs proposed research on less-intensive crop-protection methods–as a more stringent comparator, and as an alternative societal choice. ‘Public participation might lead to better identification of research needs, e.g. comparison of agro-ecological consequences of conventional agriculture, IPM with/without GM crops, and organic agriculture’, according to the workshop report (ibid.). In sum, environmental and consumer NGOs had quite different regulatory agendas, but they all sought ways to open up both the regulatory and innovation arenas for a broader expertise. Thus stakeholder consultation challenged the state’s commitment to agbiotech as a European imperative, as well as the expert/lay boundaries which underlay regulatory procedures.

As a potential entry point for agricultural alternatives, a comparative assessment could be done within EU regulatory procedures. Risk assessment can compare a hazardous product with alternative options that may pose lower risk, according to official guidance on the precautionary principle (CEC 2000). Some EU member states have unfavourably compared GM crops with safer or ‘sustainable’ alternatives, especially since the late 1990s.

However, EU regulatory procedures have provided no scope for such discussion. Expert advisors rejected all doubts about the safety of GM crops proposed for commercial authorisation (Levidow et al. 2005). The EU procedure invites public comments on risk assessments of specific GM products, yet comments must be framed in those technical terms–or else be ignored (Ferretti 2007). As seen in the national examples above, special participatory exercises likewise translated and reduced wider concerns into regulatory issues for expert risk assessment. Thus the exercises largely complemented the regulatory framework and reinforced its limits.

UK 2003 Public Dialogue: Constructing the Public

From the late 1990s onwards, the UK had a widespread public controversy over agbiotech. Protest actions and attacks on field trials gained public support by linking GM crops with various issues–BSE, other food scares, globalisation, ‘pollution’, etc. (Levidow 2000). The government faced an impasse over regulatory decisions, especially the criteria for permitting a GM herbicide-tolerant maize which the EU had approved in 1998. As a key issue, conservation agencies had warned that changes in herbicide usage could harm farmland biodiversity, so the government funded farm-scale trials to monitor such effects.

To address wider issues beyond risk regulation, the government had created the Agriculture and Environment Biotechnology Commission (AEBC) in 2000. Its report, Crops on Trial, advised the government to initiate an ‘open and inclusive process of decision-making’ within a framework that extends to broader questions than herbicide effects. It proposed a ‘wider public debate involving a series of regional discussion meetings’ (AEBC 2001: 19, 25). The government was persuaded to sponsor this–alongside the intense, sporadic debate which was occurring anyway.

Called ‘GM Nation?’, the official public debate was carried out in summer 2003. Beforehand the government vaguely promised ‘to take public opinion into account as far as possible’. The exercise was intended for the organisers to gauge public opinion, rather than for participants to deliberate a collective view on expert matters (Horlick-Jones et al. 2006). ‘GM Nation?’ also aimed to elicit views of the ordinary public, rather than organisational representatives–an artificial distinction, given that most civil society organisations and wider social networks had discussed agbiotech in previous years.

The overall Public Dialogue had a tripartite structure which explicitly distinguished between lay and expert issues: ‘GM Nation?’ was designed mainly for the lay public; an expert panel carried out a Science Review of literature relevant to risk assessment; and a government department carried out a Costs and Benefits Review of GM crop cultivation in the UK.

Although the three parts were designed with an explicit aim that they would work closely together, the procedures were kept formally separate; yet the supposedly lay and expert issues became intermingled in practice. The official boundaries were both challenged and policed, thus constructing the participants in contradictory ways.

Representing Public Views?

‘GM Nation?’ featured several hundred public meetings open to anyone interested, drawing over 20,000 participants (DTI 2003). When participants in ‘GM Nation?’ largely expressed critical or sceptical views towards agbiotech, arguments ensued over whether they were ‘representative’ of the public. According to a pro-agbiotech coalition, the Agricultural Biotechnology Council, the exercise had been hijacked by anti-biotech activists, so the format was not conducive to a balanced deliberation of the issues.

According to academic analyses, however, that criticism frames the public as atomised individuals who have no prior opinion. The exercise predictably drew a specialised public which was largely suspicious or hostile to agbiotech. Participants represented both themselves as individuals and wider epistemic networks. The debates were filling an institutional void, in the absence of any other formal opportunity to deliberate the wider issues (Reynolds and Szerszynski 2006).

The government sponsors had asked the contractors to involve ‘people at the grass-roots level whose voice has not been heard’. As the official evaluators noted afterwards, however, it was problematic to distinguish clearly between ‘an activist minority’ and a ‘disengaged, grass-roots minority’. Many participants in ‘GM Nation?’ were politically engaged in the sense that their beliefs on GM issues formed part of their wider worldview. Yet policymakers tend to construct ‘the public’ as an even-handed majority–and therefore legitimately entitled to participate in engagement exercises (Horlick-Jones et al. 2004: 135, 2006). Indeed, ‘grass-roots’ conventionally means local organised activists, yet this term was strangely inverted to mean a passive, uninformed public.

As envisaged by the sponsors, separate focus groups would allow the public to frame the issues according to their own concerns, yet special measures were needed to realise the policymakers’ model of the public. They saw the open meetings as dominated by anti-biotech activists, unrepresentative of the general public. Politically inactive citizens were seen as truly representative and thus as valid sources of public opinion, by contrast to ‘activists’. To exclude such activists, candidates for the focus groups underwent surveillance and screening. ‘Perhaps paradoxically, the desire to allow the public to frame the discussion in their own terms led the organisers to rely on private and closely monitored forms of social interaction’. According to this ideal model of the focus groups, the organisers would be listening to the idiotes–by analogy to ancient Greek citizens too ignorant to fulfil their civic responsibilities (Lezaun and Soneryd 2006: 22–23). In this way, the more informed, expert citizens would be excluded from representing the public.

‘GM Nation?’ was intended to canvass all views and concerns about agbiotech, yet there were boundary disputes over issue-framings, admissible arguments and participants’ roles. Some used the opportunity as politically engaged actors in their own right, not just as indicators of public opinion. Attending meetings shortly after the US–UK attack on Iraq, some participants drew analogies between government claims about agbiotech and about Weapons of Mass Destruction. They suspected that the government was concealing or distorting information in both cases; they wondered whether it would ignore public opinion towards agbiotech, as in the attack on Iraq. Initially the chair tried to steer the discussion back to agbiotech, on grounds that ‘GM Nation?’ was not about the Iraq war, though participants still elaborated the analogy. Thus the public consultation revealed a disjuncture between public politics and government policy as understood by the sponsors of the exercise (Joss 2005b: 181).

Expert/Lay Roles

For the carefully selected focus groups, the organisers commissioned ‘stimulus material’, so that participants would have a common knowledge-basis for discussion. The Steering Group asked the contractors to supply ‘objective’ information. Yet there were grounds to include ‘opposing views’ because ‘this is often how people encounter information in real life’, according to the official evaluators of ‘GM Nation?’. The final material did include divergent views, but their sources were removed from the workbook for the focus groups. Afterwards the official evaluators questioned ‘the extent to which information is meaningful if it is decontextualised by stripping it from its source’ (Horlick-Jones et al. 2004: 93–94; Walls et al. 2005).

Indeed, people often make judgements based on the institutional source of expert views, but they had little basis to do so in the ‘GM Nation?’ focus groups. Omission of the sources was not simply a design deficiency in the exercise. By default, the issue of expert credibility was diverted and reduced to scientific information about biophysical risk. Participants had little basis to evaluate such information, so the exercise constructed a lay/expert boundary, constraining public roles even more narrowly than in the wider public debate.

Separate from ‘GM Nation?’, the GM Science Review was officially limited to a panel of experts evaluating scientific information. At the same time, relevant NGOs were consulted about experts who could represent their views on the panel. In this way, panel members were selected along relatively inclusive lines, encompassing a wide range of views about GM crops. As these selection criteria recognised, the public did not regard scientific expertise as a neutral resource (Hansen 2006: 580), so the Panel’s public credibility would depend upon a diverse composition. Although the Panel’s report identified no specific risks, it emphasised uncertainties and knowledge-gaps important for future risk assessment of GM products (GM Science Review 2003). These uncertainties implied scope for a wider public role in expert judgements.

As a high-profile part of the GM Science Review, the Royal Society announced a meeting to ‘examine the scientific basis’ of various positions. Opening the event, the chair announced the laudable aim ‘to clarify what we know and do not know’ about potential effects of GM crops. In the morning, agro-ecological issues were analysed in a rigorous way, especially for their relevance to the prospect that broad-spectrum herbicides may be widely used in the future. But those complexities were ignored when considering GM herbicide-tolerant crops in the afternoon (Levidow 2003). By downplaying expert ignorance, the overall structure did not facilitate a debate about knowledge versus ignorance, nor provide much basis for public involvement.

Moreover, the boundaries of ‘science’ were policed along pro-biotech lines. Inconvenient issues, findings or views were deemed non-scientific. For example, speakers freely advocated the need for agbiotech to solve global problems, e.g. environmental degradation, the food supply, etc., but the chair cut off anyone who questioned these claims–for going beyond science (ibid.). Thus biotechnological framing assumptions were reinforced as ‘science’, along with the expert status of their proponents–while sceptics were marginalised as merely expressing lay views on extra-scientific issues.

In sum, the UK Public Dialogue involved a struggle over how to construct the public, especially in relation to expertise. The structure and management imposed boundaries between apolitical grass-roots citizens and activists, as well as between lay and expert status. Nevertheless participants challenged those boundaries, performed different models of the public and questioned dominant expert assumptions.

Conclusions: Tensions of Neoliberal Risk Governance

The Introduction posed the following questions:

  • How and why did state bodies sponsor participatory TA of agbiotech?

  • What aims arose in designing, managing and using those exercises?

  • As a remedy for what problems? How did the process govern societal conflicts?

  • How did participants question agbiotech and government policy?

  • How did the overall process relate to the democratic accountability of representative democracy?

Since at least the 1990s European governments have promoted specific technologies as if driven by objective imperatives, e.g. global economic competition, trade rules and expert claims. Epitomised by agbiotech, a neoliberal policy framework portrayed technological decisions as technical issues, pre-empted societal choices and rendered government untrustworthy. By default, representative democracy was left unaccountable for its political choices and normative commitments. According to a critical account, democratic control over biotechnology ‘was sometimes set aside in favour of other culturally sanctioned notions about what makes the exercise of power legitimate’, especially through forms of expertise (Jasanoff 2005: 272, 287). As this paper has shown, state-sponsored participatory exercises have largely complemented that displacement process, despite democratic aims by some organisers or participants.

In the late 1990s, European protest turned agbiotech into an ominous symbol of that unaccountability. A series of food scares, especially BSE, aggravated public suspicion about the industrial agri-food system. Critics linked GM products with unsustainable industrial agriculture and its systemic hazards, commercially driven science and economic globalisation. As anti-biotech protest intensified in the late 1990s and undermined government legitimacy, policy analysis focused on public attitudes as the problem, generally understood through various ‘deficit’ models. Mainstream diagnoses initially emphasised public ignorance and irrationality towards science, while later diagnoses emphasised public distrust–in turn attributed to inadequate risk communication or inadequate institutional capacity to address public concerns.

Such diagnoses have informed various aims and designs for participatory TA, apart from an early aim of democratising technology (Klüver 1995; Stirling 2005). Participatory exercises have attracted diverse aims–e.g., to educate the public, to counter ‘extremist’ views, to gauge public attitudes, to guide (or reinforce) institutional reforms, and/or to manage societal conflicts. Given these conflicting aims of participatory exercises, their practical role can be known only by analysing the internal and external dynamics (cf. Joss 2005a).

Since the 1980s various state bodies have sponsored participatory exercises for evaluating expert claims about agbiotech. Despite great differences among the four cases surveyed here, they had common features. Deliberative processes investigated the normative, value-laden basis of expert claims on ‘technical’ matters; participants found opportunities to develop lay expertise (cf. Kerr et al. 1998; Wakeford 1998). Going beyond interest-group negotiation, participants addressed the public good (cf. Hamlett 2003), thus generating conflicts over which societal problems were ‘common’ ones for deliberation–and which were to be excluded.

Participants expressed diverse views and preferences; some questioned the normative basis of agbiotech vis à vis alternative options for future agriculture. However, the search for a consensus view marginalised such issues of societal choice over innovation trajectories. Particular contexts and stagings limited what could be said with any influence on the outcome (cf. Hajer 2005), especially within a group search for a consensus view. By default, the discussions generally remained within a policy framework of minimising risks and maximising benefits of agbiotech.

In such ways, participatory exercises have generally biotechnologised democracy. They have internalised and reinforced assumptions about agbiotech as progress, albeit warranting more rigorous, publicly accountable regulation. In those ways, wider controversy was channelled into institutional arrangements for agbiotech, with regulatory issues and scientific expertise as the appropriate basis to address them. Thus ‘risk’ was effectively reified as the public meaning of the agbiotech issue (cf. Wynne 2005). Citizens’ roles were modelled according to the ‘risk’ frameworks of EU and/or national legislation, almost regardless of whether the TA exercises had a close relation to the policy process. Indeed, public participation in regulatory procedures has manifested tensions between the broad comments submitted and the official ‘scientific’ criteria for relevant evidence (Bora and Hausendorf 2006; Ferretti 2007).

Likewise, participatory TA exercises had pervasive tensions between discussing a ‘common’ problem–e.g., how to make agbiotech safe or acceptable–and containing conflicts around the problem-definition. Such tensions have taken the form of boundary conflicts: participants have contested boundaries–between policy versus scientific issues, social versus technical ones, as well as between lay versus expert roles. The exercises both contained and reproduced conflicts around those boundaries. By contesting those boundaries, some participants opened up policy issues and performed different models of the public, thus implying broader roles for citizens (cf. Hajer 2005). Sometimes the TA structure opened up lay roles along those lines, towards a lay expertise. But ultimately the process reinforced lay/expert boundaries, which took forms specific to each national case.

Such boundary conflicts erupted more starkly in some national cases. In the German TA exercise, NGO representatives could maintain their official expert status only by accepting the risk–benefit framework imposed by the organisers. Instead they demanded a comparative assessment, withdrew from the exercise and thus were relegated to the status of lay public or irrational objectors. In the 2003 UK Public Dialogue, the official structure nominally separated all relevant issues into three components–public concerns, scientific risk assessment, and economic benefits; accordingly, expert matters were formally separated from the issues for discussion by lay participants. Despite that official tripartite structure, all the issues became mixed in practice; their boundaries were both contested and policed.

Across the national cases surveyed here, such boundary conflicts can be seen as contradictions of governance strategies for incorporating and/or marginalising dissent. According to its proponents, ‘risk governance’ should play several ideal-normative roles in addressing public concerns (see Table 1, bottom row). Those roles can be turned into analytical questions about the various examples here:

  1. Making the regulatory process more open and trustworthy for the public? In some cases, participatory TA signalled or even facilitated ways to open up the regulatory process. However, openness has been shaped according to various ‘deficit’ models of the public. The ‘public’ is actively constructed through the selection of participants and their roles in the exercises.

  2. Making decisions more accountable for their basis in science, uncertainty and policy? Dominant structures have tended to separate biophysical risk issues from ‘political’ sources of conflict. Nevertheless, by developing lay expertise, participants have questioned the policy role and basis of scientific knowledge.

  3. Accommodating conflicting goals and building common values? In participatory TA exercises, a regulatory focus provided a common value and thus a claim to represent the general interest; this could incorporate and/or marginalise conflicts over agri-biotechnological innovation. Tensions arose between a focus on agbiotech and demands for alternatives, amid contending accounts of ‘sustainable agriculture’.

Thus the ‘risk governance’ approach can be used as a heuristic device to analyse conflicts within participatory TA exercises. They feature a contradictory governance process–constructing particular societal problems as common ones, while constructing the public in their image. ‘Common’ problems come from a neoliberal policy framework promoting technology for economic competitiveness, e.g. for efficient agri-industrial production. Partly through a risk–benefit framework, a discursive depoliticisation reduces those contentious issues to managerial ones within an ultimately reconciled future (cf. Goven 2006; Pestre 2008).

This governance process obscures or even reinforces the dominant power to frame innovation needs as objective imperatives. Participatory TA offers ways to manage societal conflicts over technological innovation, regulation and its neoliberal framework–while awkwardly reproducing those conflicts. If analysed in this way, public participation and engagement can ‘clarify what conflict is really about’ (de Marchi 2003).

What do these tensions mean for accountability? ‘In practice, the relationship between representative democracy and participatory methods becomes most clear and complementary when engagement is approached as a means to open up the range of possible decisions, rather than as a way to close this down’, according to a proponent of participatory TA (Stirling 2006: 5; see the longer quote above). In the European agbiotech cases surveyed here, state-sponsored participatory TA exercises often anticipated, stimulated or reinforced policy changes which enhanced the state’s democratic accountability for regulatory frameworks. Such outcomes depended upon a longer-term socio-political agency beyond the TA exercise and its panel.

However, the TA exercises did not help publics to hold the state accountable for its commitment to agbiotech as an objective imperative. By closing down or subtly marginalising such issues, the exercises complemented neoliberal forms of representative democracy. Thus democratic accountability remains a task for a wider societal contest over normative commitments and pre-empted futures. The overall prospects will depend upon wider, autonomous forms of participation–neither sponsored nor welcomed by state bodies.