Introduction

Governing technological change is a key policy challenge for contemporary societies. In their broadest sense, technologies are complex social systems, comprising not only technological artefacts, but also the infrastructure, designs, standards, procedures, applications, knowledge and social arrangements specifically associated with the design and use of those artefacts (Williams and Edge 1996; Wynne 1988). Mobile phone technology, for example, involves the telephones themselves, and the towers, tariffs, texts and Twitter groups that go with them. Technology profoundly influences the lives of every person in society: their behaviours, interactions, well-being and even their most basic beliefs and feelings (Gibbons and Gwin 1985; Sclove 1995). As well as providing new opportunities and freedoms, technologies create new impacts, risks and uncertainties (Hennen 1999). In our example, while mobile phones create new opportunities for communication, they also exacerbate a range of social problems—from poor spelling to stalking and identity theft (Srivastava 2005). The tasks of understanding and governing technological change require information that extends far beyond the technical aspects of individual technologies.

Technology assessment (TA) is a process that considers the societal implications of technological change in order to influence policy so as to improve technology governance (Gibbons and Gwin 1985; Decker and Ladikas 2004). We define technology governance as arrangements for steering or shaping technological development in line with the public interest. Governance (in contrast to government) refers to the governing of a particular area by multiple actors, including government (Lyall 2007), and is also used to describe supra-national governing arrangements (e.g. the European Union). These aspects are taken for granted in the concept of technology governance because decisions about science and technology have always involved multiple actors (government, industry, consumers) and have always had supra-national aspects. Technology governance has gained importance globally as society has become more aware of risk (Beck 1992). Unresolved concerns about risk create social conflict that obstructs technological development and undermines trust in science and science policy (Bruce 2002). As a prime example, public concern about genetically modified organisms (GMOs) led to a crisis in both GM research and public confidence in the science establishment in the United Kingdom, and to moratoria on GMO release and impacts on investment in GMO research in Australia (Gaskell et al. 1999; Deakin 2008). Alongside a growing call for scientists and technologists (as well as political decision-makers) to take responsibility for risks associated with new technologies (Hansen 2006; Swierstra and Jelsma 2006; Russell et al. 2010), there is a push for democratic input into decision-making about science and technology (Irwin and Wynne 1996; Fischer 1999). Good technology policy, informed by TA, can potentially improve returns on investment in science and technology and lead to better social outcomes.

Unlike many OECD countries, Australia has never had a formal TA agency, although various processes play a TA-like role. In this paper, we assess the quality of these TA-like activities and their capacity to inform technology policy and governance. First, we describe the history and evolution of TA internationally, focusing on the shift from a technocratic to a more democratic approach. Then, we present a set of quality criteria derived from a review of international TA practice. Finally, we describe TA-like processes and organisations in Australia and assess them against these quality criteria. The analysis is based on published reports, commentaries and discussions with key informants, as well as our participant observation of various activities. We compare the performance of various ad hoc processes with that of assessments conducted by permanent organisations. We conclude that a formal TA agency would strengthen TA capacity in Australia and would lead to improved technology policy and governance and better social outcomes from technological development.

The international TA context

TA formally emerged with the United States Office of Technology Assessment (OTA), which began in 1974 to serve the United States Congress (Herdman and Jensen 1997; Hill 1997). Its objective was to inform decision-makers responsible for governing technology and managing its environmental and social impacts (van Eijndhoven 1997; Bimber and Guston 1997). The OTA was abolished by the incoming 104th US Congress in 1995, as part of a ‘broader aim of downsizing government’ (Margolis and Guston 2003: 70). During its lifetime, the OTA had considerable influence on technology policy in the United States, and the TA model spread to Europe during the 1980s and 1990s, where TA organisations were set up at regional, national and supra-national levels and continue to practise and develop TA (Bimber and Guston 1997; Cruz-Castro and Sanz-Menéndez 2005; Vig and Paschen 2000).

Around the world, there is a range of different models for the institutionalisation of TA. The OTA was an example (in its time) of Parliamentary TA, where the purpose of TA was to inform Parliament (Congress) as a multiparty decision-making entity. Other contemporary examples of Parliamentary TA include the Flemish Institute of Society and Technology, which informs the Flemish Parliament, and STOA (Science and Technology Options Assessment), an official agency of the European Parliament. Similarly, in countries like Australia which downplay the role of ‘parliament’ and favour a ‘government’ approach, a form of Government TA could be conceived in which the agency’s role would be to inform government policy, reporting through the Minister of the relevant department. However, there are many other models for the institutionalisation of TA. In some countries, TA is delegated to an independent statutory authority; the Rathenau Institute in The Netherlands is an example. TA agencies can be established in universities or other academic organisations, for example the European Academy for the Study of Consequences of Scientific and Technological Advances in Bad Neuenahr-Ahrweiler, Germany. TA can also be done within industry itself or in conjunction with industry, as is the case with the Fraunhofer Institute for Systems and Innovation Research in Germany. Finally, potentially in conjunction with any of the above-mentioned forms of institutionalisation, TA can be outsourced to private consultants.

Early versions of TA aimed at providing objective ‘facts’ about technologies and their risks, impacts and benefits, with which politicians could make optimal decisions (Bereano 1997). This ‘technocratic’ approach to TA (and to decision-making) has come under increasing criticism. Given uncertainty about technology and its impacts, especially those of emerging technologies, assessments must deal with controversies, disputed claims and multiple perspectives (Gibbons and Gwin 1985; Hennen 1999; Pellizzoni 2003). In addition, assessment of risks and benefits involves competing interests and values. Good TA deals with questions of how technology and society should be—in other words, it has a normative dimension (Hennen 1999; Verbeek 2006; Grunwald 2006). TA, therefore, cannot deliver definitive answers to policy makers.

Good TA has much in common with good policy making. Brunner (2006) has highlighted the importance of ‘context-sensitive methods’ in constructing a paradigm for practice in the policy sciences. These methods depend critically on integrating multiple streams of information, including the views and observations of people with different perspectives, not only those of the scientific or technocratic élites. As such, policy making is a practical craft that is informed and influenced by multiple sources of evidence, of which formal scientific or research-based evidence is only one (Head 2008). This underlines the inherently social nature of TA and its associated policy outcomes. TA and the policy sciences have common elements in needing to clarify and secure the public interest; appraise decision processes; examine social contexts; and take account of divergent perceptions and understandings (Wilshusen and Wallace 2009; Lasswell 1971; Clark 2002).

Acknowledging the social nature of technology, TA and the policy-making process, European TA has taken a participatory turn, seeking to increase the legitimacy of TA by involving a wider spectrum of those affected by technology decision-making. New approaches and methods seek to involve stakeholders and the general public in assessments of technology issues and to inform decision-making through interaction and dialogue, in addition to information (Decker and Ladikas 2004; Joss and Bellucci 2002; Pellizzoni 2003; Hennen 1999). The participatory turn in TA has also been influenced by a movement towards public engagement with science and technology, particularly in the United Kingdom (Wilsdon 2005; Durant 1999; Salisbury and Nicholas 2005). Previously, the prevailing science communication rationale attributed public suspicion of emerging technologies (e.g. GMOs) to a lack of understanding of the science—the so-called deficit model (Irwin and Wynne 1996; Durant 1999). Despite considerable funding for activities to educate the public, however, public concern and mistrust of science and emerging technologies deepened (Gaskell et al. 1999), particularly in the debate about GMOs and other biotechnology applications. This led to a shift in both academic and certain policy circles towards a ‘dialogue model’ of public engagement, which seeks to include the community in decision-making processes and take account of the opinions, expertise and values of all parties (Irwin 2006). Since 2000, a number of public engagement approaches and methods have been discussed and trialled, notably ‘GM Nation?’, a UK nationwide debate on GMOs (Rowe et al. 2005; Gaskell 2004; Hagendijk and Irwin 2006).

The participatory turn has led to development of methods for engaging publics and stakeholders, and to stronger links between TA agencies and the public sphere. At the same time, there has been continuing emphasis on relationships between TA agencies and policy makers, and on the impact (or effectiveness) of TA. While impact is difficult to measure, the TAMI project, “Technology Assessment in Europe: between Method and Impact” (Decker and Ladikas 2004), has reported examples of European TA projects which have:

  • contributed to, and extended, public debate on various new technologies, e.g. open source software (Danish Board of Technology), cloning (Rathenau Institute);

  • fed into parliamentary discussion, e.g. gene technology and food (TA-Swiss), ageing society (Danish Board of Technology), communications regulation (Parliamentary Office of Science and Technology, UK); and

  • led to new legislation or policy, e.g. silicone breast implants (Scientific and Technological Options Assessment, European Parliament), GM food (Flemish Institute for Science and Technology Assessment), genetic testing (Danish Board of Technology).

Quality criteria for technology assessment

We assessed practices, developments and insights from international TA by undertaking a review of available documents and through interviews with TA agency staff in Europe and the United States. From this, we developed a set of criteria by which the quality of TA could be assessed. The criteria are grouped in terms of method and impact, which are equally important (Decker and Ladikas 2004). The method criteria address the conduct of the assessment, the methods used, the interactive and participatory processes involved and how these lead to a clearer picture of the technology and its societal implications. The impact criteria relate to the impact that the assessment (the process and the results) has on policy makers and other actors. The criteria are listed below, followed by detailed descriptions.

  • Method criteria

    • Systematic: rigorous, reflexive, informed by existing theory and practice; quality controlled, involving (extended) peer review, advisory group or steering committee

    • Broad: considers a broad range of issues beyond technical and integrates multiple perspectives; transdisciplinary

    • Inclusive: participatory, deliberative, engaging, transparent

    • Resourced: adequate resources and time frames

  • Impact criteria

    • Trustworthy: reputable, independent, multipartisan

    • Influential: organisational links to decision-makers, communication strategies, access to media; leads to change in policy, opinion or action

Method criteria

TA projects should be systematic in the sense of being well planned and managed, informed by existing theory and practice, and using established methods. There should be mechanisms for quality control, including internal and external review. Internal quality review includes the personal reflexivity of the individual researchers—who should consider biases associated with their knowledge, perspectives and methods—and collective reflexivity in the form of reviews and discussions between staff of the organisation conducting the review. External quality control may take the form of peer review, review by an advisory committee or expert panel or, ideally, extended peer review (Funtowicz and Ravetz 1993).

TA projects should be broad in considering and integrating multiple dimensions, issues and perspectives. In particular, TA projects need to move beyond scientific and technical aspects to consider a range of social issues (Head 2008; Russell et al. 2010). While coverage of issues can never be exhaustive, particularly of the full extent of indirect social effects (Vanclay 2002), a process of scoping or ‘situation appreciation’ (Decker and Ladikas 2004: 19) is important in identifying and prioritising issues for analysis, with consideration given to the purpose and audiences for the TA. In addition to breadth in the scope of an assessment, the transdisciplinary integration of perspectives and knowledge, both between disciplines and between different stakeholders, is critical to a good assessment (Thompson Klein et al. 2001; Decker 2001). Ad hoc commissioned projects (see examples below) generally come with terms of reference which frame the assessment and potentially constrain its breadth.

Assessment processes should be inclusive, both of perspectives and of actors. Given the importance of input from diverse stakeholders, both in informing analysis of technology in context and in assessing values and concerns, participation is critically important. Participation should be characterised by: (a) inclusiveness of as many relevant voices as possible, including marginalised ones; (b) deliberation through dialogue involving reasoning and openness to opinion change (Rosenberg 2007; Hendriks et al. 2007); (c) engagement of participants through provision of adequate information, skills and opportunities to contribute; and (d) transparency of the process and how participants influenced the outcomes (Rowe and Frewer 2000). Participation processes with these features give the assessment legitimacy, that is, the participants accept the process and its outcomes. Ideally, TA projects should involve ongoing communication with participants as well as audiences to inform them of the progress (including the impact) of the project.

TA projects need adequate resources and time. Determining and committing to an adequate level of resources and time are key issues for the establishment of any TA organisation or process. Adequacy can be judged in terms of the other quality criteria listed here (insufficient resources will lead to poor quality). Resourcing also covers personnel, as the quality and experience of staff are clearly major determinants of the success of TA.

Impact criteria

The impact of TA is dependent on the quality of methods and outputs (e.g. reports) and on how the organisation conducting the TA interacts with its clients and audiences, particularly relevant policy makers (Cruz-Castro and Sanz-Menéndez 2005; Decker and Ladikas 2004). TA organisations should be trustworthy. Trust depends upon the organisation’s reputation for good assessment work and on communication. It also relates to organisations being worthy of trust, through being independent (i.e. not being subject to hidden influences or interests) and multipartisan (representing a range of perspectives and actors without favour). Trustworthiness is also related to the legitimacy that organisations gain when they are inclusive.

TA should be influential in terms of opinion formation and decision-making. This depends upon trust, but also on organisational links with decision-makers, communication strategies and access to other important players, notably the media, politicians and policy makers. Influence can be gauged by whether the TA process or its outputs actually lead to changes in policy, opinion or action. This is sometimes clear when the process is directly referred to but, in most cases, it is difficult to determine given the presence of other influences. Note that there is a potential tension between influence through strong organisational links (e.g. when a TA group and decision-makers are within the same organisation or when the TA is funded by the decision maker) and trustworthiness.

An analysis of recent Australian TA-like processes and organisations

Using the quality criteria described earlier, we conducted an analysis of processes and organisations in Australia that have assessed scientific and technological developments in order to inform decision-making. We selected processes that resembled TA in the following ways:

  • they considered societal implications and social issues associated with technology

  • they involved public participation or consultation

  • they had a ‘method’ element and an ‘action’ element (i.e. potentially impacting on policy, technology design or technology management).

Methods used in the analysis included a desktop review of reports, website resources and secondary sources; discussions with key actors; and our own participation in some of the processes. We use two categories: ad hoc TA-like processes (such as reviews and inquiries) and TA-like organisations (government technology agencies, statutory commissions). The TA process (method) is important, but the organisation that conducts it is critical to the impact of the TA.

Ad hoc TA-like processes

In the absence of a formalised TA process, comprehensive assessments of particular technologies or technology issues have most commonly been conducted in Australia in the form of ad hoc reviews. Generally, a committee or working group of high-profile experts is set up to conduct the review, usually with the assistance of staff or a secretariat, and public consultation is a standard feature of such reviews. Some key examples are described below.

Consensus Conference on Gene Technology in the Food Chain, 1999

In 1999, the first Australian consensus conference was convened by the Australian Museum, initiated by the Australian Consumers Association (Russell 1999; Mohr 2002). It was overseen by a steering committee of 17 experts including scientists, environmental and consumer group representatives, science and technology studies academics and government agency staff. The conference topic was ‘Gene technology in the food chain’. It was hoped that the consensus conference model would alleviate some of the distrust and alienation that had developed amongst the public (consumers), scientists and decision-makers over biotechnology (Mohr 2002).

The consensus conference model was originally developed in Denmark (Joss 1998). It involves a panel of lay people (approximately 12), selected to reflect the diversity of views of the general public (Einsiedel et al. 2001). The panel has a pre-meeting at which it is educated about the topic and prepares for the conference by considering which experts to invite and what questions and issues to address. The conference, which is a public event, runs over several days and involves a selection of experts who are invited to give presentations and are then questioned by the lay panel. The lay panel, like a citizens’ jury (Smith and Wales 1999), then collectively writes a report summarising the issues and makes recommendations. Unlike a citizens’ jury, which can arrive at conclusions through voting, the lay panel must agree on a consensus position (although individuals may put forward a dissenting report). The Australian lay panel recommended that no new commercial release or unlabelled importation of GM food be allowed in Australia until a Gene Technology Office with responsibility for regulation of GMOs could be established, and a labelling system implemented. It also called for a cooperative consultation process involving industry, consumer groups, critics, experts and lay people in decision-making about gene technology.

The consensus conference was systematic in applying a TA method and was overseen by a multidisciplinary committee. The range of issues dealt with was broad, and the lay panel brought a range of perspectives to the process. However, the integration of issues was limited, with expert presentations tending to be adversarial and inconsistent. The process was inclusive in reflecting a broad cross-section of Australian society, although limited by the size of the lay panel. Although there was an audience of around 100, those present could only observe. The process, particularly discussions within the panel, was deliberative. The independence of the lay panel, together with the oversight by the multidisciplinary steering committee, potentially gave trustworthiness to the process, although this may have been affected by a lack of experience with consensus conferences in Australia, by the ad hoc nature of the process and by the short time frame in which it was developed (see Mohr 2002). While some of the recommendations were reflected in subsequent policy decisions, many of these decisions (such as the establishment of a regulatory office) were already in the pipeline. It is therefore difficult to assess the influence of the conference on decision-making, but it was presumably limited by a lack of imperative for the government or parliament to consider the report. Nonetheless, the process did attract considerable media attention, and it is likely that it had an indirect effect on politicians and government advisors. It is notable that there have been no high-profile consensus conferences of this kind in Australia since then.

The ALRC–AHEC Inquiry on Human Genetic Information, 2001–2003

In 2001, the Australian Law Reform Commission (ALRC)—a permanent, independent federal statutory body—joined with the Australian Health Ethics Committee (AHEC) of the National Health and Medical Research Council (NHMRC) to conduct an inquiry into the use and protection of human genetic information. The inquiry was commissioned by the Attorney General together with the Minister for Health and Aged Care. A 22-member advisory committee was established, including experts in genetic research, molecular biology, medicine, clinical genetics, genetic counselling, community health, indigenous health, health administration and community education, insurance and actuarial practice, law, privacy and anti-discrimination.

The terms of reference of the inquiry acknowledged rapid advances in human genetic technology, as well as the breadth of contexts in which the use of genetic information may be relevant and of potential concern. The inquiry considered a range of social issues, such as workplace issues, insurance, privacy and discrimination and examined the case for a regulatory framework. A consultation process began with publication of an issues paper, followed by a call for submissions, and meetings with stakeholders and the public. Public forums involved a presentation by the joint inquiry, followed by discussion. This led to the publication of a discussion paper that quoted submissions and responded to the issues raised. This was followed by another round of submissions and meetings, culminating in a final report (Essentially Yours: The Protection of Human Genetic Information in Australia).

The report, which was tabled in Parliament in May 2003, discussed a range of issues and perspectives, drawing extensively on the consultation process. It concluded that there was significant optimism in Australia about the promised benefits of genetic science for improved diagnostics and therapies, but also an underlying anxiety about the rapid pace of change and a lack of capacity to regulate science effectively in the public interest. In response, the Australian Government set up the Human Genetics Advisory Committee (HGAC) as a committee of the NHMRC. The HGAC provides ongoing advice to the government on the social, ethical and legal implications of human genetics and related technologies.

The ALRC–AHEC Inquiry rates highly against the TA quality criteria. The inquiry was not particularly informed by relevant theory or practice, but it was systematic by virtue of an extensive review process, the multidisciplinary advisory committee and an iterative consultation process. The inquiry was broad in considering and integrating a range of issues and perspectives. The consultation process gave opportunities for a range of actors and concerns to be considered, and this was done in a transparent way (Ankeny and Dodds 2008). There was, however, an absence of broad participatory deliberation and a lack of independent facilitation of public forums. The reputations of the two existing bodies that conducted the inquiry lent trustworthiness to the process, as did the multidisciplinary advisory panel. The establishment of a new agency reflects a significant influence of the inquiry on policy.

The Lockhart Review on Human Cloning and Embryo Research, 2005

An independent review of Australian human cloning and embryo research legislation (usually referred to as the ‘Lockhart Review’) was conducted in 2005, as a requirement of legislation passed in 2002 (Prohibition of Human Cloning Act 2002 and Research Involving Human Embryos Act 2002). A Legislation Review Committee appointed by the Minister for Health and Ageing was chaired by a former Federal Court judge (the Hon John Lockhart) and included a clinical ethicist, a specialist gastroenterologist who was also a community advocate, a clinical neurologist, a neuroscientist and a lawyer-ethicist, all drawn from across Australia. Their appointments were agreed to by all state and territory governments.

The committee, with the support of a secretariat, was required to provide a written report within 6 months. The report was to consider existing Acts and recommend amendments in consultation with a broad range of relevant stakeholders. The committee was informed by an independent literature review of stem cell science and other published information including surveys of public opinion. It released an issues paper; established a website; invited and received written submissions; and held face-to-face meetings, stakeholder discussion forums (run by an independent facilitator), public hearings and site visits. The public engagement aspect was informed by review of the theory and practice of public engagement in science and technology issues (Salisbury and Nicholas 2005) but was constrained by its tight time frame.

The final report included a scientific assessment of the technologies linked to an assessment of the social and ethical considerations. A detailed analysis of public submissions and hearings was included for each topic (including extensive quotations from the submissions). The committee acknowledged the complexity of assessing community attitudes in a society with diverse perspectives, interests and values, particularly when views are polarised. Their rationale was to look for matters on which the community generally agreed, instead of focusing on disagreements. For example, there was widespread agreement that some practices that were prohibited by the 2002 legislation should continue to be prohibited (e.g. cloning a human being, placing a human embryo in the body of an animal and vice versa). The report made 54 recommendations and stimulated political debate, leading ultimately to the tabling of a private member’s bill that incorporated the recommendations. Despite its contentious nature, in 2006 the bill was passed in both houses of parliament on a conscience vote.

The Lockhart Review performed well against the quality criteria. The review was systematic to the extent that it was informed by literature reviews and international practice, although it had no formal quality control process. Its treatment of issues and perspectives was broad and integrated. The process was inclusive to the extent that it was open to public participation, including facilitated forums, and considered a range of perspectives in a transparent way. The committee process was deliberative in considering the issues and the diverse perspectives of the public (Skene et al. 2008). However, broad participatory deliberation was limited, and the participatory aspect was constrained by the short time frame. Some commentators have suggested that expert groups were over-represented in the hearings and submissions (Ankeny and Dodds 2008). The committee was expert, broadly multidisciplinary and selected by a multipartisan process, and therefore trustworthy, although, as an ad hoc group, it lacked an established reputation and experience in assessments of this kind. The process had clear impact given its direct influence on policy through the subsequent passage of a bill incorporating the recommendations.

The Uranium Mining, Processing and Nuclear Energy Review (UMPNER), 2006

The Uranium Mining, Processing and Nuclear Energy Review (UMPNER) was commissioned in 2006 by the Howard Government as ‘an objective, scientific and comprehensive review of uranium mining … and the contribution of nuclear energy in Australia in the longer term’ (Commonwealth of Australia 2006). The six-member taskforce included, as chair, Dr Ziggy Switkowski, a former chief executive of Telstra Corporation, who was appointed for his commercial and managerial experience as well as his technical and scientific skills as a nuclear physicist. Three other taskforce members had worked in nuclear physics; the remaining two members had experience in economics and engineering. A seven-person expert panel, chaired by the Chief Scientist, Dr Jim Peacock, was also appointed to review the scientific aspects of the review.

The taskforce had only 6 months to complete the review and report on economic, environmental, health, safety and nuclear proliferation issues. This did not allow time for extensive community consultation. In addition to four commissioned expert studies, the taskforce was informed by public submissions (June–August 2006), consultations with individuals and organisations (generally involved in the nuclear industry), and visits to relevant facilities. A draft report was reviewed by the Chief Scientist and the expert panel, and the final report was released in late December 2006. The findings were accepted by the government and formed the basis of its policy on this issue.

The final report presents the taskforce’s conclusions without any discussion of the range of viewpoints or any direct reference to, or quotations from, the submissions. Some areas of the review are almost entirely scientific and technical, while social and political aspects are discussed without referring to submissions. The review positioned itself purely as a ‘factual base’ for decision-making, although the report suggested that ‘Australia faces a social decision’ about whether nuclear energy should be part of the mix of power generation (Commonwealth of Australia 2006: 11). Despite this statement, recommendations are put forward to support the expansion of the Australian uranium mining and export industry.

The UMPNER review rates poorly against the quality criteria due to its narrow and technocratic approach. While the review looked systematically at technical issues and included a quality control process (the scientific expert panel), it lacked a social understanding of technology. This resulted in a very narrow framing of the issue. The process lacked inclusiveness, both of actors and of issues, with public participation limited to submissions that were not dealt with transparently in the report. The selection of the taskforce was clearly political, favouring the government’s pre-existing stance on the topic, and was also narrow in terms of expertise, making it untrustworthy in TA terms. The influence of the process is unclear given that it reinforced the existing position of the government and that the Howard Government lost office in the 2007 election.

TA-like organisations

Government technology agencies

When emerging areas of technology generate public debate and concern, or promise substantial benefits, government may allocate resources and establish new structures to deal with them. Such has been the case for biotechnology and nanotechnology in Australia. These ‘technology agencies’ differ from TA agencies in that they tend to conduct activities to promote and coordinate technological development, although they may also conduct assessment activities, as discussed later.

Biotechnology Australia operated from 1999 to 2008 as a ‘one-stop shop’ to address the non-regulatory aspects of biotechnology governance in Australia. Established as an independent agency reporting to five relevant government departments, and housed within the then Department of Industry, Science and Resources (DISR), Biotechnology Australia was responsible for managing the National Biotechnology Strategy, liaising between and supporting different government entities with interests in biotechnology, administering biotechnology-related schemes such as the Biotechnology Innovation Fund, and raising community awareness. Biotechnology Australia was established to help Australia ‘capture biotechnology benefits’. This positioning changed over its 9 years of operation, with the industry support functions being transferred into DISR early on, and external events, such as the 2003–2004 state moratoria on GM crops, having an impact on Biotechnology Australia’s strategic directions. Dr Craig Cormick, the Public Awareness Manager for Biotechnology Australia, actively drew on the international discourse about the deficit model of public awareness and emphasised the importance of learning about community attitudes in making decisions. Despite this, it was difficult for Biotechnology Australia to move away from a role, or at least a perception, of advocating for biotechnology, and its activities, although contributing important research on public attitudes, continued to focus on community awareness rather than engagement. Biotechnology Australia provided educational materials and factsheets for schools and community groups and organised public events in rural and urban centres. Biotechnology Australia’s funding ceased in 2008, and it was discontinued. Instead, renewed investment was made in the emerging area of nanotechnology through the National Nanotechnology Strategy and the Australian Office of Nanotechnology, which had begun the year before and which took over some of Biotechnology Australia’s activities and personnel.

Biotechnology Australia conducted some TA-like activities, but these were not systematic, broad assessments. Assessment activities tended to have an expert-based technical focus. For example, a ‘Biofutures forum’ in 2007 brought together 100 people to hear 19 panellists discuss how biotechnologies could be used to address future issues such as fuel and food shortages, pandemics and climate change. Sixteen of the panellists were scientific or industry experts, and the remaining three were experts on community attitudes, ethics and biotechnology policy. The forum focused on scientific developments, but also described conditions that would improve Australian biotechnology capacity and governance, such as more public engagement. The forum did not attempt to include public engagement or public input. Public engagement events tended to focus on assessing public opinion and increasing public awareness of biotechnology, rather than on assessing biotechnology developments in order to inform policy. However, this research on public opinion contradicted various assumptions about public views on biotechnology and had important impacts on decision-making (including about research directions) and policy (e.g. in informing the Lockhart Review and policy stemming from it).

In 2007, the Australian Office of Nanotechnology (AON) was established as a part of the Department of Innovation, Industry, Science and Research, but working across other Australian Government departments. AON was responsible for implementing the National Nanotechnology Strategy. In 2009, this strategy and AON were replaced by the National Enabling Technologies Strategy (NETS) following a review of the Australian innovation system. The challenge for nanotechnology policy is to learn from the insights gained from the biotechnology experience (Einsiedel and Goldenberg 2004; Joly and Rip 2007; Kyle and Dodds 2009), with the hope that nanotechnology development will not be plagued by the same social conflict and debate. During 2007–2008, AON organised a series of public forums around the country investigating a range of technical and non-technical topics, and in December 2008, a workshop on social inclusion and community engagement was held in Canberra. The workshop, which adopted a participatory approach with an independent facilitator, brought together technical and non-technical experts, representatives of community groups and lay participants. It developed recommendations for public and stakeholder engagement in the ongoing activities of the AON.

AON rates more highly than Biotechnology Australia on the TA quality criteria, but its activities so far have been varied, have not followed a rigorous and standardised approach, and have not included quality control. They are therefore not systematic in a TA sense, although they are informed by a social understanding of technology, which has contributed to increasing breadth in the issues and actors they seek to consider. Recent activities have used more inclusive approaches. Technology offices, as stable organisations with significant resources, are able to build their reputation for assessment, potentially giving them increased trustworthiness. There may, however, be tension associated with their multiple and potentially conflicting roles (national technology strategy, community awareness, TA), which may reduce trustworthiness. They potentially have good access to policy makers, as well as to stakeholders and the public, and may therefore have considerable influence, although this is as yet relatively unproven.

Statutory commissions

In addition to ad hoc reviews and inquiries, Australia has a history of statutory commissions established to provide information and policy advice to government on various broad areas. While some of these have been short lived, the usual intention is that they will conduct a number of studies, building their expertise in the area, and have an ongoing role. They are created by, and report to, government, but operate at arm’s length. As this is a potential model for a TA organisation, we describe several such commissions. Note that the ALRC (discussed earlier) is another such statutory commission.

The Commission for the Future (CFF) was established by the federal government in 1985 to assess future issues, particularly relating to science and technology. It had a director, support staff and a board; conducted a range of activities and projects; and published a range of documents, from reports to brochures, and a periodical (In Future, later renamed 21C). While it aimed at influencing policy, it did not report directly to government. It had a major role in raising public awareness. A notable success was a conference on the greenhouse effect in 1988, which played an agenda-setting role in Australia and was awarded an OECD Global 500 award. The CFF ceased in 1998, its demise attributed to organisational failings and internal crises, as well as to political obstacles (Slaughter 1999).

Although the CFF’s mission was extremely broad and arguably unfocused (Slaughter 1992), issues of science and technology were at its core. Goals articulated by the first director included promoting a wider understanding of science and technology and their importance, stimulating greater awareness and discussion of the social and economic effects of scientific and technological development, increasing public involvement in setting directions for research and development, and strengthening the ability of individuals to take account of technological change in decision-making about the future (Slaughter 1992). In terms of mission, the CFF was the closest to a TA agency that Australia has had. However, in terms of the operation and priorities of the commission (in selecting and conducting activities and projects), its political positioning (links with parliament or government), and the methods used (varied, with little standardisation), there was little resemblance to a TA agency.

The Resource Assessment Commission (RAC) was established by the Hawke Labor Government in 1989 to conduct inquiries and research into topics relating to the use of Australia’s natural resources. It was disbanded in 1993 after a change of government (Economou 1996). The RAC reported to the prime minister. It had an ongoing chairperson, commissioners appointed for specific inquiries, and support staff and consultants available on demand. Its aims were to provide timely and high-quality reports to government, generate an information base, develop principles and methods for resolving resource disputes and maximise public participation.

The RAC had a broad purview in considering natural resources and their various uses; the environmental, cultural, social, industry and economic values of resources; and the implications for these values for resource use. The RAC conducted three major inquiries: the Kakadu Conservation Zone Inquiry, the Forest and Timber Resources Inquiry, and the Coastal Zone Inquiry. It was successful in its aim of uncovering the range of diverse views and perspectives on contentious issues and presenting options without favouring particular interests, as evidenced by its loss of favour with many interest groups. For example, the Kakadu Inquiry attracted criticism from mining interests, environmentalists and Aboriginal groups (Chapman 1992). According to Economou (1996), this failure to please anyone contributed to its demise.

The Productivity Commission (PC) was established in 1998 (as a successor to the Industries Assistance Commission established in 1974) to provide independent information and evaluations to assist government in policy formation. Its reports are received by government and are publicly available. It comprises approximately 10 commissioners appointed by the Governor-General and is supported by a permanent staff. The PC, as its name suggests, has a particular focus on the economic aspects of topics. Technologies and technology issues are sometimes considered (e.g. chemicals and plastics regulation, medical technology), but constitute a relatively small proportion of its projects. However, technology is relevant to many of the broader topics covered (e.g. consumer product safety, telecommunications, airport services, energy efficiency). The PC may also consider broader issues relevant to science and technology (e.g. science and innovation).

In 2002, the PC conducted a study on genetically modified (GM) crops: ‘Modelling possible impacts of GM crops on Australian trade’. Data on the productivity of GM crops and on consumer resistance were collated from international surveys, estimates of regulation costs were factored in, and various scenarios were developed to assist in anticipating possible impacts on trade. A similar study was done in 2005 to assess the impact of advances in medical technology in Australia, focusing specifically on the impacts on health expenditure. In common with TA, the PC is broad in its perspective, taking into account the interests of the economy and community as a whole. Its scope is nonetheless narrow in terms of its economic focus, and social issues are also framed by its productivity mandate (e.g. promotion of employment, economic development). Despite this, the PC is a good example of an organisation commissioned by government to conduct assessments, but with independent standing and autonomy. The government may accept or reject its advice, so the PC potentially suffers less political pressure than ad hoc committees or panels.

The statutory commissions reviewed earlier varied considerably in their TA-like activities, with the RAC having the most systematic and broad processes in TA terms. The PC is limited by its legislated scope. While statutory commissions generally have a mandate and are positioned for wide public consultation and transparency (Chapman 1992), none has really engaged with inclusive and deliberative participatory methods. The statutory commissions are generally well resourced and have the freedom to establish appropriate time frames. They are generally recognised for their independence and are in a position to build reputation and capacity, making them potentially trustworthy. They have a mandate from, and report to, government, potentially giving them considerable influence. However, their independence and frank advice make them vulnerable to political disfavour. This is also the case for TA agencies, and represents a significant challenge.

Summary of analysis

As described earlier and summarised in Table 1, Australia has had a number of examples of TA-like processes. However, they have been fragmented, uncoordinated and variable in quality and impact. The examples most like TA are the ad hoc inquiries, notably the Lockhart Review and the ALRC–AHEC Inquiry, which entailed relatively systematic and broad assessments informed by an understanding of the social nature of technology. They considered a range of topics and interests, dealing with these in a balanced and transparent way in their reports. While all of the ad hoc processes involved public consultation, only the consensus conference was broadly inclusive, including marginalised groups and deliberative processes. Ad hoc inquiries respond to a particular issue and are tailored to a particular situation or need of government. As such, they are generally influential and may have a significant impact on policy. However, ad hoc processes lack many of the advantages of a permanent organisation (discussed below), including trustworthiness through an ongoing reputation. Some of these advantages are demonstrated in the ALRC–AHEC Inquiry, which involved two permanent organisations.

Table 1 TA-like activities in Australia evaluated according to TA quality criteria

Technology agencies, such as the Australian Office of Nanotechnology, could potentially develop TA capacity but are constrained in their ability to deal with issues across sectors or across technology areas. The new National Enabling Technologies Strategy may overcome this limitation. More fundamentally, they tend to have multiple roles, such as implementing national technology strategies, promoting community awareness and providing advice to policy makers, all of which may conflict with a potential role as a TA agency. Statutory commissions also provide a potential model for a TA organisation. None of the existing commissions in Australia has TA capacity as such, but their organisational arrangements and links with government provide useful lessons for the establishment of a TA agency, including the hard lesson of how to survive in a political environment. This is also informed by the history of international TA, notably the OTA in the United States. A well-positioned TA agency walks a fine line between independence and influence, and between autonomy and abolition (Sanz-Menéndez and Cruz-Castro 2004). An agency must balance pragmatism and idealism if it is to remain trusted and valued, at the same time as providing broad, systematic and democratically legitimate assessments. Measuring up to a clear and agreed set of quality criteria is one way that organisations can justify their ongoing existence.

The case for an Australian TA agency

Developing TA capacity in a permanent organisation has a number of advantages. Firstly, ad hoc processes set up in response to contentious issues are reactive rather than proactive. This leads to time frames that are too short for quality assessment, particularly participatory assessment, or to a failure to produce timely results. A permanent organisation can anticipate upcoming controversies and begin assessments early, providing results of direct use in policy responses and potentially informing debate as it unfolds. This timeliness potentially counteracts the tendency for inquiries and reviews to delay decision-making (Chapman 1992). Secondly, ad hoc processes generally establish their own methods and procedures, which are extremely variable, and often have no standards for method or quality control. In contrast, permanent organisations can build capacity by establishing expertise, drawing on world’s best practice, evaluating according to quality criteria and providing ongoing training.

Thirdly, in ad hoc inquiries, credibility and expertise are based on high-profile participants, not on a recognised process or organisation. Unless participants are selected on a multipartisan basis, the process and results may be biased in favour of certain political goals and perspectives, which undermines the legitimacy (and quality) of the process. In addition, while participants are generally experts, they are not necessarily experts in assessment. As well as developing assessment expertise, a permanent organisation can build reputation and relationships with decision-makers and stakeholders and can involve a range of experts and stakeholders through commissioned studies or interactive activities. Fourthly, permanency creates opportunities for a longer-term, futures-oriented approach. As well as drawing on previous assessments to inform current work, a permanent organisation can integrate insights and experience over time and across a range of technology areas to provide oversight and foresight for the science and technology system, and for technology policy generally.

A formal TA agency in Australia would potentially provide the following:

  • systematic, integrated, inclusive assessments of the societal implications of new technologies

  • information for policy, media, public; contributions to technology debates (information and dialogue)

  • oversight and foresight, including capacity to consider broad, cross-sectoral issues, and a capacity to scan for upcoming technologies and issues

  • independent, iterative analysis of science and technology policy

  • stable, trustworthy assessment capacity, institutional memory, training in assessment, connection with international best practice

  • a platform to develop deliberative, participatory engagement exercises.

A formal TA agency would need independence, but also strong links with decision-makers. It would need autonomy in establishing topics, time frames and methods, but also mechanisms to deliver its results where they will have most impact. As well as contributing to policy making and the governance of science and technology, it would provide methods and models of deliberative engagement generally (Dryzek 2000). While some specialist tasks could be contracted out to external practitioners, the integrative nature of TA and the lack of TA expertise in Australia call for the development of considerable in-house capacity within a TA agency.

There are a number of obstacles to establishing a permanent TA agency in Australia. Such an organisation would require considerable ongoing funding. The costs of recruiting high-quality staff, organising engagement activities, commissioning expert studies and maintaining communications would all be significant. Establishing ad hoc processes for each TA is arguably inefficient, particularly when quality is variable. However, justifying the cost of a permanent organisation, particularly before its value is demonstrated, is clearly a challenge. Related to this are political obstacles to gaining support for a new agency. It is one thing to argue that improving TA will contribute to democracy and better social outcomes. It is another thing to convince politicians that thorough assessments of controversial technology issues, which invite public input and deliberation, and transparently balance divergent views and interests, are something they need. Attention can be drawn to the positive experience Europe has had with formalised TA agencies, and to the potential impacts of failing to establish TA.

Australia has done without TA to date, but this has arguably contributed to protracted and polarised technology debates and inertia in technology policy making. Australia lagged behind other countries, notably in Europe (Shohet 1996), in establishing a framework for biotechnology regulation, despite earlier calls for such a framework (House of Representatives SCIST 1992). Even after the eventual establishment of such a framework in the form of the Office of the Gene Technology Regulator, the authority of this federal regulation was undermined by moratoria on GMOs in every state of Australia (Deakin 2008). The current debate on climate change and mechanisms for carbon emissions control is causing major political divisions, fuelled by scientific controversy and scepticism (Alexander 2009). In the absence of balanced, independent information and thorough, transparent assessment processes, policy making in technology areas is subject to influence by powerful lobby groups (Hendriks 2002; Karapiperis and Ladikas 2004; Kelly 2009). A failure to engage the public is likely to result in ongoing backlashes from activist non-government organisations (NGOs) and community campaigns. For example, NGOs have organised to oppose nanotechnology. In general, Australia has failed to embrace public engagement in science and technology (Schibeci et al. 2006; Hindmarsh and Du Plessis 2008; Ross 2007) and has had limited engagement with foresight and national priority setting for science and technology (Martin and Johnston 1999). Perhaps a new global governance agenda, stimulated by the repercussions of the global financial crisis and climate change, will provide a catalyst for new approaches to assessment and governance, including TA, in Australia.

Conclusion

Technology assessment can make an important contribution to science and technology policy. As well as providing information about potential risks, consequences, contexts and opportunities of technologies, it can improve communication between decision-makers, technology designers, stakeholders and the public at large, increasing democratic involvement in uncertain and value-laden decisions about science and technology. This can shift fruitless and polarised debates towards creative and constructive dialogue. Through its contribution to improving technology policy and governance, TA can potentially improve the social outcomes of technological development.

Developments in international TA methods, theory and standards provide a basis for TA quality criteria. Good TA is systematic in the sense of being rigorous, reflexive, informed by existing theory and practice, and employing formal mechanisms of quality control. Good TA is broad in terms of disciplines, topics and perspectives and integrates information from multiple sources. It is inclusive in facilitating participation of a wide range of relevant actors in ways that are deliberative, engaging and transparent. Quality in TA is only achievable with adequate resources and time. In addition to these method characteristics, TA can be judged by impact criteria, including the trustworthiness of the TA organisation, which should be reputable, independent and multipartisan; and the influence of the organisation on policy, opinion or action through links to decision-makers and good communication.

Our evaluation of recent TA-like activities in Australia using international TA quality criteria indicates that TA capacity has been variable in quality and impact and has been fragmented, uncoordinated and not well informed by international best practice. We believe there is a strong case to establish a formal TA agency in Australia, which would build capacity in TA to deal proactively with technology controversies, to contribute to more socially aware technological development and to provide the foresight, oversight and community dialogue that informs technology governance in other countries, notably in Europe. In addition, this kind of organisation could contribute to making Australian technology policy more responsive to social context, more consistent with good policy-making practice, and informed by a range of public perspectives rather than dominated by expert judgements (Head 2008).

Such an organisation would require considerable investment. However, in the absence of TA capacity, continuing failure to adequately engage the public in science and technology decision-making will lead to more polarised and adversarial debates with NGOs, the media and the community. Meanwhile, technology policy making will continue to be subject to lobby group influence and will be slow to react to emerging problems and conflicts. Overall, failure to improve the democratic governance of emerging science and technology, especially in the context of uncertainty, will contribute to social and environmental impacts, distrust and conflict, and lost opportunities to harness technological development for the public good.