Poiesis & Praxis, Volume 7, Issue 1–2, pp 55–71

Collingridge’s dilemma and technoscience

An attempt to provide a clarification from the perspective of the philosophy of science
Focus

Abstract

Collingridge’s dilemma is one of the most well-established paradigms presenting a challenge to Technology Assessment (TA). This paper aims to reconstruct the dilemma from an analytic perspective and explicates three assumptions underlying the dilemma: the temporal, knowledge and power/actor assumptions. In the light of the recent transformation of the science, technology and innovation system—in the age of “technoscience”—these underlying assumptions are called into question. The same result is obtained from a normative angle by Collingridge himself; he criticises the dilemma and advances concepts on how to keep a technology controllable. This paper stresses the relevance of the dilemma and of Collingridge’s own ideas on how to deal with the dilemma. Today, a positive interpretation of technoscience for effective TA is possible.

Zusammenfassung

Ausgangspunkt ist das so genannte Collingridge Dilemma, das TA vielfach herausgefordert hat und noch immer herausfordert. Das Dilemma wird analytisch rekonstruiert. Wir führen drei zugrunde liegende Annahmen aus: eine zeitliche, eine wissensbezogene und eine akteurs- und einfluss-orientierte. Wird der derzeitige Wandel des Wissenschafts-, Technik- und Innovationssystems—im Zeitalter von Technoscience—mit betrachtet, dann zeigt sich, dass die zugrunde liegenden Annahmen nicht mehr gut begründbar sind. Das vorliegende Papier zeigt, dass dennoch die Überlegungen Collingridges, wie mit dem Dilemma umgegangen werden kann, für heutige TA-Debatten relevant bleiben und dass eine positive Interpretation von Technoscience für die Möglichkeit von effektiver TA gegeben ist.

Résumé

Le dilemme de Collingridge porte sur l’un des paradigmes les plus établis et qui pose de réels défis à l’évaluation des choix technologiques (TA). Ce document vise à reconstruire le dilemme de manière analytique et à expliquer trois suppositions sous-jacentes: une temporelle, une basée sur la connaissance et enfin une basée sur l’acteur et le pouvoir. A la lumière des récentes transformations du système de la science, de la technologie et de l’innovation—à l’âge de la technoscience—ces suppositions sous-jacentes sont remises en question. Collingridge lui-même est arrivé à ce résultat sous un angle normatif; il a critiqué le dilemme et propose des concepts sur la manière de garder la technologie contrôlable. Ce document souligne l’importance du dilemme et des propres idées de Collingridge sur la manière de traiter ce dilemme. Une interprétation positive de la technoscience pour une TA efficace est possible aujourd’hui.

1 Introduction

Are we trapped in a control dilemma? Yes, indeed, we are! This seems to be the standard position. Technology Assessment (TA) has faced this challenge since its very beginning in the 1960s; the dilemma formulated by David Collingridge (1980) more than a decade later seems to have become widespread among the TA community (cf. Wagner-Döbler 1989; Bechmann and Frederichs 1996: 12; Gloede 1994). In fact, TA can be regarded as the science-based effort to meet these challenges and to counteract the dilemma by deepening and broadening the knowledge basis for assessment procedures and control strategies.

But is there really a way out, as the advocates of TA implicitly suggest? The issues addressed by the dilemma seem to be frustrating and, even worse, paralyzing. Obviously, we cannot overcome the obstacles to controlling a technology: if we accept the assumptions in Collingridge’s approach, we run into the dilemma. However, one of the theses we present in this paper is that some of the underlying assumptions supporting the dilemma are not well founded. A number of challenges posed by the dilemma vanish in light of the technoscientific development over—at least—the last 30 years.1 New concepts try to describe a transformation: Mode-II, post-normal, post-academic, post-paradigmatic, and technoscience. We mainly consider the concept of technoscience. In this paper, we indicate that technoscience presents not only a problem or a challenge to TA but also an opportunity.2

In the following, we will reconstruct the structure of Collingridge’s (so-called) dilemma and explicate some of the implicit assumptions that give rise to the “dilemma” (temporal, knowledge, power/actor assumptions) (Sect. 2). We will then examine Collingridge’s own critique of this dilemma and his attempt to overcome it. Fascinatingly, Collingridge’s own approach turns out to be in line with very recent developments in technoscience (Sect. 3). Today’s technoscience raises questions about the assumptions that lead to the dilemma, akin to Collingridge’s own ideas (Sect. 4). We conclude that, on the basis of this clarification, recent concepts of TA could be (re)considered and debated (Sect. 5). But this remains a task for further study (cf. Liebert and Schmidt, Paper B, this volume).

2 Collingridge’s dilemma and its underlying assumptions

Collingridge’s dilemma is one of the most well-established paradigms presenting a challenge to TA. In his influential work The Social Control of Technology, Collingridge speaks of “a dilemma of control”:

“The social consequences of a technology cannot be predicted early in the life of the technology. By the time undesirable consequences are discovered, however, the technology is often so much part of the whole economic and social fabric that its control is extremely difficult. This is the dilemma of control. When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming”. (Collingridge 1980:11)3

Although Collingridge is famous for articulating the dilemma, he is not arguing that the dilemma cannot be overcome. The dilemma’s societal and epistemological status cannot be regarded as a universal, historically invariant proposition. In other words: the dilemma’s proposition is not an unhistorical socio-technological or socio-anthropological constant. Rather, the dilemma has emerged and, to put more emphasis on human decisions and actions, has been constructed (intentionally or not) by men. Even though the dilemma was descriptively well founded for some case studies in the twentieth century,4 Collingridge’s normative goal is to “propose … a new way of dealing with the dilemma of control.” (ibid. 11)

In order to reconstruct the argumentation entailing the dilemma, we have to clarify the underlying assumptions. The dilemma makes use of a notion of control that refers to knowledge (e.g., epistemic knowledge about the societal future) and power and actors (e.g., political power to change conditions): “It must be known that a technology has, or will have, harmful effects, and it must be possible to change the technology in some way to avoid the effects.” (ibid. 11) To control technology, both knowledge and power to act are necessary: Knowledge without power lacks impact; power without knowledge is blind. Collingridge reflects, therefore, on a knowledge dimension and a power dimension to formulate the dilemma. But this is not sufficient. In addition, Collingridge considers a temporal dimension of control, namely early and late control.5 The major challenging question is: when to control? Early control might be possible due to the power to change situations and boundary conditions, but it lacks knowledge about the consequences; late control can rely on much knowledge but is mainly powerless. In order to reconstruct the dilemma, these three dimensions have to be conjoined: time, knowledge and power/actors.

Are these assumptions obvious? Collingridge does not provide an explicit justification that would make these assumptions, at least to some extent, plausible; he mainly presents case studies (e.g., lead in petrol, military technology, the nuclear arms race, energy, breeder reactors, electricity systems, the Manhattan project). Collingridge chooses the case study approach in order to analyse the “roots of inflexibility” (ibid. 45 et seq.) and to support his thesis that a dilemma is predominant throughout the complex history of science, technology and society. Let us look at this point in more detail. Collingridge’s dilemma can be traced back to three theoretical frameworks: (1) innovation theory for the temporal dimension relating to the object under consideration, (2) ethics and philosophy of science for the knowledge dimension and (3) action theory for the power dimension. From the angle of philosophy, one might recognise here an ontological, an epistemological and a methodological assumption.

2.1 First thesis: the temporal dimension

The temporal assumption underlying Collingridge’s dilemma is based on innovation theory and relates to the ontogenesis of a new technology: Innovation is regarded as a temporal process, namely the development of technology in society (cf. Fagerberg et al. 2005; Schmidt 2008). Classic innovation theories date back to the planning-optimistic decades, the 1960s and 1970s.6 Although different types of innovation theories compete with each other today, the classic paradigmatic type is still the predominant underlying view; it consists of a linear chain.7 According to this classic type, innovation is considered as being ordered along a chain of distinguishable time phases. One phase is linearly, causally followed by another; later phases are based on earlier ones, creating a perfect succession of different phases: fundamental science, invention, innovation, development, production, diffusion and consequences. Collingridge uses the same terminology, explicitly the terms “development” and “diffusion” (Collingridge 1980:16/17). When innovation is framed from this mechanist–determinist chain perspective, the question of which of the two end points of the linear chain should be considered as most important has become a matter of dispute: Which is the main driving force triggering the process—science (“science-push”) or market dynamics (“demand-pull”)?8 Although Collingridge’s dilemma remains vague as regards the driving forces and their interactions, it stresses the relevance of science and, hence, favours looking at the innovation chain from the science end.9 In sum, a linear chronology in the ontogenesis of a new technology is presupposed by the dilemma. It is based on a clear time-indexing of ordered or orderable phases; without the option of a temporal identification of “early” and “late”, the dilemma would dissolve.

2.2 Second thesis: the knowledge dimension

According to the dilemma, a key obstacle to dealing with science and technology in society is the lack of knowledge about consequences during the early phases. Hence, the dilemma presupposes a strong demand for adequate knowledge. There are, indeed, a number of assumptions underpinning the dilemma with regard to the knowledge dimension—some of them are supported by positions in ethics and in the philosophy of science. Ethics10: Knowledge seems to be central to any kind of rational decision-making and for goal-oriented action. The Collingridge dilemma takes a purely consequentialist perspective and specifies knowledge with regard to the prediction of future states. At the same time, intentions and actions themselves are not considered explicitly, and normativity is neither addressed and reflected on nor argumentatively set up or revised. By highlighting the consequences, as well as neglecting the intentions and actions (and virtues and affects), the Collingridge dilemma supports a well-known position in philosophical ethics: consequentialism and therefore, to some extent, utilitarianism. Other concepts of ethics, such as deontological ethics, virtue ethics, justice ethics or discourse ethics, are not perceived as relevant in this field. Philosophy of science: Collingridge’s dilemma is based on a certain understanding of knowledge. What is lacking in early phases seems to be a specific kind of scientific knowledge, e.g., objective, pure, value-free, quantitative and/or prediction-enabling knowledge; accordingly, (knowledge) experts are regarded “as neutral, disinterested, unbiased” (ibid. 12).11 The laws and theories of physics appear to be the model for knowledge in general. The mainstream of philosophy of science throughout the twentieth century has emphasised this physics-oriented view of knowledge. Thus, Collingridge’s dilemma is based on the thesis that there exists a demand for a certain, ambitious type of knowledge—a demand that can be found as guiding visions and requirements in various traditional concepts of TA, especially in approaches adopted in parliamentary TA offices.12 This requirement is obviously so strong that it can hardly be fulfilled.13 The reasons for the limits in obtaining this ambitious type of knowledge might be identified as either practical (methodological) or as a matter of principle (ontological). (a) In the “customary” understanding of Collingridge’s dilemma, the limits are considered as methodological ones. The limits are obstacles that can be shifted by developing powerful forecasting tools (ibid. 19). (b) Collingridge’s own position seems to be different. He advocates a stronger, ontological thesis. From an ontological perspective, it can be stressed that the above-mentioned unknowns are, in fact, unknowables, deeply rooted in the structure of the complex socio-technological reality itself.

2.3 Third thesis: the power and actor dimension

Collingridge’s dilemma considers the freedom and power of actors to set their goals and to choose their means accordingly (means-end relation). Besides general assumptions of traditional action theories (cf. Wright 1971), Collingridge’s dilemma maintains that actors have more power in (or impact on) the early phases to change a technology than they do in later phases. In other words, actors seem to have, according to the dilemma, more power to control the science end than the market and consumption end (of the innovation chain); an intervention in science is possible and will have an impact. Besides this strong presupposition concerning early power options, another crucial question emerges: Who is to be considered an actor? The notion of this term is not addressed and remains vague throughout the text. Collingridge frequently uses the first-person plural “we”, “us” and “our”, e.g., “Can we control our technology?” (Collingridge 1980:11)14 At first glance, it remains obscure as to who could qualify as an actor: individuals, groups, institutions, or civil society as a whole, scientists or politicians, science organisations or governmental agencies, experts or lay people? Although the dilemma is not explicit at this point, it appears to assume that the main actors are the democratically legitimised parliaments, the government itself and governmental institutions. Thus, the social control of technology, as the book is entitled, is performed mainly by politicians. This is a kind of decisionistic assumption that emphasises the role of professional, legitimised decision-makers.15 According to Collingridge’s dilemma, the actors who try to control our complex science, technology and innovation system do not belong to the system themselves; they are not internal players and participants within the system. Control theorists regard the science, technology and innovation system from an external perspective. From this angle, the term “controlling”—and not “shaping”—does indeed seem to be the appropriate term.16

In order to argue in line with Collingridge’s dilemma, one therefore has to consider (1) innovation theory, (2) certain positions in philosophy of science and ethics and (3) action theory.17 Moreover, at this point, it is interesting to recognise that Collingridge’s dilemma, including the traditional discussion about control, is based on a mechanistic–deterministic view of the complex intersection and interaction of science, technology, innovation and society.18 Following the mechanistic view, two contrary judgments have been made: (a) From the 1950s to the early 1970s, the positive view of control optimism was widespread among politicians, scientists and science managers. A social control of technology is possible, because we can control the science end; this is perfectly in accordance with Collingridge’s dilemma. Their control optimism was supported by the success of the first big science project, the US Manhattan project, and later the Apollo project. (b) Conversely, cultural pessimists19 suspected the emergence of a technological determinism, partly interlaced with a technocratic superstructure. Internal driving forces and the law-like dynamics of the technoscientific system appeared to resist any intentional action of societal actors. The control pessimists did not subscribe to the presupposition of Collingridge’s dilemma that early control of the technoscientific system is an easy task (power dimension). On this view, neither early nor late control is possible. Interestingly, both judgments, the optimistic and the pessimistic, share a mechanistic understanding. They hesitate to open “Pandora’s box”.

3 Collingridge’s own approach to coping with and eliminating the dilemma

Collingridge is famous for articulating the dilemma; however, what is not well known is that he attempts to overcome it by proposing “a new way of dealing with the dilemma of control”. Collingridge considers two strategies. The first one could be called a predictionist approach: trying to deepen and broaden the (quantitative) knowledge basis about the societal consequences of a technology while it is in its infancy. The goal of the predictionist approach is to address the knowledge dimension of the dilemma and to develop powerful forecasting tools that could provide “objective” information on which to act in the early stage of a technology. The notion of “knowledge” refers to quantitative, mathematical, trajectory-oriented predictions of consequences. The classic concept of TA in the late 1960s mainly referred to this kind of knowledge; classic TA aimed at producing or acquiring the relevant quantitative knowledge for political decision-making and action. Although this is “the customary response to the dilemma”, Collingridge regards the predictionist approach as a “serious misconception[s]” since “harmful effects of a technology can be identified only after it has been developed and has diffused”; “a whole bundle of unknown factors” will remain (ibid. 19/17). These unknowns, Collingridge argues, reflect the very nature of socio-technological reality itself. It is not solely methodological obstacles and state-of-the-art issues of recent science, but rather ontological limits that restrict the predictionist approach: If we consider human action, any kind of prediction could eventually fail. The future is, at least to some extent, open to intentional or accidental changes induced by human action. The predictionist approach requires—according to Collingridge—unbiased experts, including an observer’s perspective, which is not feasible (ibid. 12); every expert is also a participant in science and society. The development of a new technology does not resemble a machine, which is governed by mathematical laws that allow predictions with some degree of confidence.

The predictionist approach is not Collingridge’s own position. In a certain sense, he is more radical than the predictionists and wants to tackle the dilemma at its roots: What can be done in advance to avoid running into the dilemma? He prefers a second strategy to cope with and, to put it more precisely, to overcome the dilemma by addressing both the power dimension and the knowledge dimension. There is no other way out of the deadlock than by strengthening and developing the power dimension, i.e., the power of scientists, politicians and political institutions, throughout the technological innovation process. The essence of controlling technology is not in forecasting its social consequences, “but in retaining the ability to change a technology, even when it is fully developed and diffused, so that any unwanted social consequences it may prove to have can be eliminated or ameliorated” (ibid. 20/21).20 Collingridge’s normative view is to maintain the “freedom to control technology” and to develop organisational structures and scientific tools to deal with the resistance to such control (ibid. 19). Here, he changes his perspective from a (historically based) descriptive to a (future-oriented) normative one.21 His guiding questions are “How decisions ought to be taken” and “how can we make decisions about technology more effectively” (ibid. 20). In order to keep a technology controllable, Collingridge presents a number of criteria for assessing a technology’s development which are also highly relevant to TA today: (a) corrigibility of decisions, (b) controllability, e.g., the control of unpredictable systems, (c) maintaining flexibility (to preserve the option of being able to choose between alternatives), and, in addition, (d) insensitivity/robustness to errors (cf. ibid. 32 et seq.).22 Thus, Collingridge believes that technology can, and should, be controlled via decisions that are easy to correct;23 we should ensure that it is possible to revise decisions. A normative circle emerges: controlling is feasible because we should develop technologies that can be controlled. According to Collingridge, there is then no need to tackle the dilemma, but rather to ensure that the dilemma will not emerge: elimination is the best strategy for coping with the dilemma.24

In addition to addressing the power dimension of the dilemma, Collingridge also considers the knowledge dimension. Knowledge is the indispensable key to maintaining the power to keep a technology controllable. But what kind of knowledge should be fostered? Collingridge advocates a wider notion of “knowledge”, in contrast to that of the predictionist approach. Value and normative knowledge on the one hand, and, on the other, knowledge about uncertainties, ignorance, risks, and, as we would put it today, also non-knowledge, should be taken into account as the core of “knowledge”. The basic idea is to develop a “theory of decision making under ignorance” that becomes part of political decision-making processes (ibid. 12).25 Thus, ignorance should become an intrinsic element of decision-making. A discourse about values is indispensable: dealing with ignorance and uncertainty requires the reflection, revision, justification and application of values (ibid. 184). Because of the obvious predominance of values in R&D processes, Collingridge raises concerns about the traditional view of expert knowledge and quantitative-based expertise:

“An expert is traditionally seen as neutral, disinterested, unbiased and likely to agree with his peers. On the view proposed here, none of these qualities can be attributed. Instead, an expert is best seen as a committed advocate, matching his opinions with other experts who take a different view of the data available to them in a critical battle”. (ibid. 12)26

Collingridge asks whether “the division into the two realms of science and policy [is] really as clear cut as Handler [then the President of the US National Academy of Sciences] suggests?” (ibid. 188)27 Undeniably, there is no clear line between science and policy; the line is a matter of dispute. Collingridge argues against the shortcomings of this traditional dichotomist view, which he calls “Model 1” (ibid. 183 et seq./192 et seq.). Remarkably, the well-known notion “Mode 1”, coined by Gibbons et al., conveys a cognate connotation and uses similar terminology (Gibbons et al. 1994).28

According to Collingridge, the main

“problem of Model 1 was its simple-minded division between advice [=quantitative expert’s knowledge] and policy [=value and decision knowledge] which we saw to be untenable. Here again, Model 2 performs better. On Model 2 the advisor can be seen as much more of an advocate, actively engaged in the policy debate”. (Collingridge 1980:192)

Experts participate in decision-making processes on various levels—in the laboratory, in scientific communities, in parliamentary processes and in technoscientific civil society. The participation and engagement of scientists is not a disadvantage.29 Rather, it is necessary, since “[t]he making of policy decisions requires more than the digestion of delivered facts” (ibid. 188). Facts are not pure, naked and value-free. Model 2 considers why disagreement between experts is so common in the making of complex decisions. As Collingridge argues, scientific facts, and in particular,

“[d]ata can be interpreted in a number of ways and different experts will favour different interpretations which they can then fight over. Disagreement and debate is not at all shocking, it is a sign of a healthy and exploring science, searching for the best way of seeing some set of data. What Model 2 shows is that it is urgently necessary to develop a theory of decision making which can accommodate the fact that experts can be expected to disagree”. (ibid. 191)

There is, however, no easy way out. The frequently suggested, consensus-oriented idea of a “science court” fails. In many cases a consensus among experts is not feasible, contrary to what Model 1 might suggest. Decisions, nevertheless, have to be taken continually.

“In the absence of consensus it is essential to preserve the decision maker’s ability to detect error in his decision and his ability to correct it. The scientific debate between the experts should not, therefore, halt when a decision is made, it should continue because a consensus may be reached which shows the original decision to have been wrong, so that it must be corrected. Options which are highly flexible, insensitive to error, and easy to correct should, therefore, be favored” (ibid. 194).

To summarise, Collingridge argues that the dilemma can and should be eliminated. He considers the temporal, the knowledge and the power dimensions. First, the temporal dimension: Collingridge partly agrees that a temporal dimension is evident in all R&D processes. He argues, however, that we can avoid the problem of path dependency and entrenchment by implementing procedures of continual monitoring: controlling is feasible because we should develop only technologies that are controllable and ensure their controllability (ibid. 161). Second, the knowledge dimension: Collingridge does not focus solely on predictions and predicted facts; he also considers values and “value judgments” as part of an extended view of “knowledge”, which also includes ignorance, uncertainty and risk (ibid. 161). According to Collingridge, both facts and values have to be taken into account (Model 2). Third, the power and actor dimension: Collingridge does not accept the power assumptions that lead to the dilemma. He considers the complex entanglement between experts (scientists and engineers) and decision-makers (politicians, governmental officials, managers) in controlling the R&D process (ibid. 183 et sqq.). Thus, Collingridge raises objections to all three dimensions leading to the dilemma.

4 Technoscience

It is fascinating that Collingridge’s normative approach, including his objections to the dilemma, can be supported today by a—more or less—descriptive analysis of the recent science, technology and innovation system (“technoscience”). Considering technoscience will help to address and review the assumptions of the dilemma—and also show that Collingridge’s normative approach and the technoscience thesis share many ideas. Since the 1980s, public debates and the rhetoric on science and technology have been changing (cf. Weber 2003). Various concepts claim to perceive a historical transformation of the science system, among them: Mode-II, post-normal and post-academic sciences; late- or post-modern sciences; technosciences.30 A “seamless web” appears to be emerging among science, technology, society and industry (Hughes 1986). This transformation has challenged the social sciences and humanities. New approaches, such as the Social Construction of Technology (SCOT) or the Actor-Network Theory, have been developed. Some ideas can be traced back to another famous concept suggested in the early 1970s: the finalisation concept with the notion of post-paradigmatic science conceptualised as an extension and complementation of the Kuhnian model (Böhme et al. 1983). The finalisation concept, however, still draws lines between the science internal and the science external. It is not concerned with the mixing and merging of science, technology and society that we (might) observe today in various areas.31

The concept of technoscience considered here is more radical. Hottois (1984), Haraway (1995), Latour (1987), Ihde (2003), Weber (2003) and Nordmann (2005) coined and advocated the term “technoscience” to describe the historical transformation in the “culture of science”. Nordmann, for instance, identifies various “symptoms for the change of culture from science to technoscience” (Nordmann 2005:215). Societal and economic interests, purposes and goals have become predominant in present-day sciences. In addition to both the context of discovery and the context of justification, the context of application—knowledge generated in “broader, transdisciplinary social and economic contexts”—turns out to be a central element of recent knowledge production (Gibbons et al. 1994:4). Traditional boundaries, well-established categories and presupposed dichotomies are becoming blurred, e.g., the boundaries between science, technology and society, between scientific disciplines, between theory and practice, between nature and culture, or between facts and values.32 These new kinds of entanglements seem to be hard to describe and difficult to understand: we lack adequate terms and a clear terminology. This is why “technoscience” remains, at present, more or less a programmatic umbrella term. It is not surprising, therefore, that there is still an ongoing debate about the main content, e.g., about whether we can draw a (historical and systematic) line between modern sciences and late-modern technosciences in this respect (epochal break or not) and what can be regarded as criteria for the diagnosis of the shift in the science or knowledge system. Is there really a differentia specifica?33 Even more serious is the fact that the very topic of the discourse is still a matter of dispute: (a) Is the discourse on technoscience merely a discourse on the way of seeing, perceiving and speaking about present-day science—a discourse on the discourse and the rhetoric (“constructivism”)? (b) Or, more essentially, does the discourse refer to technoscientific practice, the objects, knowledge types and methods (“realism”)? At least some agreement has been reached. We have chosen just three of these points of agreement, insofar as they might be relevant for dealing with Collingridge’s dilemma.

4.1 First

The technoscience thesis emphasises the dominance of science in present-day society. An accelerated scientification appears to be taking place, e.g., a scientification of technology, of the innovation process and of society in general. To reflect the predominance of scientific knowledge, Gernot Böhme and Nico Stehr coined the term “knowledge society” (Böhme and Stehr 1986). According to Daniel Bell, the “post-industrial society” is based on “science”, on “theoretical knowledge” and on “intellectual technology” (Bell 1975). Gibbons et al. stress that “scientific knowledge production becomes diffused throughout society.” (Gibbons et al. 1994:4) Because science is everywhere, we can no longer draw a line between science and technology, or between natural and engineering sciences. The term technoscience highlights this lack of distinction. Scientific knowledge becomes ubiquitous throughout the entire innovation process.34 Using the terminology of classic innovation theory, all phases are dominated by scientific knowledge. Even the production of technological goods, their diffusion, consumption, use or recycling depend strongly on scientific knowledge; technology in general depends heavily on scientific advancement. Thus, there is no linear time ordering as presupposed by the dilemma and its classic linear innovation theory. Arguments against a universalistic view of the innovation process and the linear phase order are raised by many scholars of science and technology studies (cf. Hackett et al. 2008; Collins and Pinch 1998). In consequence, the temporal dimension of Collingridge’s dilemma might actually exist, but the processes are significantly more multi-faceted, non-linear, complex and interactive than the dilemma presupposes.

4.2 Second

The technoscience thesis also stresses the functionalistic, teleological and application orientation of present-day sciences. Science and society are becoming more and more technologised. Scientific knowledge is developed from a technical perspective, from the context of application and implementation, far beyond the traditional demand-pull view—with consequences also for the internal structure of the sciences and their theories. The theoretical justification and empirical evidence of a new scientific theory or paradigm are less important than its utility for new technologies, for new technical applications and inventions.35 According to Gibbons et al. (1994:5), knowledge is “not developed first and then applied to the context later by a different group of practitioners.” The Baconian structure is obvious today (Schmidt 2007).36 Purposes, objectives and interests circulate into the core of science.37 The limitation of quantitative prediction knowledge—the knowledge dimension of the dilemma—might be an obstacle to addressing the future from the perspective of exact mathematical sciences. But thanks to the purpose orientation of present-day science we are already able to know much in advance, even before a certain technology is fully developed. This fact challenges the knowledge dimension in Collingridge’s dilemma.

4.3 Third

The technoscience thesis criticises any universalist account of science and technology. The age of grand narratives with mono-causal explanations appears to be over. Science and technology—today’s technosciences—are deconstructed by most advocates of the technoscience thesis. They claim to open the black (Pandora’s) box of science and technology and consider the context-dependent and situated practices of science in action. Their (de)constructivist approach reveals a broad variety of plural, multi-faceted and complex phenomena that resist any unification (Hackett et al. 2008).38 It is claimed that sciences and technologies are not determined by (external) objects, e.g., by nature or by technological ideal forms, but are socially and societally constructed. What does seem to be true, at least, is that various actors do change and shape technoscience every day; in other words, they construct technologies. When technoscience is framed from this perspective, traditionally presupposed dichotomies dissolve. In particular, values, facts and artefacts are entangled. Science has never been as value-free and decontextualised as has been claimed (Liebert 2003); technology has never been a mere instrument; means and ends have never been “ontologically” different. The science internal on the one hand and the science external (society?) on the other hand are interlaced.39 Insofar as both modern society and modern science have never reached the state of these ideal dichotomies, they have “never been as modern” as they have claimed to be.40 With regard to Collingridge’s dilemma, the conclusion we can draw from this (constructivist) picture of technoscience is that societal and scientific actors shape the science, technology and innovation system. Numerous subpolitical actors are prevalent and extremely influential—from laypeople, citizens and managers to scientists and experts. These actors engage and participate in complex procedures of shaping technology. They are not external observers, unbiased experts and outer decision-makers; such a detached observer’s perspective is not feasible.41 Beyond the externalist/internalist dichotomy, different procedures for shaping technology replace the view of unidirectional, singular control action, as presupposed by the dilemma. Novel regimes of non-hierarchically, often informally organised governance structures replace the hierarchical control ideal of governmental institutions. Scientists and engineers are actively involved. A “technoscientific citizenship” seems to be emerging, and the construction and shaping of the seamless web is taking place: governance and subpolitical action instead of the power and actor dimension in Collingridge’s dilemma.42

To summarise, although technoscience remains a vague umbrella term, three aspects can be underlined. First: Scientific knowledge has become ubiquitous (scientification). The assumptions of classic innovation theory—the linear chain model and the dilemma’s temporal dimension—are no longer evident. Second: Science is dominated by the context of application; it is technologised and purpose-driven. This challenges the knowledge dimension in Collingridge’s dilemma. Third: Various actors on different levels construct and shape technologies. A reconsideration of the power/actor dimension in Collingridge’s dilemma would appear to be necessary.

5 Summary

With regard to one of the most well-established paradigms presenting a challenge to TA—Collingridge’s dilemma—this paper has reconstructed the dilemma from an analytic perspective and explicated three assumptions underlying the dilemma: the temporal, knowledge and power/actor assumptions. It is interesting that Collingridge himself provides strong arguments against the dilemma from various perspectives. He proposes normative criteria in order to keep a technology controllable (corrigibility, flexibility, robustness to errors). A normative (TA-) circle will then emerge: controlling is feasible because we should develop technologies that can be controlled. The main criterion for TA would, therefore, be to maintain controllability. TA is successful if, and only if, the dilemma does not emerge.

In terms of the three assumptions underlying the dilemma, Collingridge demonstrates that: (1) Regarding the temporal dimension, we can avoid the problem of path dependency and entrenchment by implementing procedures of continual monitoring in order to ensure the criterion of controllability. (2) The knowledge dimension does not appear to be as severe as the dilemma maintains. Collingridge shifts the emphasis from prediction and quantitative aspects to a normative basis with values, “value judgments” and decisions as an extended view of “knowledge”, including ignorance, uncertainty and risk. (3) With respect to the power and actor dimension, Collingridge reflects on and advocates the complex entanglement between experts (scientists and engineers) and decision-makers (politicians, governmental officials, managers) in controlling the R&D process.

Moreover, it is fascinating that Collingridge’s normative approach, including his objections to the dilemma, is supported at present by an analysis of the recent science, technology and innovation system. Considering technoscience helps to address and review the assumptions of the dilemma. The task of drawing conclusions from this review for an advancement of TA in the age of technoscience requires further study (cf. Liebert and Schmidt, paper B, this volume).

Footnotes

  1.

    Collingridge conducted his case studies in the 1970s and earlier. Today, we have to consider that the science, technology and innovation system seems to be in a process of transformation.

  2.

    Embracing this opportunity, however, confronts TA with several requirements. Our argument is that intentions, purposes and goals dominate even the domain of (former!) fundamental science and basic research (cf. Liebert and Schmidt ‘Towards a prospective Technology Assessment’ in this volume, referred to hereafter as Liebert and Schmidt, Paper B, this volume). There is a new (and, to some extent, fairly old Baconian) instrumentalist view of science, namely as technoscience. Insofar as technoscience is a purpose-driven mode of science, TA is in a much better position to enter into a normative discourse about the purposes and potentials in the early phases of agenda setting in R&D programs.

  3.

    The dilemma of control may imply control pessimism, where the development of a technology cannot be intentionally shaped by societal actors; this pessimism might support the position of technological determinism.

  4.

    A central example that Collingridge provides is “entrenchment”. After technologies have been developed and diffused, different types of technologies can depend on each other and, thus, constitute a complex nexus of various dependencies. When a technology has entered the situation of entrenchment, it stays and evolves on the path like a train on a track—this is a kind of technological determinism.

  5.

    However, he is not very specific about the meaning of “early” and “late”.

  6.

    There are, of course, much earlier precursors, such as Schumpeter, Taylor and Marx.

  7.

    Many present-day researchers still refer to this form, although they modify this linear chain model.

  8.

    It is interesting to note that Collingridge presupposes that the science side can be controlled much more easily than the market side.

  9.

    The linear chain model does not consider non-causal random interactions, non-linear butterfly effects and complex feedback loops; actors and agents—individuals, groups and institutions—are not taken into account, and the cultural and political sphere is not regarded explicitly. In order to overcome these obvious deficits, new approaches have been proposed since the 1970s (evolutionary models, actor-network theory, closure-concept).

  10.

    Collingridge himself mentions ‘ethics’ only in passing (Collingridge 1980: 162).

  11.

    In his critique, Collingridge later provides arguments against a one-sided understanding of knowledge, e.g., knowledge reduced to factual-quantitative knowledge.

  12.

    Other types of knowledge are not considered (as knowledge).

  13.

    Collingridge states, “The future development of the technology cannot be foreseen in any detail. This depends upon a whole bundle of unknown factors” (Collingridge 1980: 17).

  14.

    Sometimes he switches to the passive voice.

  15.

    Later on in his book, Collingridge also presents arguments against both decisionistic and expertocratic–technocratic approaches (cf. Collingridge 1980:183 et sqq).

  16.

    The external perspective of the control approach (with strong centralised power to approve and implement new laws and directives) would be contrasted with a shaping approach that also considers the internal perspective (with decentralised power to enable changes in certain research and development trajectories).

  17.

    One can raise concerns as to whether the theoretical concepts—and the related assumptions—are evident and justifiable. (a) Is the linear innovation theory justified? Below, we will argue, from the perspective of a network approach, that the linear-causal presupposition is not justifiable. This will be in line with the technoscience thesis. (b) What can be said about classic philosophy of science and utilitarian ethics? We believe that the science system is changing and that not only can the consequences for tomorrow be assessed but also today’s intentions and (technoscientific) potentials. This view is supported by the technoscience thesis. (c) What is an appropriate understanding of “action”? It does not seem adequate just to consider external actors, such as politicians. We will argue that governance theories, including the researchers as participants and other kinds of sub-political actors on various levels, are more appropriate (cf. Liebert and Schmidt, Paper B, this volume).

  18.

    An engineer may describe a machine tool in a similar manner: we look at a mechanism from an outer perspective, e.g., we look at the dynamics (temporal dimension) from the exterior, we obtain knowledge by analysing and predicting (knowledge dimension) from the exterior, and we change and control the machinery (control dimension) from the exterior. Thus, in this mechanistic way of thinking, we are not considered to be participants in the complex socio-technological development. The mechanistic view was prevalent in most classic concepts of TA.

  19.

    Among them Ortega y Gasset, J. Ellul, M. Heidegger, G. Anders, H. Schelsky and others.

  20.

    One can recognize here aspects of a traditional TA approach, with a bias on the pressure for decision-making and a—more or less—end-of-the-pipe approach to control technologies.

  21.

    He confesses to “the normative nature of our inquiry” (Collingridge 1980: 23).

  22.

    These criteria are, of course, highly relevant to TA in general. In the 1980s, there was an intense debate in Germany, for example, about the “reversibility” of technological development (regarding genetically modified food and nuclear energy).

  23.

    In the same vein, Christoph Hubig (2006) argues in favour of keeping “option values” in order to preserve the ability to act in the future: The option to act is to be regarded as the core value of our “provisional morality” (e.g., Descartes) in the tradition of the European Enlightenment.

  24.

    This is a strong claim advocated by Collingridge. If we take his criteria seriously, the outcome could be that new technologies will no longer be developed. Contrary to Collingridge, we will attempt to deal with the dilemma without believing that it could be eliminated completely (cf. Liebert and Schmidt, Paper B, this volume).

  25.

    This articulation, the specific terminology and related conceptual works are well established today.

  26.

    Experts often talk at cross-purposes, mainly because they do not reflect on their presupposed values and underlying norms. Together with his co-author C. Reeve, Collingridge presents a strong analysis of the role of experts in policy-making (Collingridge and Reeve 1986).

  27.

    See also the systematic analysis of three different models of policy-making and the challenge of how to bring laypeople and experts together and initiate good deliberative processes (Collingridge and Douglas 1984). Jürgen Habermas has raised objections to both the decisionistic and the technocratic models for describing the science–technology–society–policy interface; he proposed a pragmatist model of various interactions on different levels.

  28.

    Comparing Collingridge’s “Model 1” with the “Mode 1” put forward by Gibbons et al. is a task worthy of further study.

  29.

    For an excellent introduction to consultancy problems regarding science and technology (policy), see Gethmann (2006) and Grunwald (2008).

  30.

    They perceive a breakdown of well-established dichotomies, and a strong entanglement beyond traditional boundaries is diagnosed. Philosophical and cultural terms, notions and understanding lose their descriptive adequacy.

  31.

    The finalisation model addresses questions of why and how societal norms, objectives and interests can dominate the research process during certain phases (the pre- and post-paradigmatic phases, not the paradigmatic phase).

  32.

    These boundaries are no longer taken for granted. The way of dealing with the dissolution of boundaries and the implosion of dichotomies has become a matter of political dispute (Beck and Lau 2004).

  33.

    Central issues in the ongoing debate are (a) the ontology of new objects (e.g., the genetically modified mouse), (b) the kind of knowledge, (c) the technical methods, (d) the goals, aims and objectives of knowledge, (e) scientific practice and technological actions and (f) the organisation and administration of the science and research system.

  34.

    In other words, science, as a form of knowledge and as a form of action, is no longer the distinguishing feature for basic research but the major driver for innovation on various levels.

  35.

    Theoretical aspects of science—traditionally considered the summit of science—lose their importance as the research goal. The whole process of research and development is regarded as a technological endeavour; the traditional boundary between fundamental science and applied sciences has become blurred.

  36.

    For example, using, in Bacon’s terminology, light-bearing knowledge (fundamental and theoretical knowledge) in order to foster and facilitate fruit-bearing knowledge (knowledge in broader contexts of application).

  37.

    In other words, purpose-driven, technology-oriented science (as action) instead of value-free, pure, fact-oriented basic research (science as a “theory form”, according to J. Mittelstraß). To some philosophers and social scientists, however, purpose-orientation is not a novel point. Scholars (of the school) of Methodological Constructivism have always stressed that science is based on certain kinds of norms that are implemented in the various methods. Purposes play an indispensable role.

  38.

    It was disputable whether the STS scholars would find the black box empty or not.

  39.

    STS scholars, in line with some Critical Theorists, underscore that facts and artifacts are political (“artifacts have politics”, L. Winner) and that there is not an essentialist difference between politics and epistemology. According to this view, epistemology is part of the power discourse; in a classic formulation of STS: “Truth speaks to power”.

  40.

    The analytic efforts and working towards purification do not seem to have been as successful as the advocates of modernity maintained.

  41.

    A radical collapse of distance takes place. The term “collapse of distance” was coined and advocated by Alfred Nordmann in various talks on technoscience.

  42.

    We find a paradigm shift in framing and understanding the intersection of science–technology–society. Whereas Collingridge’s dilemma is based on an outer externalist perspective of the concepts laid out in The Social Control of Technology, it is much more common today to consider (descriptively) the prerequisites and the process described in Social Construction of Technological Systems (Bijker et al. 1987) or (descriptively as well as normatively) The Social Shaping of Technology (MacKenzie and Wajcman 1985).

References

  1. Bechmann G, Frederichs G (1996) Problemorientierte Forschung: Zwischen Politik und Wissenschaft. In: Bechmann G (ed) Praxisfelder der Technikfolgenforschung. Campus, Frankfurt, pp 11–37
  2. Beck U, Lau C (eds) (2004) Entgrenzung und Entscheidung. Suhrkamp, Frankfurt
  3. Bell D (1975) Die nachindustrielle Gesellschaft. Campus, Frankfurt
  4. Bijker WE, Hughes TP, Pinch T (eds) (1987) The social construction of technological systems. MIT Press, Cambridge
  5. Böhme G, van den Daele W, Hohlfeld R, Krohn W, Schäfer W (1983) Finalization in science. The social orientation of scientific progress. Reidel, Dordrecht
  6. Böhme G, Stehr N (1986) The knowledge society. Reidel, Dordrecht
  7. Collingridge D (1980) The social control of technology. St Martin’s Press, New York
  8. Collingridge D, Douglas J (1984) Three models of policymaking: expert advice in the control of environmental lead. Soc Stud Sci 14:343–370
  9. Collingridge D, Reeve C (1986) Science speaks to power: the role of experts in policy making. Frances Pinter, London
  10. Collins H, Pinch T (1998) The golem at large. What you should know about technology. Cambridge University Press, Cambridge
  11. Fagerberg J, Mowery D, Nelson RR (eds) (2005) The Oxford handbook of innovation. Oxford University Press, Oxford
  12. Gethmann CF (2006) Probleme wissenschaftlicher Politikberatung in Deutschland. Newsletter 60, Europäische Akademie GmbH, January:1–3
  13. Gibbons M, Nowotny H, Limoges C (1994) The new production of knowledge. SAGE, London
  14. Gloede F (1994) Der TA-Prozeß zur Gentechnik in der Bundesrepublik Deutschland–zu früh, zu spät oder überflüssig? In: Weyer J (ed) Theorien und Praktiken der Technikfolgenabschätzung. Profil, München, Wien, pp 105–128
  15. Grunwald A (2008) Technik und Politikberatung. Suhrkamp, Frankfurt
  16. Hackett E, Amsterdamska O, Lynch M, Wajcman J (eds) (2008) The handbook of science and technology studies. MIT Press, Cambridge
  17. Haraway D (1995) Die Neuerfindung der Natur. Campus, Frankfurt
  18. Hottois G (1984) Le signe et la technique. Aubier, Paris
  19. Hubig C (2006) Die Kunst des Möglichen I. Transcript, Bielefeld
  20. Hughes T (1986) The seamless web. Science, technology, etcetera, etcetera. Soc Stud Sci 16(2):281–292
  21. Ihde D (2003) Chasing technoscience. Indiana University Press, Bloomington
  22. Latour B (1987) Science in action. Harvard University Press, Cambridge
  23. Liebert W (2003) Wertfreiheit oder Wertbindung der Wissenschaft–Kritische Anmerkungen zum Wertfreiheitspostulat der Wissenschaft. In: Bender W, Schmidt JC (eds) Zukunftsorientierte Wissenschaft–Prospektive Wissenschafts- und Technikbewertung. Agenda, Münster, pp 39–61
  24. MacKenzie D, Wajcman J (eds) (1985) The social shaping of technology. Open University Press, Maidenhead
  25. Nordmann A (2005) Was ist TechnoWissenschaft?–Zum Wandel der Wissenschaftskultur am Beispiel von Nanoforschung und Bionik. In: Rossmann T, Tropea C (eds) Bionik. Springer, Berlin, pp 209–218
  26. Schmidt JC (2007) Realkonstruktivismus als kritisch-materialistische Erkenntnistheorie. Über die Aktualität von Francis Bacon und seine Renaissance in Nanoforschung. Zeitschrift für kritische Theorie 24/25:67–84
  27. Schmidt JC (2008) Normativity and innovation. An approach to phenomena and concepts of innovation from the perspective of philosophy of technology. IEEE Proc. of the Atlanta conference on science, technology, and innovation policy, pp 72–79
  28. Wagner-Döbler R (1989) Das Dilemma der Technikkontrolle. Edition Sigma, Berlin
  29. Weber J (2003) Umkämpfte Bedeutungen. Campus, Frankfurt
  30. Wright GH (1971) Explanation and understanding. Cornell University Press, Ithaca

Copyright information

© Springer-Verlag 2010

Authors and Affiliations

  1. Interdisziplinäre Arbeitsgruppe Naturwissenschaft, Technik und Sicherheit (IANUS), Technische Universität Darmstadt, Darmstadt, Germany
  2. Fachgebiet Wissenschafts- und Technikphilosophie, Darmstadt University of Applied Sciences, Darmstadt, Germany
