Harry Collins has made a very visible contribution to the analysis of expertise in contemporary society (Collins and Evans 2002, 2007; Collins 1985, 2014a, b). He affirms the point made above about the difference in urgency between politics and science. Different time scales apply in policy decisions and scientific research: one is urgent; the other always needs more time for further research.
Collins and Evans critically review the move in science studies towards relativizing faith in science, and attendant calls for an extension of scientific expertise. They argue that there are good reasons to give scientific expertise a special place in society and its decision-making procedures:
‘One of the most important contributions of the sociology of scientific knowledge (SSK) has been to make it much harder to make the claim: “Trust scientists because they have special access to the truth”. Our question is: “If it is no longer clear that scientists and technologists have special access to the truth, why should their advice be specially valued?” This, we think, is the pressing intellectual problem of the age’ (Collins and Evans 2002: 236).
Collins and Evans pose a related question about the scope of non-scientists’ participation in decision-making: ‘How far should participation in technical decision-making extend?’ Science studies have shown that there is ‘more to scientific and technical expertise than is encompassed in the work of formally accredited scientists and technologists, but it has not told us how much more’ (Collins and Evans 2002: 237).
Two important issues are worth noting here. One concerns participation in decision-making, the other participation in knowledge creation. Collins and Evans’s term ‘socio-technical decision-making’ seems to relate to both. However, the authors do not elaborate on the potential ambiguity of this term. Instead, they introduce another distinction, between interactional and contributory expertise. The first is defined as ‘enough expertise to interact interestingly with participants’, the second as ‘enough expertise to contribute to the science’. According to Collins (Collins 2014b: 65), one becomes a contributory expert ‘by working with other contributory experts and picking up their skills and techniques’. Interactional expertise, on the other hand, can be acquired by deep immersion in the linguistic discourse of the relevant scientific domain.
The distinction between interactional and contributory expertise makes sense if one wants to theorize the role of a scientific expert, more precisely, the role of an experimental scientist, and Collins’s own work has focused almost exclusively on such cases. The notion of a core-set of scientists, which he introduced in his study of gravitational wave research (Collins 1985), was later applied in his work on expertise for policy decisions (Collins and Evans 2002). What counts for him are core-set competencies that can only be acquired through practical participation at the laboratory bench. Only in this way can scientists gain the important tacit knowledge, and only in this way can scientists as experts claim to make competent statements.
The laboratory as a site of knowledge creation, and the scientific institute that signals the competence of the researcher and thus makes her a ‘certified expert’, are not the only sources of expert knowledge, and they are arguably not the most important ones when it comes to political decision-making (see Jasanoff 2003 for a similar critique). Laboratories as sources of power have been posited by several authors who conducted ethnographies of laboratories in the 1970s and 80s (Latour 1983). This special focus has perhaps led to an unwarranted generalization in terms of political theory.
Another aspect of the core-set terminology is worth noting. In his book Changing Order, Collins drew attention to the fact that core-sets are distinguished not only or mainly by competencies but by gate-keeping activities. Collins argued that social factors crucially decided what counted as expertise, because there is no independent criterion available that could provide a standard for judging the competence of researchers and their knowledge claims at the frontier of new knowledge creation. Remember: Changing Order dealt with the problem of experimental replication and the attendant issue of the experimenter’s regress. In Collins’s later work on expertise, the contributory quality of core-sets is contrasted with the interactional expertise of those outside the core-set. The separation is made on the basis of core competence, not on the basis of social mechanisms of gate-keeping. In Changing Order Collins was at pains to show that there was nothing inherently superior in one researcher’s quality or experimental skill compared to another. All depended on the social processes of gaining a standing within the core set. Now he holds that the sociological observer can evaluate the relative merit of expertise. This is a radical shift in perspective, and one not mentioned prominently. Hand in hand with this switch in perspective goes the adoption of a ‘realist’ theory of expertise, such that an expert remains an expert even if no clients acknowledge their expertise. This ‘realist’ theory of expertise is opposed to the ‘relational’ view of expertise that I put forward in this paper.
Bruno Latour, like Collins, started his academic career doing laboratory studies and, like Collins, conceives of laboratories as sources of power. He does not deal with the role of the expert in society in a separate publication. In his oeuvre he mentions the notion of expertise in passing, referring to the meaning of a specialist with experience, quoting the term expertus. For him an expert seems to be a specialist who has the luxury of working in the dark, where he can commit many errors, in contrast to the politician, who has to make all errors in public (Latour 1993: 225). Latour’s real attention is less on the role of experts than on the role of science and scientists, on the role of laboratories as sources of political power, and on the similarity between science and politics.
His framework of actor-networks dispenses with explanation and only tries to describe networks: their reach, composition and growth over time. Through this description he wants to show, and critically examine, the genesis of established facts. However, established facts here pertain to a situation where, despite uncertainty and competing knowledge claims, a society comes to a conclusion about how to regulate a specific problem. This is why Latourian networks have no limits, and why it does not make sense to speak about ‘natural’, ‘social’, ‘political’, or ‘scientific’ factors. This flat ontology recognizes neither the problem of political decision-making as a discrete issue nor the role of expertise in the process.
This short review of laboratory studies indicates that they are a mixed blessing for the field of STS when it comes to tackling the problem of expertise in society. They have focused on scientists, their practices, and their involvement in a wider net of relations. Political and institutional analysis has been a weak point in this field. In what follows I will look at contributions from authors with a different theoretical agenda, focusing on the question of political decision-making in modern democracies where expert knowledge is seen as vitally important.
Expertise and Counter-Expertise: The Politics of Knowledge
Back in the 1980s, analysing controversies about leaded gasoline, IQ testing, and smoking and lung cancer, Collingridge and Reeve (1986) distinguished between two scenarios where specialist knowledge and political decision-making are linked. In the first (‘under-critical model’) a policy consensus exists before research is undertaken. Scientific evidence merely legitimizes predefined policy options. In the second (‘over-critical model’) there is a succession of claims from experts and counter-experts without any agreement. Instead of a policy consensus we get endless technical debates. Collingridge and Reeve contrast several myths and realities of science and decision-making, for example, that science yields true and reliable knowledge (which they think is a myth), whereas in reality politicians use scientific information to justify their decisions. This leads them to abandon the idea that expertise is something that can be derived from the model of scientific research. Quite rightly they point to the fact that decision-makers are used to making decisions under uncertainty; they do not try to collect comprehensive data before making a decision (see also Lindblom 1959). Nevertheless, reference to scientific knowledge claims seems to be important because all lobby groups in a policy arena tend to use them, and because scientific knowledge has higher prestige than other forms of knowledge.
‘The role of scientific research and analysis is therefore not the heroic one of providing truths by which policy may be guided, but the ironic one of preventing policy being formulated around some technical conclusions. Research on one hypothesis ought to cancel out research on others, enabling policy to be made which is insensitive to all scientific conjectures’ (Collingridge and Reeve 1986: 32).
However, their distinction between two modes of decision-making (under- and over-critical) seems too rigid to cover the dynamics of the politics of knowledge. After all, sometimes we do get policy decisions after a period of seemingly endless technical debate. And there are examples where a policy consensus is undermined by emerging knowledge claims, as witnessed by constant scientific and technological innovation. While the authors make an important point that in modern societies knowledge is likely to be used in legitimizing or blocking specific political decisions, their notion of expertise is largely restricted to scientific expertise.
In their study of advisory committees Salter et al. (1988) argued that there is a significant difference between science and scientific research, on the one hand, and knowledge that is applied to solve policy issues, on the other. They called the latter mandated science, in order to draw attention to this type of science that is not the outcome of an autonomous research process (à la Merton or Polanyi), but commissioned by public agencies keen to get specific and practical advice on regulatory policy issues.
‘Our image of scientists pictures them at work in the laboratory; we seldom raise the question of how scientific information moves from the laboratory to the world of politics and policy making’ (Salter et al. 1988: 1).
The work of such committees uses all the terms we are used to hearing in connection with scientific research, such as literature review or peer review, but the purpose of mandated science is not to produce new scientific findings. Its point is to make a judgement about multiple sources of evidence, resulting in a recommendation on a pressing problem of public policy. The authors put it this way:
‘Mandated science must be understood as a separate sphere of scientific work… Increasingly, decision makers and their publics are placed in a quandary. On one hand they are increasingly dependent on science and scientists… on the other hand it is increasingly apparent that science cannot provide the clear answers that governments seek, at least not at the time when regulatory decisions are required. Moreover, science often provides conflicting answers …’ (Salter et al. 1988: 4).
While the book by Salter et al. received little attention, Sheila Jasanoff’s work on advisory committees was to become highly visible, mainly through her book The Fifth Branch, in which she addresses the issue of expertise using the term regulatory science. In the opening chapter she lists three major findings from the sociology of science ‘that must be taken into account in any serious discussion of scientific advising’ (Jasanoff 1990: 12). These are: (1) Scientific facts are socially constructed; (2) Scientific paradigms and social prestige are important for problems facing advisory committees; (3) Through boundary work scientists decide who belongs to relevant professional and policy communities, thus upholding an appearance of scientific authority even in the face of uncertainty. Like Salter et al., Jasanoff draws an explicit comparison between conventional science and regulatory science. In so doing she addresses a shortcoming of Collins’s work (mentioned above). Experts are not only or mainly in their position as experts because of their technical competence but because of mechanisms of social inclusion/exclusion.
Jasanoff contrasts a technocratic with a democratic model of science advice (see also Habermas 1971; Irwin 1995) and concludes that neither ‘captures accurately what is at stake in decisions that are at once scientific and political. The notion that the scientific component of decision-making can be separated from the political and entrusted to independent experts has effectively been dismantled by recent contributions of the political and social studies of science’. And, even more strongly: ‘The idea that scientists can speak truth to power in a value-free manner has emerged as a myth without correlates in reality’ (Jasanoff 1990: 17, echoing Collingridge and Reeve). But is this the most important question to ask when trying to understand the role of expertise in decision-making? Would the science policy world be different if scientists were providing advice in a value-free manner? I suggest that it is more relevant to emphasize that the qualifications and skills of the scientists are not closely linked to the decision context. Their knowledge often cannot provide the answers decision-makers request, as Salter et al. have shown.
Outlining the structure and rationale of the argument of her book, Jasanoff emphasizes the central role of scholarship from the sociology of science, beyond the ‘expected’ fields of law, political science and policy analysis. While the central role of knowledge is highlighted, the focus on its scientific nature rests on a problematic, and perhaps unwarranted, assumption.
Writing on experts as policy advisors, Jasanoff (1990: 229) points out that ‘experts themselves seem at times painfully aware that what they are doing is not “science” in any ordinary sense, but a hybrid activity that combines elements of scientific evidence and reasoning with large doses of social and political judgment’. This statement captures perhaps the most important aspect of expertise. In a more recent paper, Jasanoff (2011: 21) emphasizes the role of the expert as a translator or mediator between knowledge and decision-making, quite akin to what I propose here. However, her concept of the expert seems to depict them as professionals: ‘it is not science per se that speaks unmediated to power, but […] the bridge between science and politics is built by experts, a cadre of knowledgeable professionals’. Occluded from this conceptualization is a kind of expertise that is not based on esoteric science, and not located within a profession. We shall now turn to a body of literature that conceptualizes the role of lay expertise.
Brian Wynne (1996) is usually credited with the insight that lay people can be experts, too. His famous study of Cumbrian sheep farming described farmers exposed to the radioactive fallout from the Chernobyl accident in 1986 and the government’s interference in their daily farming practices. Wynne argued that while government scientists made authoritative statements about the radioactive decay in the farmers’ soil, the farmers themselves knew something else about the nature of their herds and the requirements of animal husbandry. After all, the issue was how to resolve the tension between radiation risks and maintaining the farmers’ livelihoods. However, the government scientists and the farmers had specialized knowledge about different things, and both had their own interests. The local knowledge of the farmers was based on experience and self-interest, while the government experts relied on abstract models and an interest in reassuring the farmers and the public, apart from keeping their monopoly status as providers of certified, reliable knowledge.
Wynne’s call for the inclusion of the farmers in decision-making, and his criticism of the arrogance of state authority, have proven immensely influential with scholars in STS and deliberative democracy circles. But there is a question to be asked about which social groups represented which expertise. Wynne shows how government scientists provided laboratory-based and abstract expertise that first gave false reassurances but then ordered a drastic restriction in the movement of sheep. It is not clear what difference in policy recommendations would have followed from the farmers’ lay expertise, apart from the obvious economic aspect of financial compensation for the loss of sheep.
Wynne does provide some hints about the economic dimensions of the decisions at stake. Hill farmers were dependent on their lambs, which were raised after Easter and sold in the autumn, mainly to European markets. They needed to be sold at the right time, being neither too lean nor too fat. Any indication or suspicion that the lambs were radioactively contaminated would massively devalue them and thus pose an immediate and direct threat to the farmers’ livelihoods. What was the right thing to do given the circumstances of radioactive contamination of soil and herds? This is a question that is never explicitly addressed in Wynne’s account. We hear about the distrust of local farmers vis-à-vis the government scientists and a history of local radioactive pollution, following the Windscale disaster in 1957, which was probably covered up by government secrecy. But given the situation post-Chernobyl, what were the options for the farmers, and for the government? What was in the public interest, and what in the farmers’ interests? Was there a difference between the two? How did the expertise of the government scientists and that of the Cumbrian farmers answer these questions? While Wynne’s account tells the story of an apparently confused and haphazard government intervention, it does not tell us much about the farmers’ demands with regard to managing the crisis.
For example, Wynne relates that ‘[a]lthough the farmers accepted the need for restrictions, they could not accept the experts’ apparent ignorance of the effects of their approach on the normally flexible and informal system of hill farm management. This experience of expert knowledge being out of touch with practical reality and thus of no validity was often repeated with diverse concrete illustrations in interviews’ (Wynne 1989: 34). In addition, conflicts about the rules for compensation emerged which left the farmers embittered.
The culture and social identity of the hill farmers are mentioned as important factors in assessing the divergent ways of dealing with the risk of radioactive fallout on the Cumbrian hills. As Wynne points out, the farmers’ expertise was not codified anywhere; it was passed down the generations orally and by apprenticeship.
In sum then, Wynne’s point is not that the hill farmers had developed some kind of counter-expertise that would have led to different policy recommendations. Rather, he is concerned about the interests and social identity of the hill farming community. The farmers’ only engagement with government experts from the Ministry of Agriculture, Fisheries and Food highlighted the contradictory nature of the two forms of expertise, leading to completely different courses of action.
Michel Callon (1999) has taken up the point about the patronising effects government scientists can have on citizens. He emphasizes the complementarity of official and lay expertise, acknowledging that both are prisoners of their own beliefs but that lay experts, in addition, fear ‘that someone else may decide for them what is good for them, and that such decisions would be taken without the slightest knowledge of their needs or wishes’ (Callon 1999: 88; see also Roszak 1969).
This is an important insight. It underlines the fact that different forms of expertise may be competing, and that specific kinds of expertise align with specific social interests. Callon mentions those affected by official expertise, a scientific or technocratic undertaking that is perceived as threatening by those affected. One could argue that these scientific experts themselves are trying to protect their interests, too. But there is a difference in that the lay experts often cannot act in a symmetrical way, and thus they do not have the same kind of control over events.
Callon’s approach would be compatible with a view that defines expertise in the way I am proposing here. In this view, both lay experts and official experts mediate between a body of knowledge and decision contexts. Both need to fight for recognition and acceptance (though the official expert may have an advantage in this game, especially under conditions where access to elites is important). The nature of the knowledge in both cases is not essential to the framework: it could be a body of scientific knowledge, or a body of practical, tacit, or secret knowledge (see Stehr and Grundmann 2005). The important aspect is that expertise mediates between a body of knowledge and its application. Scientists can fulfil this role, but they are not the only group that can.
Wynne and Callon have touched upon the notion of stakeholder inclusion which becomes even more prominent in the framework of Post-normal Science, a situation in which ‘[s]takes are high, facts are uncertain, values in dispute, and decisions urgent’ (Funtowicz and Ravetz 1990; Ravetz 1999). This framework explicitly drops the distinction between the action-oriented urgency of politics and the absence of such pressure within science, a distinction introduced at the beginning of this paper and emphasized by Collins.
In post-normal situations, science and decision-making become closely coupled. Does this bridge the gap between science and politics? Or is this entanglement a hindrance to decision-making? The authors believe that credible solutions must rely on the legitimation of public participation. Science should stop pretending that it can provide reliable knowledge for decisions. Under conditions of uncertainty and value conflict, experts and lay people are alike, and should be given equal treatment in the process:
‘Persons directly affected by an environmental problem will have a keener awareness of its symptoms, and a more pressing concern with the quality of official reassurances, than those in any other role. Thus they perform a function analogous to that of professional colleagues in the peer review or refereeing process in traditional science, which otherwise might not occur in these new contexts’ (Ravetz 1993: 649).
In a similar way to Wynne, the authors hold that official expertise and lay expertise should be seen as complementary, not as antagonistic:
‘The Post-Normal Science approach should not be interpreted as an attack on the accredited experts, but rather as assistance. The world of “normal science” in which they were trained has its place in any scientific study of the environment, but it needs to be supplemented by awareness of the ‘post-normal’ nature of the problems we now confront’ (Ravetz 1993: 653).
Here the stakeholder engagement is modelled explicitly on the institution of peer review as practiced within scientific communities. But it is not obvious what the reason for inclusion is: is it the lay knowledge of stakeholders, or is it the democratic principle of inclusion of stakeholders, irrespective of their status as experts?
If lay people have special knowledge that is thought to be essential, or at least an important complement to the specialist (scientific or professional) knowledge, this needs to be spelled out. If, on the other hand, lay people are seen as equally entitled to make judgements in the face of uncertainty, as the experts are, then the rationale for including them is a principle of democracy.
Yearley (2000: 109) states: ‘Funtowicz and Ravetz stress that citizen involvement is proposed by them not because they are committed to the furthest extension of democracy, but because the involvement of a larger group of peers, with different kinds of knowledge, will be beneficial to the production of high-quality knowledge’. In this interpretation PNS would improve our knowledge of the world, whereas in another reading it would lead to better decisions. Proponents of this latter view have pointed out that there is a similar interest in policy studies which should be taken into account:
‘It is crucial now for scholars of PNS to make explicit reference to other heuristic concepts such as “deliberative policy analysis” (…) which emphasize the importance of governance and institutionalizing of participation and account for social construction in politics and science’ (Turnpenny et al. 2011).
In contrast to science-based notions of expertise, the contributions in this section emphasize lay expertise and lay-expert interaction and thematize the importance of stakeholder participation. However, they stop short of conceptualizing the fundamental difference between knowledge production and decision contexts, and the attendant question of the specific properties of the knowledge, the ‘type of knowledge’, that can be used for decision purposes. This literature does not address in sufficient detail how stakeholder representation relates to the question of knowledge production and application.
The upshot of this section is that expertise is either conceptualized in largely scientific terms, or lay people are involved in decision-making while it remains unclear what their expertise rests upon. Expertise is discussed either in close analogy to a scientific ideal, neglecting the role of non-scientific experts (field experts), or ideals of political participation are brought in which are themselves modelled on scientific practices (extended peer review). Also unexamined is the relation between interests and ideas, which seems to be crucial in this context.
A conceptual refinement of the role of the scientist as expert in the policy advisory process has been developed by Roger Pielke Jr. (2007). His typology identifies different roles of scientists engaged in different ways with a decision-making process. These roles are presented as pure scientist, science arbiter, issue advocate, and honest broker. He calls them experts, but on several occasions he also uses the term ‘scientists and other experts’ (without defining what these ‘other experts’ might be). The pure scientist has no interest in the decision-making process and simply wants to share information about facts. ‘The Science Arbiter serves as a resource for the decision-maker, standing ready to answer factual questions that the decision-maker thinks are relevant. The Science Arbiter does not tell the decision-maker what he or she ought to prefer’ (Pielke Jr. 2007: 2).
In contrast, the issue advocate tries to convince the decision-maker of one best course of action. Finally, the honest broker leaves it to the decision-maker to reduce the options and to make a choice: ‘The defining characteristic of the honest broker of policy alternatives is an effort to expand (or at least clarify) the scope of choice for decision-making in a way that allows for the decision-maker to reduce choice based on his or her own preferences and values’ (Pielke Jr. 2007: 2–3).
A characteristic of both honest brokers and issue advocates is an explicit engagement with decision alternatives, whereas the pure scientist and the science arbiter are not concerned with a specific decision but instead serve as information resources.
Unlike the science arbiter, the honest broker seeks explicitly to integrate scientific knowledge with stakeholder concerns in the form of alternative possible courses of action.
I noted above that scientists often do not have the knowledge that could serve and justify a specific policy. This gap between what is known in scientific terms and what would need to be known for practical purposes can be exploited on both sides of the science-policy interface. Pielke Jr. (2007: 77) argues that:
‘Contemporary science policies create strong incentives for scientists to wage political battles through science by emphasizing the roles of Pure Scientist and Science Arbiter in all cases, when in fact an advocacy position is actually being expressed. At the core of the argument presented here is a critique of the longstanding expectation that science can and should be separated from considerations of the applications of science’.
Pielke argues that the social influence of experts on the policy process is strong:
‘Democracy is a competitive system in which the public is allowed to participate by voicing their views on alternatives presented to them in the political process. Such alternatives do not come up from the grassroots any more than you or I telling an auto mechanic what the options are for fixing a broken car. Policy alternatives come from experts. It is the role of experts in such a system to clarify the implications of their knowledge for action and to provide such implications in the form of policy alternatives to decision-makers who can then decide among different possible courses of action’ (Pielke Jr. 2007: 12; see Brown 2008 for a critique).
Two aspects need further elaboration here. For one, it is not clear that only experts (in the sense of scientific experts) introduce policy options into the political process. Like the previous frameworks reviewed here, Pielke conceives of experts by and large as scientists of some sort. Secondly, the notion of the honest broker could be seen as misleading. It could indicate that other roles are not honest, or that some brokering is not honest. It would be a difficult judgement, depending on the merits of each case, to evaluate the honesty of such brokering. The main problem with the term, however, is the suggestion that experts as experts could somehow be independent from the decision process which they have been asked to join. Brokers make matches, select options, and suggest courses of action. In this sense experts are brokers, and it would be problematic to restrict their role to a widening of policy options. Perhaps Pielke does not see this because he holds on to the ideal of impartiality as a virtue of experts.