Abstract
Ethicisation refers to the tendency to frame issues in ethical terms and can be observed in different areas of society, particularly in relation to policy-making on emerging technologies. The turn to ethics implies increased use of ethics expertise, or at least an expectation that this is the case. Calling for experts on ethics when ethically complicated questions need to be handled helps us to uphold central virtues, but there are also problems connected with ethicisation. In policy-making processes, the turn to ethics may not always be a sign of a sincere aspiration to moral performance, but a strategic move to gain acceptance for controversial or sensitive activities, and ethicisation may depoliticise questions and constrain room for democratic participation. Nevertheless, ethicisation, and the ensuing call for ethics experts, suggests an expectation of confidence in ethics and ethics expertise, and that ethical guidance is an effective way of governing people’s behaviour in a morally desirable way. The purpose of this article is to explore democratic and epistemic challenges of ethicisation in the context of emerging technologies, with a specific focus on how the notions of under-reliance and over-reliance on ethics expertise can unpack the processes at play. Using biotechnology and the EU process on bio-patents, together with the publication of ethical guidelines for AI development, as illustrations, it is demonstrated how ethicisation may give rise to democratic and epistemic challenges that are not explicitly addressed in discussions on the political use of ethics expertise.
Introduction
The purpose of this article is to explore democratic and epistemic challenges of ethicisation in the context of emerging technologies, with a specific focus on how the notions of under-reliance and over-reliance on ethics expertise can unpack the processes at play. Using biotechnology and the EU process on bio-patents, together with the publication of ethical guidelines for AI development, as illustrations, it is demonstrated how ethicisation may give rise to democratic and epistemic challenges that are not explicitly addressed in discussions on the political use of ethics expertise.
Ethicisation refers to how technological and other issues are commonly framed as ethical, and how ethics is perceived to be a tool to resolve conflicts of interest, dilemmas or controversies (Bogner 2010; Cavaggion 2019). A tendency to frame scientific and technological phenomena in ethical terms can be observed in different areas of society; some even call it a hegemonic trend (Bogner 2009; Petersen 2011). For instance, in the wake of gene-technological and other biotechnological advances in the twentieth century, the field of bioethics emerged as a response to the ethical dilemmas and controversies that this development gave rise to (Evans 2002, 2012). As a consequence, bioethics expertise is called for in an advisory function both in the clinic (Fox et al. 2007, p. 19) and at the policy level (Trotter 2002). Medicine may be the most obvious field to deliberately reflect on and act upon the ethical consequences of its research and practice, but ethical dimensions are addressed in practically all areas of society, such as engineering (Bowen 2014), accountancy (Kumarasinghe et al. 2021) and education (Falkowski and Ostrowicka 2021); professions in general have their own ethical codes (Illinois Institute of Technology); public authorities and private corporations produce policy documents outlining the moral values on which their activities are based and to which their employees should adhere (Andersson and Ekelund 2021); and policy-makers ask for ethical input on morally sensitive issues, especially so in the context of regulation of new technologies (Bogner 2009).
A case that has left its mark in this regard is the controversial question of biological patents in the European Union (EU) in the late twentieth century, which was resolved only when ethics expertise was called for (Tallacchini 2015; Busby et al. 2008). A recent example that to some extent draws on the bio-patent case in its attitude to the role of ethics is the ongoing process on regulation of artificial intelligence (AI) in the EU, a process in which input from ethics expertise was the point of departure (EGE 2018). The publication of ethical guidelines for AI development by both public and private entities (Jobin et al. 2019) is another example of how ethical dimensions are addressed in relation to AI development; the last five years have witnessed the dissemination of some hundred such guidelines.
Apparently, essentially different areas, facing dissimilar problems, increasingly turn to ethics as a guide to morally better solutions to ethical problems, but also to problems that may involve other aspects, or to issues that may not primarily be ethical in character. Consequently, moral philosophers and other ‘ethics experts’ are frequently asked for advice and may exert influence on ethical guidelines and policies that are supposed to govern people’s behaviour in a desirable way (Trotter 2002). Calling for experts on ethics when ethically complicated questions need to be handled is undoubtedly a good thing, helping us to uphold central virtues (cf. Tiesenkopfs et al. 2019), but there are also problems connected with ethicisation. The turn to ethics may not always be a sign of a sincere aspiration to moral performance, but can also be a strategic move to gain acceptance for controversial or sensitive activities (Hagendorff 2020; Busby et al. 2008; Littoz-Monnet 2015); ethicisation may depoliticise questions and thereby constrain room for democratic participation (Urbinati 2014; Hedlund 2014); and defining questions as questions of ethics may individualise structural problems in a way that removes them from political consideration (Petersen 2011). Nevertheless, ethicisation, and the ensuing call for ethics experts, suggests an expectation of confidence in ethics and ethics expertise, and that ethical guidance is an effective way of governing people’s behaviour in a morally desirable way. However, this confidence can also be the reverse of the ethical medal, as too little or too much confidence may give rise to epistemic and democratic concerns.
As will be demonstrated, a further implication of ethicisation is two interrelated problems with confidence that are only implicitly addressed in the ethics expertise literature. The first problem is what I denote under-reliance on ethics expertise, namely the risk that ethics advice is asked for but for some reason not listened to. Ignorance or resistance among those who are supposed to implement the ethics, or a lack of mechanisms of reinforcement, can sometimes explain why ethics advice is not put into effect, but ethics may also be used strategically to gain acceptance for controversial or sensitive activities and thereby run the risk of being perceived as something like an ethical alibi. The second problem is what I call over-reliance on ethics expertise, which occurs when the fact that ethical advice has been provided sends the message that the issue at hand is exhausted, with the risk that other urgent aspects will be overlooked. Both under-reliance and over-reliance constitute epistemic as well as democratic challenges to the political use of ethics expertise. From an epistemic point of view, the confidence in ethics expertise could be challenged, albeit in different ways. From a democratic point of view, under-reliance on ethics expertise may give rise to legitimacy concerns, while over-reliance may narrow the elucidation of an issue and, as an effect, limit the space for potential alternative interpretations.
To illustrate how ethicisation may give rise to problems of under-reliance and over-reliance on ethics expertise in the context of emerging technologies, I will look into two cases with potentially huge implications for society: biotechnology and AI. Both biotechnology and AI open up the prospect of enhancing and even creating life, and so give rise to existential questions of life itself—the metaphor ‘playing God’ is used as criticism both towards gene technology (Evans 2002) and towards AI (Gent 2015)—thereby bringing ethical issues to the fore. And even though living machines are highly unlikely in the foreseeable future, AI technology gives rise to intriguing questions of safety, privacy, discrimination and other issues that are frequently addressed in ethical terms.
In the following, I clarify what I mean by ethicisation and reliance on ethics expertise, define a concept of ethics expertise and problematise the use of ethics expertise in democratic decision-making. Next, I discuss epistemic and democratic worries that are commonly brought up in the experts-and-democracy literature, and argue that ethicisation requires that we also pay attention to under-reliance and over-reliance on ethics expertise. After that, I situate biotechnology and AI in the context of ethicisation, and analyse the two cases with respect to under-reliance and over-reliance on ethics expertise. Finally, I discuss how under-reliance and over-reliance on ethics expertise add to the democratic and epistemic worries that the political use of ethics expertise gives rise to.
Some Initial Clarifications
‘Ethicisation’ is used with different meanings and can denote, for instance, the invocation of immaterial values in constitutional law (Cavaggion 2019) or the process of working out a code of ethics in an organisation (Carroll 2015). In this paper, ethicisation is understood as the tendency to frame problems as ‘ethical’ at the (possible) expense of other aspects, such as economic, legal or political aspects, and, more specifically, as the tendency of policy-makers to frame issues with emerging technologies in ethical terms.
Ethicisation should not be confused with reliance on ethics expertise. Whereas ethicisation is about the tendency, or trend, to turn to ethics, reliance on ethics expertise could be seen both as a presupposition for ethicisation—without reliance on ethics expertise, policy-makers might not turn to ethics to begin with—and an effect of ethicisation: the fact that problems are defined as ethical problems is reason to expect that experts in ethics are involved. It is this latter aspect of reliance on ethics expertise that is of main interest in this study.
Reliance on ethics expertise is thus primarily studied in this paper as an effect of ethicisation and is understood as the ‘neutral core of trust’, meaning that we ‘rely on someone to do or ensure something when we judge them to have the relevant competence, motivation, and opportunity’ (De Fine Licht and Brülde 2021). Hence, reliance on ethics experts is about them having the relevant competence in ethics, i.e., that they are in fact ethics experts; that they are motivated to do what they are expected to do in a given situation, e.g., providing clarifications on ethical matters; and that they have the opportunity to do so, e.g., that they are asked to act as ethics experts in a governmental committee. While not denying that reliance on ethics expertise is a condition for ethicisation in the first place, ethicisation also has effects on reliance on ethics expertise. One such effect is that ethicisation can lead both to under-reliance and to over-reliance on ethics expertise. This is the central focus of this paper: not why ethicisation occurs, but how ethicisation affects reliance on expertise and the epistemic and democratic challenges that this gives rise to.
Depending on how the turn to ethics plays out, ethicisation can lead to under-reliance or over-reliance on ethics expertise. Under-reliance on ethics expertise points to situations when ethics advice is called for but not necessarily taken into account. In such cases, ethics may have been used as legitimation, to send signals of confidence or to put the public at ease. In other words, under-reliance depicts situations when ethics becomes an alibi for controversial decisions or lines of action. Certainly, this presumes strategic moves or carelessness regarding the impact of ethics, but under-reliance can occur also without anyone intending or aiming for it. Over-reliance, on the other hand, applies to a prioritisation of ethics that makes it difficult for other perspectives to come through, with the risk that important aspects will not be considered. The extent to which over-reliance occurs depends on the scope of the commission given to the ethics expertise. When ethics is defined broadly and includes, for instance, socioeconomic or redistributive aspects of the issue at hand, the risk that such aspects are overlooked is smaller compared to when ethics is more narrowly defined. On the other hand, an extensive definition of ethics suggests that more aspects will be dealt with by the ethics experts, thereby not necessarily leaving room for citizen concerns (Littoz-Monnet 2021, p. 31). Hence, over-reliance may give rise to democratic concerns in different ways. In cases when the input of ethics is not provided by experts in ethics, although this is to be expected, we could face both under-reliance and over-reliance on ethics expertise: under-reliance insofar as the outcome of the job is deemed inferior; over-reliance insofar as the expectation is that ‘proper’ experts are doing the job. What, then, is a ‘proper’ ethics expert? What do we mean by ethics expertise?
Ethics Expertise
Ethics expertise has emerged as a category of specialised knowledge to be used in value-based questions, and we could expect decision-makers to resort to ethics expertise in situations when it is impossible to reach an agreement on controversial and sensitive policy choices via democratic processes (Littoz-Monnet 2015). As for the special kind of expertise that scholarly philosophers contribute, we need to consider whether we could equate the expertise of experts in ethics with the expertise of experts in climatology, computer science, epidemiology or other scientific disciplines. Philosophers themselves tend to disagree on the matter (Moreno 2006), but considering the fact that something designated ethics expertise plays a role in policy-making processes, it is important to clarify what this term refers to. While the specific expertise required to be denoted an ethics expert in, for instance, clinical settings is a contested issue (Scofield 2018), scholarly training in moral philosophy and certification by philosophical and bioethical institutions arguably are signs of some expertise in ethics, and practical training in applied ethics could confer the expertise required to be considered an ethics expert (Sanchini 2015).
To some degree, the scholarly debate on the societal role of moral philosophers has been formulated in terms of the possibility of such expertise (e.g., Tong 1991; Parker 2005; Sanchini 2015). However, this way of putting the issue is partly misguided, as the problem is not whether moral philosophers could be experts—they certainly can be, on many things. Rather, the debate concerns their role as advisers on questions of morality, and their alleged expertise in knowing the right morality. Supposed moral expertise would refer to expertise in making judgements about right and wrong in an absolute sense. So understood, a moral expert is only possible if moral judgements are objective. This is a position of moral realism, holding the view that there are knowable and objective moral truths (Dancy 2011), a position that is widely disputed (see, e.g., Sinnott-Armstrong 2019). However, as Waldron (1999, ch. 8; see also Yoder 1998) argues, even if moral realism is correct, the fact of moral disagreement makes it impossible to identify the moral experts; hence, moral realism is irrelevant. Thus, the problem with the debate is how expertise is (implicitly) understood. As long as ethics expertise does not refer to some kind of moral expertise (cf. Friele 2003), ethics expertise is possible in the same way as epistemic expertise.
Drawing on Hedlund (2014), I contend that it is fruitful to make a distinction between, on the one hand, ethics expertise, referring to expertise in ‘providing systematic analysis of ethical concepts and positions, presuppositions of such positions and the relations and the distinctions between them’ (2014, p. 285) to illuminate thinking and to encourage an informed ethical debate (Nussbaum 2002; Brock 2006), and, on the other hand, moral expertise, referring to expertise in evaluating the rightness of moral judgements. Such a distinction recognises the ethics expert as the counterpart to the expert in epidemiology or climatology. They are all examples of ‘specialists in a well-delimited and commonly accepted competence area’ (Hedlund 2014, p. 284), and have, or are perceived to have, cognitive authority (Turner 2003). Further, this authority is based on an ability to justify claims ‘above and beyond the sphere of subjective opinion and belief’ (Grunwald 2003, p. 111). If we also apply this way of encircling the characteristics of ethics expertise, ethics experts possess knowledge by virtue of which they can speak with authority as to which conclusions follow from different moral theories, without taking a stand on which of these conclusions is preferable; they have expertise in clarifying and illuminating moral problems and in presenting alternative positions and justifications for those positions, that is, ‘clarifying expertise’ (Hedlund 2014, p. 288). So understood, ethics experts can contribute to stabilising and legitimising disagreements in policy contexts (Bogner 2010).
While we can conclude that it is reasonable to accept the notion of ethics expertise, the way that ethics experts are used in policy-making on emerging technologies may give rise to epistemic and democratic worries of the same kind as the use of epistemic expertise. Next, some such worries will be outlined.
Epistemic and Democratic Worries
Like all of us, experts can make cognitive mistakes, and they may be driven by ideologies or biases, pointing to the epistemic worry of whether experts contribute to more rational and informed decisions. However, this worry builds on the assumption that policy-makers use expert knowledge in a rational way (Boswell 2009). Whatever the quality of the expert advice, the fact that knowledge and research results are made known to decision-makers does not mean that they will be incorporated into political decisions. For instance, despite the vast knowledge about the need to reduce emissions of CO2 into the atmosphere to put a brake on global warming, emissions continue to increase. Political decision-making is an act of balancing different interests and values (Douglas 2009), and climate change is just one example illustrating that there are also considerations other than facts and knowledge that political decision-makers need to attend to. In addition, decision-makers sometimes make use of expert knowledge symbolically, to send signals of rationality, as ammunition in contested issues, or to substantiate a position that is already taken (Boswell 2009). All this contributes to the epistemic worry about the rationality of political decisions (although, given the logic of politics, this is to be expected), but it also gives rise to democratic worries.
From a democratic point of view, the political use of expertise may primarily have implications for equality (Turner 2003). When experts participate in democratic processes, there is a risk that their involvement may go beyond the ‘objective’ conveyance of ‘speaking truth to power’ (Wildavsky 1979). Personal values and preferences might affect the professional judgements of experts or give certain values a fact-like status (Dzur 2008; Evans 2006), and because of superiority in their specialist field, experts have good opportunities to define the problems to be considered. As is widely recognised in policy theory, problem definition is an influential tool to guide the direction of the policy process (Barbehön et al. 2015). As for ethics experts, it is not improbable that they have a propensity to call attention to ethical aspects of problems.
While I do not intend to downplay the importance of ethics and the role of ethics experts in helping decision-makers to orient themselves in the morality of policy issues, ethicisation points to something slightly different, namely, as outlined above, the tendency to frame questions as ‘ethical’, with the risk that other perspectives will be omitted. This is not to say that there are no ethical aspects in policy issues, but if policy issues are understood solely in moral terms, there is a risk that societal and structural aspects are overlooked (cf. Dowding 2020). As I will show in the analysis of biotech and AI, disregarding certain facets of an issue is an important part of the problem of over-reliance on ethics expertise.
Calling for ethics expertise in political decision-making processes warrants attention also from a democratic point of view. As outlined in Hedlund (2014), an advisory role of ethics experts in policy-making processes gives rise to some particular issues. One worry concerns the case when advice implies ethical recommendations, meaning that ethics expertise is in fact expected to act as so-called moral expertise, as these concepts are defined here. Giving recommendations is a normative endeavour, and that is a problem for democracy for the same reasons as it is a problem for democracy if any experts have more say on normative considerations than non-experts do. In a democracy, we should expect value questions to be dealt with in public deliberations in which different positions are settled on the basis of democratic principles, not to be answered by expertise. Although recommendations are just that, and not decisions, if decision-makers ask for ethical recommendations, it is not unreasonable to believe that they will base decisions about value questions on expert authority, that is, on alleged moral expertise. From a democratic perspective, the problem with this would be twofold: firstly, the notion that questions of value could be resolved by expert knowledge, and secondly, the supposition that there is something like moral expertise that could provide the right value.
Ethicisation in the context of emerging technologies adds to these epistemic and democratic worries by giving rise to under-reliance and over-reliance on ethics expertise, as will be demonstrated by the cases of ethicisation in biotechnology and AI.
Ethicisation in Biotechnology and AI
Biotechnology and AI are particularly interesting from the perspective of ethicisation, not only owing to the great emphasis on ethical aspects of these fields in policy and other contexts, but also due to the special characteristics of these technologies. Biotechnology can be described as the integration of natural sciences and engineering sciences that makes use of living organisms to develop or create different products, which raises hopes of providing solutions to urgent societal problems such as curing diseases (Wahlberg et al. 2021) and mitigating climate change (Show et al. 2021). With the possibility of making changes in genetic material, biotechnology has advanced rapidly, but the modification of genes in living organisms, including human embryos, has raised concerns and criticisms (Lima and Martínez 2021). The question of patenting genes in the late 1980s and 1990s in the EU is an example of how public worry puts ethical values up front.
AI is commonly defined as non-organic systems that can think and act rationally and similarly to humans (Russell and Norvig 2010). More specifically, AI can be described as ‘systems that display intelligent behaviour by analysing their environment and taking actions—with some degree of autonomy—to achieve specific goals’ (AI HLEG 2019, p. 1). These understandings point to characteristics of AI that distinguish it from other technologies: its ability to ‘learn’ and to act autonomously. While these attributes have given rise to a lively discussion about AI agency and implications for responsibility (Laukyte 2017; Gunkel 2017), this paper focuses on how implications of AI technology—agentic or not—are framed as ethical.
It will now be demonstrated how ethicisation of biotechnology and AI may lead to under-reliance or over-reliance on ethics expertise.
Under-Reliance of Ethics Expertise
The risk that ethics advice is asked for but not taken into account is a case of under-reliance on ethics expertise. This was evident in the process of bio-patenting in the EU at the end of the twentieth century. Worries about competitiveness vis-à-vis biotech industries in Japan and in the US had put ‘enormous pressure on the EU to harmonise the disparate provisions of its member states concerning biotechnology patents’ (Jasanoff 2005, p. 219). Originally proposed in 1988, Directive 98/44 on the legal protection of biotechnological inventions was not adopted until 1998, and the main reason for the drawn-out process was the controversial ethical issues concerning patent law and biotechnology (Busby et al. 2008). Citizens worried about the patenting of human biological material, and in several member states there was a critical debate and objections against the extent of the protection of gene patents (Jasanoff 2005).
In response to the intense debate, the Commission established the Group of Advisers to the European Commission on the Ethical Implications of Biotechnology (GAEIB), later replaced by the European Group on Ethics in Science and New Technologies (EGE). The task of the GAEIB/EGE was to advise the Commission on the ethical aspects of biotechnology and to keep the public informed (EC 1997). The group was, however, only given the limited role of providing guidance on basic and general ethical principles, not on particular inventions or patents (Plomer 2006, p. 118). Moreover, the European Patent Office (EPO) pointed to methodological weaknesses in the recommendations, such as leaning on opaque concepts and failure to advert to the pivotal distinction between human embryonic stem cells and other types of stem cells (Plomer 2006, p. 199). Nevertheless, the EGE came to play a crucial part in the realisation of the directive on biotechnology patents (Mohr et al. 2012; Busby et al. 2008). As put forward by Busby et al., ‘without the ethical stamp of approval from the GAEIB/EGE, Directive 98/44 might never have been adopted’ (2008, p. 814). In other words, the ethics expert group was needed to legitimate the patent directive. In that respect, it could be argued that ethics expertise was used symbolically.
When technical expertise is used to legitimate a political decision, it has the symbolic function of demonstrating rationality, especially in contested issues with a high level of salience (Boswell 2009). The patent issue was no doubt a contested issue with a high level of salience, and the public struggle was about moral values. For the European Commission, explicitly addressing ethical challenges would facilitate acceptance of the benefits of biotechnology and thereby ensure a single market for its products (Tallacchini 2015; Busby et al. 2008). The mandatory procedural step of consulting the EGE whenever a directive touches upon values provided an ‘aura of democratic legitimacy’ (Tallacchini 2015, pp. 164–165), and the ‘appropriate advisory structure’ (CEC 1991, p. 18) provided by the establishment of the GAEIB/EGE could fill the symbolic function of demonstrating ethical rationality. However, ‘ethics’ was loosely defined (Jasanoff 2005), and the kind of expertise required for appointment to the group was not necessarily scholarly expertise in ethics (Plomer 2006, p. 122). Rather, members should be ‘recognized experts’ (CEC 1991, p. 16), serving in a personal capacity and independently of any outside influence (Plomer 2006, p. 122); among the then six members, one was a philosopher, and the others came from law and genetics (Jasanoff 2005). Both the criteria for appointment and the independence of the group have been questioned (Plomer 2006, pp. 123–125), and there have been concerns that ethics committees of this kind are prone to ‘political capture’ (Plomer 2006, p. 126). While including only ethics experts would arguably imply a larger risk of certain aspects being overlooked than a more diverse composition, as in this case, the very commission to consider ethical aspects delimits the aspects to be handled.
All this indicates a risk that ethics may have been used as an alibi for contested issues, which is a sign of what I denote under-reliance on ethics expertise. The emphasis on the importance of including ethics, together with the fact that being an ethics expert was not a requirement, indicates that the Commission did not rely on ethics expertise to the extent that it could have done. On the one hand, the explicit task was to consider ethical aspects. On the other hand, the group designated this task was not entirely composed of ethics experts, as this notion is used in this paper. This suggests that the Commission was not primarily interested in the best possible ethical advice, but that the aim could have been to bypass or tame the public unease with bio-patents (cf. Littoz-Monnet 2021).
Ethics as an alibi for risky endeavours can also be observed in the rapidly developing field of AI, which, besides the many advantages and efficiencies it provides to society, also gives rise to serious concerns such as safety (Juric et al. 2020), responsibility (Hedlund 2022; Persson and Hedlund 2021), privacy (Carmody et al. 2021), discrimination and inequality (O’Neill 2016), CO2 emissions and the depletion of minerals and other natural resources (Crawford 2021), and the power of global companies (Zuboff 2019). This has triggered actors from academia, professional communities, politics and business to turn to ethics to ensure that AI is ‘deployed in a manner that respects dearly held societal values and norms’ (Rességuier and Rodrigues 2020, p. 2), but it has also been suggested that the discourse of ethical AI is used strategically ‘to avoid legally enforceable restrictions of controversial technologies’ (Ochigame 2019). In any case, we are witnessing the frequent publication of ethics guidelines on the development and application of AI systems.
Guidelines are a form of soft law that can have various functions, such as codifying common practices or changing professional norms, and may or may not have a substantial influence (Sossin and Smith 2003). According to the inventory by Algorithm Watch, 173 ethical guidelines on AI had been established by April 2020, and given that, for instance, China (Houweling 2021) and UNESCO (UNESCO 2021), among others, have since published ethics guidelines, we could expect that the total number of ethics guidelines on AI at the global level is close to or exceeds 200. No matter the exact number of ethics guidelines, the point is that the harmful consequences of this emerging technology are framed in terms of ethics. What effects would this ethicisation have on reliance on ethics expertise?
Certainly, the many different guidelines can be confusing or even give rise to ‘ethics shopping’ (EGE 2018, p. 14), but as pointed out by Hagendorff (2021), many of the AI ethics guidelines build on previously published guidelines and thereby echo one another regarding the topics included and how they are approached. Thus, the guidelines are creating a consensus on which ethical values should guide AI development and implementation (Fjeld et al. 2020; Fukuda-Parr and Gibbons 2021). However, as there is a lack of mechanisms to enforce the normative claims, AI ethics guidelines hardly have any influence on the behavioural routines of practitioners (Hagendorff 2020, 2021; Rességuier and Rodrigues 2020). Weak enforcement may also be a reason why ethics is so appealing to many companies and institutions, which formulate their own guidelines in an effort to evade regulation and suggest that self-governance is sufficient (Hagendorff 2020; Rességuier and Rodrigues 2020).
This approach to ethicisation can be seen as a way of using ethics to legitimate an ongoing business, and without requirements to actually follow the guidelines, ethics is arguably used as an alibi for a large-scale, profit-generating business that is not solely beneficial for society, and for which vulnerable groups pay a higher price than already favoured groups. Ethics as an alibi is clearly a case of under-reliance on ethics expertise, whether or not ethics experts are actually involved in putting the guidelines together. In the former case, qualified ethical considerations are not put into practice. In the latter case, ethics may not have been taken seriously in the first place. The endeavour of AI business actors to publish their own ethics guidelines could be a case of both.
Admittedly, some of the ethical principles suggested in the guidelines do play a role in practice. For instance, the almost ubiquitous principles of transparency and explainability, meaning that decisions taken by autonomous systems should be comprehensible to humans, are a question to which AI developers are paying a lot of attention (Fjeld et al. 2020). While this may be promising from an ethical perspective, there are also some issues with this practice. However desirable the principles of transparency and explainability may be for AI development and implementation, they are examples of ethical values that are most easily operationalised mathematically and can thus be implemented by technical means (Hagendorff 2020, 2021; Greene et al. 2019). While this in itself need not be a problem, it is problematic if technical feasibility decides which ethical values are regarded, and how (Greene et al. 2019). One reason for concern is the risk that existing practices of resolving technical problems in AI research and development, which would be carried out anyhow, are presented as ethical measures (Hagendorff 2021). Were that the case (Footnote 11), it would be an example of how ethics is used as an alibi, which arguably may lead to under-reliance on ethics expertise. Another possibility is that the ethical framing is too weak, allowing too much influence for industry interests. We would still have a case of under-reliance on ethics expertise, although not one based on the ethical framing as such. Moreover, as ethicisation has become a main approach to dealing with controversial emerging technologies, it could be presumed that concerned actors adapt to this practice and learn to formulate their issues in ethical terms. If non-experts are in practice behind allegedly ethical recommendations, there clearly is a risk of under-reliance on ethics expertise.
Another worry, which will be further elaborated below, is the risk that these very values will be the only ones considered, at the expense of other important values that are not so easily met with a technical fix.
Transparency and explainability, like other ethical values promoted in most ethical guidelines (e.g., privacy, fairness, trust), can also be criticised for how they are represented in the guidelines and for the downsides of the principles as such. Criticism can also be directed at the guidelines for omitting aspects of AI development and implementation that are harmful for society, aspects that arguably should be included in frameworks labelled ‘ethical’. I will now illustrate how these shortcomings can reveal how ethicisation may lead to over-reliance on ethics expertise and risks delimiting the space for public deliberation, thereby constituting a challenge for democracy.
Over-Reliance on Ethics Expertise
As numerous reviews of AI ethics guidelines show, there appears to be an emerging consensus on which ethical values to address. Although other values also occur, the common denominators seem to be transparency and explainability, fairness and non-discrimination, privacy, and trust (Hagendorff 2021). Notwithstanding that each of these values is relevant and important, ethical guidelines on AI have been criticised for approaching them in a manner that ‘limits what ethics can achieve’ (Rességuier and Rodrigues 2020, p. 1).
Consider explainability. While it is reasonable that individuals obtain explanations of autonomous decision-making processes, especially in the event of unwanted consequences, it is not evident how such explanations should be constructed to be meaningful to the user. Moreover, as Hagendorff (2021) points out, the value of explainability itself can be questioned. A mere description of a causal process in a technical artefact would most probably require further explanation. Additionally, an explanation is not a justification of whether a decision is appropriate or acceptable. Finally, even if perfectly explainable AI systems could be achieved, that would not ensure that they were not then used for unethical purposes. These issues are seldom mentioned in AI ethics guidelines.
Or take algorithmic bias, referring to how machine-learning systems discriminate based on patterns in the data that the systems are ‘trained’ on (Johnson 2021; Eubanks 2018). The problem of biases in training data is commonly met by pointing to the need for more complete and diverse datasets, but larger datasets could also make surveillance easier and increase violations of privacy, another value that recurs in AI ethics guidelines. However, how conflicts between different values should be resolved is rarely addressed in these documents (Jobin et al. 2019).
While criticism of the guidelines within academia may be a sign of under-reliance on ethics expertise among agents who are themselves knowledgeable or expert in ethics, other factors speak for over-reliance. Although ethics guidelines on AI have received a lot of public attention, criticism from the academic community may not always reach the public. Furthermore, the guidelines are presented as guidelines provided by experts. For instance, it is not implausible that an audience assumes that the experts in a ‘group of independent experts’ (EC 2019), as the European Commission put it when it announced the publication of its ethics guidelines on AI, have the relevant expertise. In this context, apart from expertise in AI, expertise in ethics should be expected. Moreover, these guidelines constitute a basis for the Commission’s continuing work towards regulation of AI (EC 2021), indicating that they will come to play some role in the development and implementation of AI within the EU. Whereas some of the ethical values discussed in the guidelines are insufficiently considered, the reasonable expectation that ethics expertise has played a (considerable) role in the elaboration of ethics guidelines that will influence potential legislation could lead to over-reliance on ethics expertise.
In addition, there is a risk of over-reliance on ethics expertise from the perspective of the European Commission (in this case), but in another way. As we have seen, ethicisation of an issue does not necessarily mean that ethics experts, as this notion is understood here, do the job, and many of these guidelines are not produced by ethics experts, or at least not mainly by ethics experts. For instance, of the 52 members of the Commission’s High-Level Expert Group on AI, only three were scholarly ethicists (Footnote 12). This could be a sign of over-reliance on ethics expertise insofar as a small minority of ethicists was expected to be able to provide all the necessary ethical considerations. Certainly, we cannot know to what extent these ethics experts really had an influence on the group’s work, but considering that a majority of the members came from the AI industry and/or were experts on technological aspects of AI (European AI Alliance 2022), it is unlikely that the ethics experts dominated the deliberations.
Over-reliance on ethics expertise could also arise when the existence of ethics recommendations or guidelines gives the impression that the consideration of an issue is exhausted. As several reviews of AI ethics guidelines have pointed out, many relevant values are rarely discussed or not mentioned at all. The point of departure is how AI technology could be made ‘ethical’ so as to be accepted and trusted in society, not the other way round, that is, how AI technology could be used to attain a peaceful, sustainable and just society, or whether these systems should be developed or applied in the first place (Hagendorff 2021). Moreover, the locus of ethical scrutiny is the technical design of AI, not the business of AI (Greene et al. 2019).
One thing that rarely receives attention is how dependent the development of AI is on manual work (Crawford 2021). For instance, datasets that are used to ‘train’ AI systems with supervised machine-learning methods need to be prepared by humans (Hagendorff 2020). This is dull, exhausting labelling work, often performed by low-wage labour in clickwork factories, where ‘working conditions are as bad as the market tolerates’ (Hagendorff 2021, p. 8). Such negative effects on third parties are rarely touched upon in the ethical guidelines (Hagendorff 2021).
Another aspect that is not always recognised is how AI development affects the environment. In their review of 84 ethical guidelines, Jobin et al. (2019) observed that 70 of them did not include any principles on environmental sustainability. Although the notion of ‘cloud’ computing suggests a lack of materiality, the material conditions for AI are significant, and AI technologies contribute considerably to the carbon footprint and to environmentally harmful waste (Hagendorff 2021). From the perspective of ethics, it is noteworthy that these aspects do not occupy more space in the guidelines.
Indeed, several other important implications of AI have been pointed out as rare or absent in the ethical guidelines (Footnote 13), but for the question of reliance on ethics expertise, the examples mentioned here may suffice to illustrate that just because some experts have provided ethics guidelines, this does not necessarily mean that all relevant issues have been considered. It could be objected that concerns about aspects such as working conditions and climate are not specific to AI and are covered by other guidelines and regulations. However, presented as ethics guidelines, the documents may give the impression that the ‘ethics’ of AI is exhausted, implying that climate, working conditions and other omitted concerns are not ‘ethical’ in character. As mentioned, issues of technical design receive much attention, while questions of social situatedness are rarely discussed, and even if all (technical and other) requirements of AI ethics principles were fulfilled, AI applications could still be used in ways that are harmful for the environment and for people. These are arguably ethical questions relating to AI development, the omission of which could lead to over-reliance on ethics expertise.
If important aspects of the downsides of AI are omitted in ethics guidelines intended to direct the development and application of the technology, and if we assume that the elaboration and formulation of these guidelines give room for ethics expertise, then there is a real risk of over-reliance on ethics expertise. On the other hand, if the ethics is shallow or narrow, as the omissions imply, there is reason to ask who is doing the ethics. Although we would expect moral philosophers and other scholarly ethicists to provide the ethical reasoning, the fact that many of the guidelines are provided by the AI industry and AI professionals (Hedlund 2022), and that a public institution such as the European Commission includes only a fraction of ethicists in its expert group tasked with providing ethics guidelines for AI, may be a sign of something else. However, the expectation that proper ethics experts stand behind the ethical reasoning, together with the insufficient and sometimes inadequate depth and ‘teeth’ (Rességuier and Rodrigues 2020) of the AI ethics guidelines, leaves room for under-reliance as well as over-reliance on ethics expertise.
While both under-reliance and over-reliance on ethics expertise give rise to democratic concerns, it should be noted that in the EU process behind the development of the AI guidelines, there was an explicit ambition to include actors from different areas of society ‘to allow for broad and open discussion of all aspects of AI development’ (EC 2018), and the guidelines themselves recommend that ‘stakeholders are involved throughout the AI system’s life cycle’ (AI HLEG 2019). Such processes open up the possibility of raising other concerns and potentially influencing policy-making discussions. In that respect, the EU ethical guidelines are not the final word on principles for AI development. However, as pointed out above, problem definition is a powerful tool for guiding the direction of a policy process, and the initial framing of AI development in ethical terms has to a certain extent set the stage for those who enter at later stages.
Conclusions
This paper has demonstrated how ethicisation in the context of emerging technologies adds to the epistemic and democratic challenges of expert involvement in policy-making. Ethicisation, understood as the tendency to frame problems as ethical issues, implies increased use of ethics expertise, or at least an expectation that this is the case, and I have shown how ethicisation may lead to under-reliance and over-reliance on ethics expertise. Under-reliance on ethics expertise was discussed in the case of bio-patents, in which turning to ethics became a symbolic strategy when questions of patenting body parts met resistance, and in the case of AI, where it was shown how ethics could be used symbolically to legitimate an ongoing business. The AI case also illustrated the risk of over-reliance on ethics expertise when it could be expected that all relevant ethical implications have been exhausted, although important aspects have been omitted or insufficiently considered. In both cases, the fact that it was not always ethics experts who provided the ethics could lead to under-reliance as well as over-reliance on ethics expertise.
Ethicisation in the context of emerging technologies directs attention to the ethical approaches being developed and to a cultural consensus in the making about what is desirable. A democratically problematic aspect highlighted in the analysis is the risk that ethical guidelines give the impression that the issue is exhausted, what I call over-reliance on ethics expertise. But perhaps it is expecting too much that the first ethics guidelines on a new technology should cover every thinkable issue. Provided that they are not the final word, ethics guidelines could potentially open up space for public deliberation. However, the power of defining the problem cannot be disregarded. Ethicisation as practised in the cases analysed here gives this advantage to experts (although, as we have seen, not always to experts on ethics).
While I do not intend to devalue ethics as an important contribution to policy-making on emerging technologies, there is a risk that the increasing turn to ethics obscures societal, environmental and other important aspects, especially when the focus is on making the technology ethical, as seems to be the case for AI. This focus may give the impression that society should adapt to the technology, rather than asking what society needs and developing technology for those needs. Certainly, the symbolic use of ethics that this study has pointed to, which may have under-reliance on ethics expertise as an effect, is discouraging, but to end on a positive note, a more optimistic interpretation from the perspective of ethics expertise is that the very reference to ethics is a sign that ethics is deemed important.
Notes
Ethicisation in the sense applied here is sometimes referred to as ethisation or ethicalisation. In this article, I consistently use ‘ethicisation’.
Algorithm Watch’s global inventory of AI ethics guidelines, last updated in April 2020, includes 173 guidelines (Algorithm Watch). See also footnote 9.
For instance, ethical guidelines may include statements of the requirement to follow laws (e.g., New York Association of Towns). While it certainly is unethical not to follow laws, it is first and foremost illegal. Thus, pointing out in ethical guidelines that laws should be abided by does not really add anything as regards expected behaviour, and it could be argued that turning to ethics in such cases is superfluous.
It is important to point out that this is not to say that moral philosophers should not be normative in their scholarly work as moral philosophers (cf. Dowding 2020). The discussion in this article is about the role of moral philosophers and other experts on ethics in political processes on emerging technologies.
Although global CO2 emissions declined by 5.8% in 2020 due to the corona pandemic, they rose again in 2021 and are estimated to rebound to pre-pandemic levels (IEA 2021).
A related worry is that experts are appointed in accordance with their particular stance on a controversy (Friele 2003), which would be even more detrimental for a democratic process.
In the most recent edition of their textbook Artificial Intelligence: A Modern Approach, Russell and Norvig (2021) reject the comparison with human thinking and describe AI in terms of rationality, without specifying how rationality is enacted.
In the context of AI and robotics, autonomy refers to a machine’s ability to perceive the environment, learn from experience and act independently of an external operator, even when a human has decided what the machine should do. This differs from a philosophical, conceptual understanding of autonomy, which stresses the capacity to act independently without some external power and to make one’s own choices (Haselager 2005, pp. 518–519).
The members of the first mandate of GAEIB were Baroness Mary Warnock, a distinguished moral philosopher, Hans F. Zacher, then president of the Max Planck Institute for Patent Law, Noëlle Lenoir, member of the Constitutional Council in France, Margareta Mikkelsen, medical geneticist, Marcelino Oreja, lawyer, and Marcello Siniscalco, professor of genetics (Jasanoff 2005, p. 315).
At national or regional level, we could expect a considerably larger number of AI ethics guidelines. In 2019 there were 128 such guidelines in Europe alone (Crawford 2021, pp. 223–224).
The fact that one of the ethicists in the AI HLEG expert group criticised their guidelines as ‘a case of ethical white-washing’ (Metzinger 2019) suggests that this is not an improbable interpretation.
Among the 52 members, there were 23 members from the AI industry, nine academics with a PhD degree within AI development and eight academics from the social sciences or the humanities, of which three were scholarly ethicists (one professor of philosophy, one professor of philosophy and ethics, and one professor of theoretical philosophy) (European AI Alliance, 2022).
References
Algorithm Watch. AI ethics guidelines global inventory. https://inventory.algorithmwatch.org.
Andersson, Staffan, and Helena Ekelund. 2021. Promoting ethics management strategies in the public sector: Rules, values, and inclusion in Sweden. Administration & Society. https://doi.org/10.1177/00953997211050306.
AI HLEG. 2019. Ethics guidelines for trustworthy AI. Brussels: European Commission: High-Level Expert Group on AI.
Barbehön, Marlon, Sybille Münch, and Wolfram Lamping. 2015. Problem definition and agenda-setting in critical perspective. In Handbook of critical policy studies, eds. Frank Fischer, Douglas Torgerson, Anna Durnová and Michael Orsini, 241–258.
Bogner, Alexander. 2009. Ethisierung und die Marginalisierung der Ethik: Zur Mikropolitik des Wissens in Ethikräten. Soziale Welt 60 (2): 119–137.
Bogner, Alexander. 2010. Let’s disagree! Talking ethics in technology controversies. Science Technology & Innovation Studies 6 (2): 183–201.
Boswell, Christina. 2009. The political uses of expert knowledge: Immigration policy and social research. Cambridge University Press.
Bowen, W. Richard. 2014. Engineering ethics: Challenges and opportunities. Springer.
Brock, Dan W. 2006. Truth or consequences: The role of philosophers in policy-making. In Bioethics: An anthology, 2nd edn, eds. Helga Kuhse and Peter Singer, 715–718. Malden, MA: Blackwell.
Busby, Helen, Tamara Hervey, and Alison Mohr. 2008. Ethical EU law? The influence of the European Group on Ethics in Science and New Technologies. European Law Review 33 (6): 803–842.
Carmody, Jullian, Samir Shringarpure, and Gerhard van de Venter. 2021. AI and privacy concerns: A smart meter case study. Journal of Information, Communication and Ethics in Society 19 (4): 492–505.
Carroll, Joanne. 2015. Developing a code of ethics for the digital repository of Ireland. New Review of Information Networking 20: 48–52.
Cavaggion, Giovanni. 2019. Ethicization of constitutional public order in the European multicultural state. Oxford Journal of Law and Religion 8: 493–516.
CEC. 1991. Commission of the European Communities. Promoting the competitive environment for the industrial activities based on biotechnology within the community. Commission Communication to the Parliament and the Council. SEC(91) 629 final.
Crawford, Kate. 2021. Atlas of AI. Yale University Press.
Dancy, Jonathan. 2011. Moral realism. Routledge encyclopedia of philosophy. Taylor and Francis. http://www.rep.routledge.com/articles/thematic/moral-realism/v-2. https://doi.org/10.4324/9780415249126-L059-2.
De Fine Licht, Karl, and Bengt Brülde. 2021. On defining ‘reliance’ and ‘trust’: Purposes, conditions of adequacy, and new definitions. Philosophia 49: 1981–2001.
Douglas, Heather. 2009. Science, policy, and the value-free ideal. University of Pittsburgh Press.
Dowding, Keith. 2020. The relationship between political philosophy and political science. Australian Journal of Political Science 55 (4): 432–444.
Dzur, Albert W. 2008. Democratic professionalism: Citizen participation and the reconstruction of professional ethics, identity, and practice. Pennsylvania: Pennsylvania State University Press.
EC. 1997. Commission Decision SEC(97) 2404 of 16 December 1997. https://ec.europa.eu/transparency/expert-groups-register/screen/expert-groups/consult?do=groupDetail.groupDetail&groupID=1566.
EC. 2018. Call for applications for the selection of members of the high-level expert group on artificial intelligence. Brussels: European Commission. https://digital-strategy.ec.europa.eu/en/news/call-high-level-expert-group-artificial-intelligence
EC. 2019. Artificial intelligence: Commission takes forward its work on ethics guidelines. European Commission. Press release, April 8. https://ec.europa.eu/commission/presscorner/detail/en/ip_19_1893.
EC. 2021. Proposal for a regulation on artificial intelligence – A European approach. European Commission COM(2021) 206.
EGE. 2018. Statement on artificial intelligence, robotics and ‘autonomous’ systems. European Group on Ethics in Science and New Technologies.
Eubanks, Virginia. 2018. Automating inequality: How high-tech tools profile, police, and punish the poor. Macmillan.
European AI Alliance. 2022. AI HLEG – steering group of the European AI Alliance. https://ec.europa.eu/futurium/en/european-ai-alliance/ai-hleg-steering-group-european-ai-alliance.html, last accessed April 8, 2022.
Evans, John H. 2002. Playing God? Human genetic engineering and the rationalization of public bioethical debate. University of Chicago Press.
Evans, John H. 2006. Between technocracy and democratic legitimation: A proposed compromise position for common morality public bioethics. Journal of Medicine and Philosophy 31 (3): 213–234.
Evans, John H. 2012. The history and future of bioethics: A sociological view. New York: Oxford University Press.
Falkowski, Tomasz, and Helena Ostrowicka. 2021. Ethicalisation of higher education reform: The strategic integration of academic discourse on scholarly ethics. Educational Philosophy and Theory 53 (5): 479–491.
Fjeld, Jessica, Nele Achten, Hannah Hilligoss, Adam Christopher Nagy, and Madhulika Srikumar. 2020. Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center 2020-1.
Fox, Ellen, Sarah Myers, and Robert A. Pearlman. 2007. Ethics consultation in United States hospitals: A national survey. The American Journal of Bioethics 7 (2): 13–25.
Friele, Minou Bernadette. 2003. Do committees ru(i)n the bio-political culture? On the democratic legitimacy of bioethics committees. Bioethics 17 (4): 301–318.
Fukuda-Parr, Sakiko, and Elizabeth Gibbons. 2021. Emerging consensus on ‘ethical AI’: Human rights critique of stakeholder guidelines. Global Policy 12 (S6): 32–44.
Gent, Edd. 2015. AI: Fears of ‘playing God’. Engineering & Technology 10 (2): 76–79.
Greene, Daniel, Anna Lauren Hoffman, and Luke Stark. 2019. Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In Proceedings of the 52nd Hawaiian international conference on system sciences (HICSS-52), 2122–2131.
Grunwald, Armin. 2003. Technology assessment at the German Bundestag: ‘Expertising’ democracy for ‘democratising’ expertise. Science and Public Policy 30 (3): 193–198.
Gunkel, David J. 2017. Mind the gap: Responsible robots and the problem of responsibility. Ethics and Information Technology 22: 307–320.
Hagendorff, Thilo. 2020. The ethics of AI ethics: An evaluation of guidelines. Minds and Machines 30: 99–120.
Hagendorff, Thilo. 2021. Blind spots in AI ethics. AI and Ethics. https://doi.org/10.1007/s43681-021-00122-8.
Haselager, Willem F. G. 2005. Robotics, philosophy and the problems of autonomy. Pragmatics & Cognition 13 (3): 515–532.
Hedlund, Maria. 2014. Ethics expertise in the regulation of biomedicine: The need of democratic justification. Critical Policy Studies 8 (3): 282–299.
Hedlund, Maria. 2022. Distribution of forward-looking responsibility in the EU process on AI regulation. Frontiers in Human Dynamics: Digital Impacts Online first. https://doi.org/10.3389/fhumd.2022.703510.
Houweling, Elles. 2021. China’s new AI rulebook: Humans must remain in control, Verdict, October 4th. https://www.verdict.co.uk/china-ai-rulebook/.
IEA. 2021. Global energy review 2021, International Energy Agency, Paris. https://www.iea.org/reports/global-energy-review-2021.
Illinois Institute of Technology. Codes of ethics collection. http://ethics.iit.edu/ecodes/.
Jasanoff, Sheila. 2005. Designs on nature: Science and democracy in Europe and in the United States. New Jersey & Woodstock: Princeton University Press.
Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence 1: 389–399.
Johnson, Gabbrielle M. 2021. Algorithmic bias: On the implicit biases of social technology. Synthese 198 (10): 9941–9961.
Juric, Mislav, Agneza Sandic, and Mario Brcic. 2020. AI safety: State of the field through quantitative lens. In 43rd International convention on information, communication, and electronic technology (MIPRO), 1254–1259.
Kumarasinghe, Sriyalatha, Indujeeva Keerthilal Peiris, and André M. Everett. 2021. Ethics disclosure as strategy: A longitudinal case study. Meditari Accountancy Research 29 (2): 294–323.
Laukyte, Migle. 2017. Artificial agents among us: Should we recognize them as agents proper? Ethics and Information Technology 19: 1–17.
Lima, Natacha Salomé, and A. Gustavo Martínez. 2021. Biotechnological challenges: The scope of genome editing. JBRA Assisted Reproduction 25 (1): 150–154.
Littoz-Monnet, Annabelle. 2015. Ethics experts as an instrument of technocratic governance: Evidence from EU medical biotechnology policy. Governance: An International Journal of Policy Administration and Institutions 28 (3): 357–372.
Littoz-Monnet, Annabelle. 2021. Governing through expertise: The politics of bioethics. Cambridge University Press.
Metzinger, Thomas. 2019. EU guidelines: Ethics washing made in Europe. In Der Tagespiegel, April 8.
Mohr, Alison, Helen Busby, Tamara Hervey, and Robert Dingwall. 2012. Mapping the role of official bioethics advice in the governance of biotechnologies in the EU: The European Group on Ethics’ opinion on commercial cord blood banking. Science and Public Policy 39: 105–117.
Moreno, Jonathan D. 2006. Ethics consultation as moral engagement. In Bioethics: An anthology, 2nd edn, eds. Helga Kuhse and Peter Singer, 707–714. Malden, MA: Blackwell.
New York Association of Towns. Sample code of ethics for municipalities. https://www.nytowns.org/images/Documents/Announcement/sample%20code%20of%20ethics%20for%20municipalities.pdf.
Nussbaum, Martha. 2002. Moral expertise? Constitutional narratives and philosophical argument. Metaphilosophy 33 (5): 502–520.
Ochigame, Rodrigo. 2019. The invention of ‘ethical AI’: How big tech manipulates academia to avoid regulation. The Intercept, December 20. https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/.
O’Neill, Cathy. 2016. Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Broadway Books.
Parker, Lisa S. 2005. Ethical expertise, maternal thinking, and the work of clinical ethicists. In Ethics expertise: History, contemporary perspectives, and applications, ed. Lisa Rasmussen, 165–207. Dordrecht: Springer.
Persson, Erik, and Maria Hedlund. 2021. The future of AI in our hands? To what extent are we as individuals morally responsible for guiding the development of AI in a desirable direction? AI and Ethics. https://doi.org/10.1007/s43681-021-00125-5.
Petersen, Alan. 2011. The politics of bioethics. Routledge.
Plomer, Aurora. 2006. Stem cell patents: European patent law and ethics report. FP6 ‘Life sciences, genomics and biotechnology for health’ SSA LSSB-CT-2004-005251.
Rességuier, Anaïs, and Rowena Rodrigues. 2020. AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data & Society 7 (2): 1–5.
Russell, Stuart, and Peter Norvig. 2021. Artificial intelligence: A modern approach, 4th edn. Pearson Education.
Sanchini, Virginia. 2015. Bioethical expertise: Mapping the field. Biblioteca della libertà. https://www.centroeinaudi.it/images/abook_file/213_online_Sanchini.pdf
Scofield, Giles R. 2018. What—if anything—sets the limits to the clinical ethics consultant’s expertise? Perspectives in Biology and Medicine 61 (4): 594–608.
Show, Pau Loke, Chiaki Ogino, and Mohamad Faizal Ibrahim (eds.). 2021. Biotechnology for sustainability and social well being. MDPI Books.
Sinnott-Armstrong, Walter. 2019. Moral skepticism. In The Stanford encyclopedia of philosophy. ed. Edward N. Zalta. https://plato.stanford.edu/archives/sum2019/entries/skepticism-moral.
Sossin, Lorne, and Charles W. Smith. 2003. Hard choices and soft law: Ethical codes, policy guidelines and the role of the courts in regulating government. Alberta Law Review 40: 867–893.
Tallacchini, Mariachiara. 2015. To bind or not bind? European ethics as soft law. In Science and democracy: Making knowledge and making power in the biosciences and beyond, 156–175. New York: Routledge.
Tiesenkopfs, Talis, Emils Kilis, Mikelis Grivins, and Anda Adamsone-Fiskovica. 2019. Whose ethics and for whom? Dealing with ethical disputes in agri-food governance. Agriculture and Human Values 36(2): 353–364.
Tong, Rosemarie. 1991. The epistemology and ethics of consensus: Uses and misuses of ‘ethical’ expertise. The Journal of Medicine and Philosophy 16: 409–426.
Trotter, Griffin. 2002. Moral consensus in bioethics: Illusive or just elusive? Cambridge Quarterly of Healthcare Ethics 11: 1–3.
Turner, Stephen P. 2003. Liberal democracy 3.0: Civil society in an age of experts. London: Sage.
UNESCO. 2021. Recommendation on the ethics of artificial intelligence. The United Nations Educational, Scientific and Cultural Organization. https://en.unesco.org/artificial-intelligence/ethics#recommendation.
Urbinati, Nadia. 2014. Democracy disfigured: Opinion, truth, and the people. Cambridge, MA: Harvard University Press.
Wahlberg, Ayo, Dong Dong, Priscilla Song, and Jianfeng Zhu. 2021. The platforming of human embryo editing: Prospecting ‘disease free’ futures. New Genetics and Society 40 (4): 367–383.
Waldron, Jeremy. 1999. Law and disagreement. Oxford: Clarendon.
Wildavsky, Aaron. 1979. Speaking truth to power: The art and craft of policy analysis. Boston, MA: Little Brown & Co.
Yoder, Scot D. 1998. The nature of ethical expertise. The Hastings Center Report 28 (6): 11–19.
Zuboff, Shoshana. 2019. The age of surveillance capitalism: The fight for a human future at the new frontier of power. London: Profile Books.
Funding
Open access funding provided by Lund University.
Hedlund, M. Ethicisation and Reliance on Ethics Expertise. Res Publica 30, 87–105 (2024). https://doi.org/10.1007/s11158-023-09592-5