We now have a detailed, empirically informed and, I hope, conceptually interesting view of the ethics of AI. This leads to the question: what can we do about it? This chapter gives an overview of possible answers currently being discussed in the academic and policy discourses. For ease of reading, it breaks down the options into the policy level, the organisational level, guidance mechanisms and supporting activities. For each of these categories, key mitigation measures are introduced and important open issues and questions are highlighted.

5.1 Options at the Policy Level

Activities at the policy level are undertaken by political decision-makers, who may be located at the national, regional or international level. Because AI is an international and cross-boundary technology, particular attention will be paid to international policy initiatives coming from bodies such as the UN, the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the OECD. And, as a European writing a book based to a large extent on European research, I focus my attention mainly on European policy.

5.1.1 Policy Aims and Initiatives

The number of policy papers on AI is significant. Jobin et al. (2019) have provided a very good overview, but it is no longer comprehensive, as the publication of policy papers continues unabated. Several individuals and groups have set up websites, databases, observatories or other resources to track this development. Some of the earlier ones appear to have been one-off overviews that are no longer maintained, such as the websites by Tim Dutton (2018), Charlotte Stix (n.d.) and NESTA (n.d.). Others remain up to date, such as the website run by AlgorithmWatch (n.d.), or have only recently come online, such as the websites by AI4EU (n.d.), the EU’s Joint Research Centre (European Commission n.d.) and the OECD (n.d.).

What most of these policy initiatives seem to have in common is that they aim to promote the development and use of AI, while paying attention to social, ethical and human rights concerns, often using the term “trustworthy AI” to indicate attention to these issues. A good example of high-level policy aims that are meant to guide further policy development is provided by the OECD (2019). It recommends to its member states that they develop policies for the following five aims:

  • investing in AI research and development

  • fostering a digital ecosystem for AI

  • shaping an enabling policy environment for AI

  • building human capacity and preparing for labour market transformation

  • international co-operation for trustworthy AI

Policy initiatives aimed at following these recommendations can cover a broad range of areas, most of which have relevance to ethical issues. They can address questions of access to data, distribution of costs and benefits through taxation or other means, environmental sustainability and green IT, to give some prominent examples.

These policy initiatives can be aspirational or more tangible. In order for them to make a practical difference, they need to be translated into legislation and regulation, as will be discussed in the following section.

5.1.2 Legislation and Regulation

At the time of writing this text (European summer 2020), there is much activity in Europe directed towards developing appropriate EU-level legislation and regulation of AI. The European Commission has launched several policy papers and proposals (e.g. European Commission 2020c, d), notably including a White Paper on AI (European Commission 2020a). The European Parliament has shared some counterproposals (European Parliament 2020a, b) and the political process is expected to lead to legislative action in 2021.

Using the categories developed in this book, the question is whether – for the purpose of the legislation – AI research, development and use will be framed in terms of human flourishing, efficiency or control. The EC’s White Paper (European Commission 2020a) is an interesting example to use when studying the relationship between these different purposes. To understand this relationship, it is important to see that the EC uses the term “trust” to represent ethical and social aspects, following the High-Level Expert Group on AI (2019). This suggests that the role of ethics is to allow people to trust a technology that has been pre-ordained or whose arrival is inevitable. In fact, the initial sentences in the introduction to the White Paper state exactly that: “As digital technology becomes an ever more central part of every aspect of people’s lives, people should be able to trust it. Trustworthiness is also a prerequisite for its uptake” (European Commission 2020a: 1). Ethical aspects of AI are typically discussed by European bodies using the terminology of trust. Overall, the document often follows this narrative, focusing on the economic advantages of AI, including the improvement of the EU’s competitive position in the perceived international AI race.

However, there are other parts of the document that focus more on the human flourishing aspect: AI systems are described as having a “significant role in achieving the Sustainable Development Goals” (European Commission 2020a: 2), environmental sustainability and ethical objectives. It is not surprising that a high-level policy initiative like the EC’s White Paper combines different policy objectives. What is nevertheless interesting to note is that the White Paper contains two main areas of policy objectives: excellence and trust. In Section 4, entitled “An ecosystem of excellence”, the paper lays out policies to strengthen the scientific and technical bases of European AI, covering European collaboration, research, skills, work with SMEs and the private sector, and infrastructure. Section 5, the second main part of the White Paper, under “An ecosystem of trust”, focuses on risks, potential harms, liability and similar regulatory aspects. This structure of the White Paper can be read to suggest that excellence and trust are fundamentally separate, and that technical AI development is paramount, requiring ethics and regulation to follow.

When looking at the suitability of legislation and regulation to address ethical issues of AI, one can ask whether and to what degree these issues are already covered by existing legislation. In many cases the question is thus whether existing legislation is fit for purpose or whether it needs to be amended in light of technical developments. Examples of bodies of law with clear relevance to some of the ethical issues are intellectual property law, data protection law and competition law.

One area of law that is likely to be relevant and has already led to much high-level debate is liability law. Liability law deals with risks and damage sustained from using (consumer) products, whether derived from new technologies or not. It is likely to play a key role in distributing the risks and benefits of AI (Garden et al. 2019). This explains the various EU-level initiatives (Expert Group on Liability and New Technologies 2019; European Commission 2020b; European Parliament 2020b) that try to establish who is liable for which aspects of AI. Relatedly, the allocation of strict and fault-based liability will set the scene for the broader AI environment, including insurance and litigation.

Another body of existing legislation and regulation being promoted to address the ethical issues of AI is that of human rights legislation. It has already been highlighted that many of the ethical issues of AI are simultaneously human rights issues, such as privacy and discrimination. Several contributors to the debate therefore suggest that existing human rights regulation may be well suited to addressing AI ethics issues. Proposals to this effect can focus on particular technologies, such as machine learning (Access Now Policy Team 2018), or on particular application areas, such as health (Committee on Bioethics 2019), or broadly propose the application of human rights principles to the entire field of AI (Latonero 2018, Commissioner for Human Rights 2019, WEF 2019).

The discussion of liability principles at EU level is a good example of the more specific regulatory options that are being explored. In a recent review of regulatory options for the legislative governance of AI, in particular at the European level, Rodrigues et al. (2020) surveyed the current legislative landscape and identified the following proposals as being under active discussion:

  • the adoption of common EU definitions

  • algorithmic impact assessments under the General Data Protection Regulation (GDPR)

  • creating electronic personhood status for autonomous systems

  • the establishment of a comprehensive EU system of registration of advanced robots

  • an EU task force of field-specific regulators for AI/big data

  • an EU-level special list of robot rights

  • a general fund for all smart autonomous robots

  • mandatory consumer protection impact assessment

  • regulatory sandboxes

  • three-level obligatory impact assessments for new technologies

  • the use of anti-trust regulations to break up big tech and appoint regulators

  • voluntary/mandatory certification of algorithmic decision systems

All of these proposals were evaluated using a pre-defined evaluation strategy. The overall assessment suggested that many of the options were broad in scope and lacked specific requirements (Rodrigues et al. 2020). They over-focused on well-established issues such as bias and discrimination while neglecting other human rights concerns, and resource-intensive activities, such as the creation of regulatory agencies and the mandating of impact assessments, would give rise to resource constraints.

Without going into more detail than appropriate for a Springer Brief, what seems clear is that legislation and regulation will play a crucial role in finding ways to ensure that AI promotes human flourishing. A recent review of the media discourse on AI (Ouchchy et al. 2020) shows that regulation is a key topic, even though there is no agreement on whether regulation is desirable and, if so, which.

There is, however, one regulatory option currently being hotly debated that has the potential to significantly affect the future shape of technology use in society, and which I therefore discuss separately in the next section.

5.1.3 AI Regulator

The creation of a regulator for AI is one of the regulatory options. It only makes sense to have one if there is something to regulate, i.e. if there is regulation that needs to be overseen and enforced. In light of the multitude of regulatory options outlined in the previous section, one can ask whether there is a need for a specific regulator for AI, given that it is unclear what the regulation will be.

It is again instructive to look at the current EU discussion. The EC’s White Paper (European Commission 2020a) treads very carefully in this respect and discusses, under the heading of “Governance”, a network of national authorities as well as sectoral networks of regulatory authorities. It furthermore proposes that a committee of experts could provide assistance to the EC. This shows a reluctance to create a new institution. The European Parliament’s counterproposal (2020a) takes a much stronger position. It renews an earlier call for the designation of a “European Agency for Artificial Intelligence”. Article 14 of the proposed regulation suggests the creation of a supervisory authority in each European member state (see, e.g., Datenethikkommission 2019) that would be responsible for enforcing ways of dealing with ethical issues of AI. These national supervisory authorities would have to collaborate closely with one another and with the European Commission, according to the proposal from the European Parliament.

A network of regulators, or even the creation of an entirely new set of regulatory bodies, is likely to encounter significant opposition. One key matter that needs to be addressed is the exact remit of the regulator. A possible source of confusion is indicated in the titles of the respective policy proposals: where the EC speaks only of artificial intelligence, the European Parliament speaks of AI, robotics and related technologies. The lack of a clear definition of AI is likely to create problems.

A second concern relates to the distribution of existing and potential future responsibilities. The relationship between AI supervisory authorities and existing sectoral regulators is not clear. If, for example, a machine learning system used in the financial sector were to raise concerns about bias and discrimination, it is not clear whether the financial regulator or the AI regulator would be responsible for dealing with the issue.

While the question of creating a regulator, or some other governance structure capable of taking on the tasks of a regulator, remains open, such a body could be a useful support mechanism to ensure that potential regulation can be enforced. In fact, the possibility of enforcement is one of the main reasons for calls for regulation. It has frequently been remarked that talk of ethics may be nothing but an attempt to keep regulation at bay and thus render any intervention impotent (Nemitz 2018, Hagendorff 2019, Coeckelbergh 2019). It is by no means clear, however, that legislative processes will deliver the mechanisms to successfully address the ethics of AI (Clarke 2019a). It is therefore useful to understand other categories of mitigation measures, which is why I now turn to the proposals that have been directed at organisations.

5.2 Options at the Organisational Level

Organisations, whether public or private, whether profit-oriented or not, play a central role in the development and deployment of AI. Many of the decisions that influence ethical outcomes are made by organisations. Organisations will also reap many of the benefits of AI, most notably the financial benefits of developing or deploying AI. They are therefore intimately involved in the AI ethics discourse. In this section I distinguish between industry commitments, organisational governance and strategic initiatives that organisations of different types can pursue to address ethical issues of AI.

5.2.1 Industry Commitments

To achieve ethical goals in industry, it is often useful for organisations to join forces, for instance in the formulation of ethical aims (Leisinger 2003). While organisations do not necessarily share goals, benefits or burdens, there are certainly groups of organisations that do have common interests and positions. One action that such organisations can pursue is forming associations to formulate their views and feed them into the broader societal discourse. The most prominent example of such an association is the Partnership on AI, which includes the internet giants – Google, Apple, Facebook, Amazon, Microsoft – as well as a host of academic and civil society organisations. Other associations, such as the Big Data Value Association, focus on specific issues or areas, such as big data in Europe.

Industry associations may not enjoy the trust of the public when they represent industrial interests that are seen to be in opposition to the broader public good. For instance, the comprehensive Edelman Trust Barometer (Edelman 2020: 23) found that 54% of those surveyed believed that businesses were unfair, in that they only catered for the interests of the few rather than serving everybody equally and fairly.

For instance, it seems reasonable to assume that one of the main purposes of the Partnership on AI is to lobby political decision-makers in ways that serve the member companies’ interests. At the same time, it is reassuring, and maybe a testament to the high visibility of AI ethics, that the pronouncements of the Partnership on AI emphasise ethical issues more heavily than most governmental positions. Most of the members of the Partnership on AI are not-for-profit entities, and its statements very clearly position the purpose of AI in what I have described as AI for human flourishing. The Partnership on AI has published a set of tenets on its website, which starts by saying:

We believe that artificial intelligence technologies hold great promise for raising the quality of people’s lives and can be leveraged to help humanity address important global challenges such as climate change, food, inequality, health, and education. (Partnership on AI n.d.)

Cynics might argue that this is an example of ethics washing (Wagner 2018) with the main purpose of avoiding regulation (Nemitz 2018). However, while there may well be some truth in the charge of ethics washing, the big internet companies are publicly and collectively committing themselves to these noble goals. It is also important to see this in the context of other corporate activities, such as Google’s long-standing commitment not to be evil, Facebook’s decision to institute an ethics review process (Hoffman 2016) and Microsoft’s approach to responsible AI (Microsoft n.d.). Each of these companies has individually been criticised on ethical grounds and will likely continue to be criticised, but for the account of AI ethics in this book it is worth noting that these companies, as some of the most visible developers and deployers of AI, at least use a rhetoric that is fully aligned with AI for human flourishing.

This opens up the question: what can companies do if they want to make a serious commitment to ethical AI? I will look at some of the options in the following section on organisational governance.

5.2.2 Organisational Governance

Under this heading I discuss a range of activities undertaken within organisations that can help them deal with various ethical or human rights aspects of AI. Most of these are well established and, in many cases, formalised, often under legal regimes. An example of such an existing governance approach is the corporate governance of information technology, which organisations can institute following existing standards (ISO 2008). The ethical issues of AI related to large datasets can, at least to some degree, be addressed through appropriate data governance. Organisational data governance is not necessarily concerned with ethical questions (Tallon 2013, British Academy and Royal Society 2017), but it almost invariably touches on questions of ethical relevance. This is more obvious in some areas than in others. In the health field, for example, where the sensitivity of patient and health data is universally acknowledged, data management and data governance are explicitly seen as ways of ensuring ethical goals (Rosenbaum 2010, OECD 2017). The proximity of ethical concerns and data governance has also led to data governance approaches that are explicitly built around ethical premises (Fothergill et al. 2019).

Data protection is part of the wider data governance field, the part that focuses on the protection of personal data. Data protection is a legal requirement in most jurisdictions. In the EU, where the GDPR governs data protection activities, data protection is relatively clearly structured, and organisations are aware of their responsibilities. The GDPR has brought in some new practices, such as the appointment of a data protection officer for organisations, a post whose obligation to promote data protection can take precedence over obligations towards the employer. Data protection in Europe is enforced by data protection authorities: in the UK, for example, by the Information Commissioner’s Office (ICO). The national authorities are supported by the European Data Protection Board, which promotes cooperation and consistent application of the law. The European Data Protection Supervisor is an independent EU-level supervisory authority that is part of the board. This is an example of a multi-level governance structure that defines clear responsibilities at the organisational level but extends from the individual employee to national and international regulatory activities. I will come back to this in the recommendations chapter as an example of a type of governance structure that may be appropriate for AI more broadly.

Breaches of data protection are, for many companies, a risk that needs to be managed. Similarly, the broader topic of AI ethics may entail risks, not just in terms of reputation, but also of liability, that organisations can try to address through existing or novel risk management processes. Given that risk assessment and management are well-established processes in most organisations, they may well provide the place to address AI ethics concerns. Clarke (2019b) therefore proposes a focus on these processes in order to establish responsible AI.

A downside of the organisational risk management approach to the ethics of AI is that it focuses on risks to the organisation, not risks to society. For broader societal issues to be addressed, the organisational risk management focus needs to broaden beyond organisational boundaries. As Clarke (2019b) rightly states, this requires the organisation to adopt responsible approaches to AI, which need to be embedded in a supportive organisational culture and business purposes that strengthen the motivation to achieve ethically desirable outcomes.

The EU’s AI ethics debate seems to lean heavily towards a risk-based approach. This is perfectly reasonable, in that many AI applications will be harmless and any attempt to regulate them would not only be likely to fail, but also be entirely superfluous. However, there are some AI applications that are high-risk and in need of close scrutiny, and it may be impossible to allow some of those to go forward due to ethical considerations (Krafft et al. 2020). This raises the question of whose responsibility it is to assess and manage any risks. The European Parliament (2020b) has suggested that the deployer of an AI system is in control of any risks. The level of risk should determine the liability regime under which damage is dealt with. For the deployer to have clarity on the risk level, the European Parliament has suggested that the EC should maintain a list of high-risk AI systems that require special scrutiny and would be subject to a strict liability regime. In the annex to its draft regulation, the European Parliament lists the following technologies: unmanned aircraft, autonomous vehicles (automation levels 4 and 5), autonomous traffic management systems, autonomous robots and autonomous devices for the cleaning of public places (Fig. 5.1).

Fig. 5.1 High-risk AI systems according to the European Parliament (2020b)

This approach of focusing on high-risk areas has the advantage of legal clarity for the organisations involved. Its weakness is that it makes assumptions about the risk level that may be difficult to uphold. If risk is determined by the “severity of possible harm or damage, the likelihood that the risk materializes and the manner in which the AI-system is being used” (European Parliament 2020b: art. 3(c)), then it is difficult to see how an abstract list like the one illustrated in Figure 5.1 can determine the risk level. The examples given in the draft regulation are all of systems that carry a risk of physical injury, which is understandable in the context of a liability regulation that is strongly influenced by liability for existing systems, notably from the automotive sector. It is not clear, however, how one would compare the risk of, say, being hit by a falling drone with the risk of being wrongly accused of a crime or the risk of political manipulation of a democratic election.
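To make the difficulty concrete, consider a purely illustrative sketch (not part of the draft regulation, and with invented names, weightings and numbers) of how the three factors it cites, severity, likelihood and manner of use, might be combined into a risk classification. The point is that the resulting class depends on the deployment context rather than on the type of technology alone.

```python
# Hypothetical sketch only: the draft regulation does not define a scoring
# formula. This illustrates why a static list of technologies cannot fix the
# risk level: the same system type yields different scores in different
# deployment contexts.

from dataclasses import dataclass


@dataclass
class Deployment:
    system: str          # e.g. "unmanned aircraft"
    severity: float      # severity of possible harm, 0 (negligible) to 1 (catastrophic)
    likelihood: float    # probability that the harm materialises, 0 to 1
    exposure: float      # "manner of use": how many people are exposed, 0 to 1


def risk_score(d: Deployment) -> float:
    """Combine the three factors named in the draft regulation (illustrative weighting)."""
    return d.severity * d.likelihood * d.exposure


def risk_class(score: float, high_risk_threshold: float = 0.2) -> str:
    return "high-risk (strict liability)" if score >= high_risk_threshold else "ordinary (fault-based liability)"


# The same technology, two contexts, two different risk levels:
rural_survey = Deployment("unmanned aircraft", severity=0.4, likelihood=0.1, exposure=0.1)
urban_delivery = Deployment("unmanned aircraft", severity=0.7, likelihood=0.4, exposure=0.9)

for d in (rural_survey, urban_delivery):
    s = risk_score(d)
    print(f"{d.system}: score={s:.2f} -> {risk_class(s)}")
```

In this sketch the same class of system falls on different sides of the high-risk threshold depending on where and how it is deployed, which is precisely why an abstract list of technologies struggles to capture the definition quoted above.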

A risk-based approach nevertheless seems likely to prevail, and there are good reasons for this. The German Datenethikkommission (2019) has proposed a regulation scheme that may serve as a good example of the risk-based approach: it distinguishes five levels of criticality, ranging from applications with negligible potential for harm, which would remain unregulated, to applications with an untenable potential for harm, which would be banned. The process of allocating AI systems to these (or similar) risk levels will be key to the success of a risk-based approach to AI ethics at a societal level, which is a condition for organisations to successfully implement it.

Risk management needs to be based on an understanding of risks, and one aspect of risks is the possible consequences or impacts on society. It is therefore important for organisations aiming to address the ethics of AI proactively to undertake appropriate impact assessments. Some types of impact assessment are already well established, and many organisations are familiar with them. Data protection impact assessments, a development of privacy impact assessments (Clarke 2009, ICO 2009, CNIL 2015), for example, form part of the data protection regime established by the GDPR and are thus implemented widely. Other types of impact assessment cover the environmental impact (Hartley and Wood 2005), the social impact (Becker 2001, Becker and Vanclay 2003), the ethical impact (Wright 2011, CEN-CENELEC 2017) and any impact on human rights (Latonero 2018).

Overall, the various measures that contribute to good organisational governance of AI constitute an important part of good practice that organisations can adopt in order to reduce risks from AI. Organisations may take these measures because they want to do the right thing, but a diligent adoption of good practice can also serve as a defence against liability claims if something goes wrong. This points to the last aspect of organisational responses I want to discuss here: the strategic commitments of an organisation.

5.2.3 Strategic Initiatives

Many companies realise that their responsibilities are wide-ranging and therefore include a commitment to ethical principles and practices in their strategic thinking. This can be done in many ways. The most common term used to denote an organisation’s commitment to the greater good is “corporate social responsibility” (CSR) (Garriga and Melé 2004, Blue & Green Tomorrow 2013, Janssen et al. 2015). Libraries have been written about CSR. For the purposes of this book, it suffices to say that CSR is a well-established concept recognised by organisations and may well serve as a starting point for discussing ethical aspects of AI.

One activity often fostered by CSR and arguably of central importance to ensuring adequate coverage of ethical issues in organisations is stakeholder engagement. Stakeholders, following Freeman and Reed (1983), are individuals or groups who are significantly affected by an action or potentially at risk, who thus have a “stake” in it (Donaldson and Dunfee 1999). The term “stakeholder” was coined by Freeman and his collaborators (Freeman and Reed 1983) as a counterpoint to the exclusive focus on shareholders. Stakeholder engagement is now well recognised as a way for organisations to better understand their environment (O’Riordan and Fairbrass 2014). In the ICT world, of which AI forms a part, there can be an affinity between stakeholder engagement and user engagement (Siponen and Vartiainen 2002). Users are understood to be those people who will benefit from a company’s products or services, once launched, while stakeholders are those who may experience an impact from the company’s work, whether they are users of the product or service or not.

Stakeholder engagement can cover a broad range of activities, and there is little agreement on which methods should be employed to ensure ethically acceptable outcomes. A further and more structured way for organisations to flag their strategic desire to take ethical issues seriously, which may include stakeholder engagement but goes beyond it, is the integration of human rights into organisational strategy and practices.

As a number of the most prominent AI ethics issues are also human rights issues (privacy, equality and non-discrimination), there have been calls for governments, and also private-sector actors, to promote human rights when creating and deploying AI (Access Now Policy Team 2018). The exact nature of the relationship between ethics and human rights is up for debate. While they are not identical, they are at least synergistic (WEF 2018).

Fortunately, the question of the integration of human rights into organisational processes is not entirely new. The UN developed guiding principles for business and human rights that provide help in implementing the UN “protect, respect and remedy” framework (United Nations 2011). While these are generic and do not specifically focus on AI, there are other activities that develop the thinking about AI and human rights further. The Council of Europe has developed principles for the protection of human rights in AI (Commissioner for Human Rights 2019) and more detailed guidance tailored for businesses has been developed by BSR (Allison-Hope and Hodge 2018).

The preceding sections have shown that there are numerous options for organisations to pursue if they want to address ethical issues of AI. A key question is whether organisations actually realise and implement these options.

5.2.4 Empirical Insights into AI Ethics in Organisations

As part of the empirical case studies undertaken to understand the social reality of AI ethics (see Macnish et al. 2019), respondents were asked how their organisations respond to these issues. It is worth highlighting, however, that the case studies made clear that organisations are highly sensitive to some issues, notably those specific to machine learning that are prominently discussed in the media, such as bias, discrimination, privacy and data protection, and data security. Figure 5.2 summarises and categorises the strategies pursued by the organisations investigated.

Fig. 5.2 How case study organisations address ethical issues of AI: empirical findings

The organisations researched spent significant efforts on awareness raising and reflection, for example through stakeholder engagement, setting up ethics boards and working with standards, and they explicitly considered dilemmas and questions on how costs and benefits could be balanced. They particularly employed technical approaches, notably for data security and data protection. There was repeated emphasis on human oversight, and several of the companies offered training and education. In their attempts to balance competing goods, they sometimes sought organisational structures such as public-private partnerships that could help them find shared positions.

The research at the organisational level suggested that public and private organisations in Europe take AI ethics very seriously, although the sample size was not sufficient to support this claim more broadly. The organisations engaged with the topic proactively and were considering or already had in place several measures to address ethical challenges. It is noticeable that these measures focused on a subset of the ethical issues described earlier, notably on the specific issues arising from machine learning and in particular those that were already well regulated, such as data protection.

Similarly, the organisations in the sample did not make use of the entire breadth of organisational strategies suggested by the literature. They were not part of any industry associations that aimed to influence the AI ethics environment. And while they probably had organisational risk management or impact assessment structures, these were not highlighted as key to addressing the ethics of AI. Stakeholder engagement, by contrast, was a prominent tool in their inventory. And while they recognised the importance of human rights, they did not make use of formalised methods for integrating human rights into their processes.

To summarise, one can say that the empirical findings from work with organisations suggest that, despite a high level of interest in and awareness of AI ethics, there are still numerous options that could be used more widely, and there is ample room for development.

This leads to the next point, which is the question of what individuals within and outside organisations can do in order to better understand the ethical issues of AI, and which activities can be undertaken in order to deal with such issues effectively.

5.3 Guidance Mechanisms

The term “guidance mechanisms” is used to describe the plethora of options and support mechanisms that are meant to help individuals and organisations navigate the waters of AI ethics, a very dynamic environment with many actors contributing to the debate and providing tools.

This section presents a brief overview of some of the current activities. It illustrates some of the available options that complement the policy-level and organisational-level activities. The guidance mechanisms are not independent of policy and organisational options, but often underpin them or result from them, and they offer ways of implementing them. Some of the guidance mechanisms listed here predate the AI ethics debate but are applicable to it, whereas others have been created in direct response to AI ethics.

The first set of mechanisms consists of guidelines that aim to help users navigate the AI ethics landscape. The most prominent of these, from a European perspective, were developed by the High-Level Expert Group on AI (2019) assembled by the European Commission. These guidelines stand out because of their direct link to policymakers, and they are likely to strongly influence European-level legislation on AI. They are by no means the only guidelines, however. Jobin et al. (2019) have identified 84 sets of AI ethics guidelines, and a related study found an additional nine sets (Ryan and Stahl 2020). There is no doubt that the production of guidelines continues, so that by the time these words reach a reader, there will be more.

Jobin et al. (2019) do an excellent job of providing an overview of the guidelines landscape and the common themes and threads that pervade it. Guidelines are a good way of highlighting key principles and spelling out general expectations. They have a number of shortcomings, though: they tend to be high-level and therefore do not provide immediately actionable advice. The EU’s High-Level Expert Group (2020) is therefore developing more applicable tools.

In addition to questions of implementation, there are several more general concerns about guidelines. The large number of these guidelines and their underlying initiatives can cause confusion and ambiguity (Floridi and Cowls 2019). There is a suspicion that they may be dominated by corporate interests (Mittelstadt 2019), a concern that has been prominently voiced by members of the High-Level Expert Group (Metzinger 2019). Guidelines can be interpreted as examples of ethics washing or of avoiding legislation (Hagendorff 2019), as noted earlier.

Notwithstanding their disadvantages, ethics guidelines and frameworks are likely to remain a key aspect of the AI ethics debate. Some of them are closely connected with professional bodies and associations, which can help in the implementation phase. Some professional bodies have provided specific guidance on AI and ethics (IEEE 2017, USACM 2017). In addition, they often include AI ethics questions as part of their broader remit. The Association for Computing Machinery (ACM), for example, the largest professional body in computing, has recently refreshed its code of conduct with a view to ensuring that it covers current challenges raised by AI (Brinkman et al. 2017). While professionalism may well have an important role to play in AI ethics, one important obstacle is that in computing, including AI, professionalism is much less well developed than in other areas, such as medicine and law, where professional governance has powers of enforcement that are missing in computing (Mittelstadt 2019).

Professional bodies often contribute to standardisation and, in some cases, are the owners of standards. In the area of AI there are currently several standardisation activities, notably ISO/IEC JTC 1/SC 42, which includes some references to ethical issues. The most prominent standardisation effort in terms of the ethical aspects of AI is being undertaken by the Institute of Electrical and Electronics Engineers (IEEE) in its P7000 family of standards (Peters et al. 2020).

Standardisation can be linked to certification, something that the IEEE has pioneered with its ethics certification programme for autonomous and intelligent systems. Standards can be made highly influential in various ways. One way is to legally require certification against a standard. This seems to be a key idea currently proposed by the European Commission (2020a) in its AI White Paper. If implemented, it would mean that AI systems of a pre-defined significant risk level would need to undergo certification to ensure ethical issues are appropriately addressed, an idea that appears to have significant support elsewhere (Krafft et al. 2020).

Standardisation can also influence or drive other activities by defining requirements and activities. A well-established example of this is standardisation in information security, where the ISO 27000 series defines best practice. Standardisation can provide technical and organisational guidance on a range of issues. The IEEE P7000 series is a good example. It aims to provide standardisation for specific issues such as privacy (P7002), algorithmic bias (P7003), safety (P7009) and transparency (P7001).

One type of guidance mechanism that standardisation can help with, but that can also draw on other long-standing sources, is development methodologies. This is the topic of IEEE P7000 (model process for addressing ethical concerns during systems design). The idea that ethical issues can and should be considered early on during the development process is now well established, and is an attempt to address the so-called Collingridge (1981) dilemma or the dilemma of control (see box).

The Collingridge Dilemma

Collingridge observed that it is relatively easy to intervene and change the characteristics of a technology early in its life cycle. However, at this point it is difficult to predict its consequences. Later, when the consequences become more visible, it is more difficult to intervene. This is a dilemma for those wanting to address ethical issues during the development process.

The Collingridge dilemma is not confined to AI. In the field of computing it is compounded by the interpretive flexibility and logical malleability of computing technologies, which are clearly features of AI as well. While the uncertainty about future uses of systems remains a fundamental issue that is impossible to resolve, there have been suggestions for how to address it at least to some degree. Many of these suggestions refer to development methodologies, and most go back to some type of value-sensitive design (Friedman et al. 2008, van Wynsberghe 2013). The idea behind these methodologies is generally to identify relevant values that should inform the development and use of a technology and then engage with relevant stakeholders in discussion on how this can be achieved.

The most prominent example of such a methodology is that of privacy by design (ICO 2008, Cavoukian 2009), which the GDPR now mandates under some circumstances as data protection by design (Hansen 2016). Attempts have been made to move beyond the specific issue of privacy and its implementation via data protection and to identify broader issues through ethics by design (Martin and Makoundou 2017, Beard and Longstaff 2018, Dignum et al. 2018).

Proposals for development methodologies also cover specific steps of the development life cycle, such as systems testing, for example through ethics penetration testing, an idea taken from computer security practice (Berendt 2019), or adversarial testing (WEF 2019). Winfield and Jirotka (2018) suggest transferring the idea of a black box, well known from the aviation industry, to autonomous systems. This would allow the course of events to be traced in the case of an incident, just as an aeroplane’s black box helps us understand the cause of an air traffic accident. In addition, there are now development methodologies that specifically aim to address the ethics of AI, such as the VCIO (Values, Criteria, Indicators, Observables) model suggested by the AIEI Group (Krafft et al. 2020) or Virginia Dignum’s ART principles for responsible AI (Accountability, Responsibility, Transparency) (Dignum 2019).
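As a purely illustrative sketch of the general idea behind such models, the following code represents a VCIO-style hierarchy in which an abstract value is broken down into criteria, indicators and checkable observables. The structure follows the general description in the text; the concrete value, criterion, indicator and observables are invented examples rather than content taken from the AIEI Group’s model.

```python
# Hypothetical sketch of a VCIO-style hierarchy (Values, Criteria, Indicators,
# Observables). All concrete content below is invented for illustration.

from dataclasses import dataclass
from typing import List


@dataclass
class Indicator:
    question: str
    observables: List[str]   # concrete evidence that can be checked


@dataclass
class Criterion:
    name: str
    indicators: List[Indicator]


@dataclass
class Value:
    name: str
    criteria: List[Criterion]


transparency = Value(
    name="Transparency",
    criteria=[
        Criterion(
            name="Disclosure of system use",
            indicators=[
                Indicator(
                    question="Are affected persons informed that an AI system is used?",
                    observables=["published notice", "user interface label"],
                ),
            ],
        ),
    ],
)

# Walking the hierarchy turns an abstract value into checkable items:
for criterion in transparency.criteria:
    for indicator in criterion.indicators:
        print(f"{transparency.name} / {criterion.name}: {indicator.question}")
        for obs in indicator.observables:
            print(f"  evidence: {obs}")
```

The design point such models make is that an abstract value only becomes assessable once it is linked, step by step, to observable evidence that developers, auditors or regulators can actually inspect.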

In addition, there is a rapidly growing set of tools to address various aspects of AI ethics (Morley et al. 2019). Some are published by groups associated with research funders, such as the Wellcome Data Lab (Mikhailov 2019), while others originate from non-governmental and civil society organisations, such as Doteveryone and its consequence scanning kit (TechTransformed 2019). Yet others are based at universities, such as the AI Now Institute, which published an algorithmic impact assessment (Reisman et al. 2018), and more come from professional organisations such as the UK Design Council with its Double Diamond (Design Council n.d.). Finally, some sets of tools to address AI ethics originate from companies like PWC, which published a practical guide to responsible artificial intelligence (PWC 2019).

In addition to these guidance mechanisms aimed specifically at providing support for dealing with the ethics challenges of AI, there are many further options originating in activities of science and technology research and reflection that can form part of the broader discourse of how to support AI ethics. These include activities such as the anticipation of future technologies and their ethical issues, some of which are closely linked to digital technology (Brey 2012, Markus and Mentzer 2014, Nordmann 2014), but they can also draw on the broader field of future and foresight studies (UNIDO 2005, Sardar 2010). Stakeholder dialogue and public engagement constitute another huge field of activity that will play a central role in AI ethics, drawing on large amounts of prior work to provide many methodologies (Engage2020 n.d.). A final point worth mentioning here is education, which plays a key role in many of the mitigation options. Teaching, training, awareness raising and educating are cornerstones of facilitating a political discourse and reaching policymakers, but also of eliciting a sense of responsibility from AI developers and deployers.

Table 5.1 summarises the mitigation options discussed in this chapter.

Table 5.1 Overview of options to mitigate ethical issues in AI

The table represents the ways in which AI ethics may be addressed, highlighting the topics mentioned in the text above. It illustrates key options but cannot claim that all strategies are covered, nor that the individual options listed for a particular branch are exhaustive. In fact, many of them, for example the AI ethics tools and the AI ethics frameworks, embrace dozens if not hundreds of alternatives. Highlighting key strands of the current debate nevertheless demonstrates the richness of the field. One final point that adds to the complexity is the set of stakeholders involved, which I will now address.

5.4 AI Ethics Stakeholders

As noted earlier, and following Freeman and Reed (1983), stakeholders are individuals or groups who are significantly affected by an action or potentially at risk. The concept is extensively used in the organisational literature to help organisations identify whom they need to consider when taking decisions or acting (Donaldson and Preston 1995, Gibson 2000).

There are methodologies for stakeholder identification and engagement which allow for a systematic and comprehensive analysis of stakeholders, including specific stakeholder analysis methods for information systems (Pouloudi and Whitley 1997). One challenge with regard to the identification of stakeholders of AI is that, depending on the meaning of the term “AI” used and the extent of the social consequences covered, most if not all human beings, organisations and governmental bodies are stakeholders. In this context the term loses its usefulness, as it no longer helps analysis or allows conclusions to be drawn.

It is nevertheless useful for the purposes of this book to consider AI stakeholders, as a review of stakeholders informs the overall understanding of the AI landscape and provides important support for the use of the ecosystems metaphor to describe AI. I therefore offer a brief overview of key stakeholder groups and categories, indicating their interests or possible actions, which will be referred to later during the discussion of how AI ecosystems can be shaped.

The categorisation I propose is between policy-oriented bodies, other organisations and individuals. These three groups have different roles in shaping, maintaining and interacting within AI ecosystems. Figure 5.3 gives an overview of the three main groups, including examples of the stakeholders who constitute them. The figure takes the form of a Venn diagram in order to indicate that the different groups are not completely separate but overlap considerably. An individual user, for example, may work in a stakeholder organisation and also be part of standardisation and policy development.

The first stakeholder category in the figure relates to policy. Policymakers and institutions that set policies relevant to AI, including research policy and technology policy, but also other relevant policy, such as policies governing liability regimes, have an important role in shaping how ethical and human rights issues concerning AI can be addressed. This includes international organisations such as the UN, the OECD and their subsidiary bodies, such as UNESCO and the International Telecommunication Union.

The European Union is highly influential in shaping policy within the EU member states, and many of its policies are also followed by non-EU policy bodies. International policy is important because it can drive national policy, where legally binding policy in the form of legislation and regulation is typically located. National parliaments and governments thus play a key role in all policy relating to AI ethics. Regulatory bodies that oversee the implementation of regulation also tend to be situated at the national level. Further public bodies that are key stakeholders in AI debates are research funding bodies, which can translate policy into funding strategies and implementation requirements. These are often part of public bodies, but they can also be separately funded, as in the case of charitable funders. In Figure 5.3, I have situated ethics bodies in the policy category. These ethics bodies include the EU’s High-Level Expert Group on AI, as well as national ethics committees and research ethics committees or institutional review boards, which translate general principles into research practice and oversee detailed implementation at the project level.

Fig. 5.3 Overview of AI stakeholders

The second stakeholder category suggested in Figure 5.3 is that of organisations. This group includes numerous and often very different members and could easily be broken down further into sub-categories. For the purposes of this book, the members of this second category are characterised by the fact that, as organisations, they have some level of internal structure and temporal continuity, and their main purpose is not to develop or implement international or governmental policies.

Key members of this stakeholder category are commercial organisations that play a role in the development, deployment and use of AI. This includes not only companies that develop and deploy AI on a commercial basis, but also corporate users and companies with a special role to play, for example insurers, which facilitate and stabilise liability relationships.

There are numerous organisations that are not in the business of making profits from AI but are involved in the AI value chain, in particular professional bodies, standardisation bodies and educational institutions. These must be included because they have an obvious relationship to some of the mitigation strategies discussed earlier, notably the use of professional standards, the integration of ethical considerations into standards, and the raising of awareness and knowledge through education. Similarly, media organisations play a crucial role in raising awareness of ethical issues and driving public discourse, which in turn may motivate policy development.

The third and final category of stakeholders in this overview is individuals. Policy bodies and organisations are made up of individuals and would cease to exist without individual members. Within the category of individuals, it is nevertheless important to highlight those whose characteristics may not be covered or represented in the other stakeholder groups but who still have a legitimate claim to be heard.

Some of these individual stakeholders correspond to organisational stakeholder groups. A developer may be a large profit-driven company, but AI applications can also be developed by a hobby technologist who has the expertise to build novel ideas or applications. Similarly, there are corporate end users of AI, but these tend to have different interests and motivations from individual end users. Activists and lay experts tend to contribute to the public discourse and thereby shape perceptions. But maybe the most important individual stakeholders, because they are most easily overlooked, are those who have no active interest in AI and do not seek to shape it, but are still affected by its existence. These may be individuals who do not have access to AI technologies or facilitating technologies and are therefore excluded from possible benefits, thus raising the problem of digital divides. Another example would be individuals who are subjected to the impact of AI but have no voice, no input and no choice in the matter. Examples of this group would be prisoners whose parole decisions are made using AI or patients whose diagnosis and treatment depend on AI. These are groups that stand to benefit substantially from AI or possibly suffer from it, and thus fulfil the definition of “stakeholder”, but often have no way of making their voices heard.

The point of this overview of AI stakeholders has been to demonstrate the complexity of the stakeholder population and indicate the manifold and often contradictory interests that these stakeholders may have. This view of the stakeholder distribution adds to the earlier views of the definition of “AI”, the review of ethical issues and mitigation strategies. It shows that there is no simple and straightforward way to drive change and promote flourishing. It is important, for example, to understand that AI can lead to biases and discrimination and that the workings of AI may be non-transparent. But, in order to come to a sound ethical assessment, one needs to understand the detailed working of the technology in its context of use.

In order to say anything useful about how ethical aspects of AI at a more general level can be understood, evaluated and dealt with, it is therefore important to take a different perspective: one that allows us to look at all the various aspects and components, but also allows for a higher-level overview. I therefore suggest that the ethics of AI debate could benefit from a systems-level view, which I introduce in the next chapter.