Introduction

As the availability of data on almost every aspect of life, and the sophistication of machine learning (ML) techniques, have increased (Lepri et al. 2018), so have the opportunities for improving both public and private life (Floridi and Taddeo 2016). Society has greater control than it has ever had over outcomes related to: (1) who people can become; (2) what people can do; (3) what people can achieve; and (4) how people can interact with the world (Floridi et al. 2018). However, growing concerns about the ethical challenges posed by the increased use of ML in particular, and Artificial Intelligence (AI) more generally, threaten to halt the advancement of beneficial applications, unless they are handled properly.

Balancing the tension between supporting innovation, so that society’s right to benefit from science is protected (Knoppers and Thorogood 2017), and limiting the potential harms associated with poorly designed AI (and specifically ML in this context), summarised in Table 1, is challenging. ML algorithms are powerful socio-technical constructs (Ananny and Crawford 2018), which raise concerns that are as much (if not more) about people as they are about code (see Table 1) (Crawford and Calo 2016). Enabling the so-called dual advantage of ‘ethical ML’—so that the opportunities are capitalised on, whilst the harms are foreseen and minimised or prevented (Floridi et al. 2018)—requires asking difficult questions about design, development, deployment, practices, uses and users, as well as the data that fuel the whole life-cycle of algorithms (Cath et al. 2018). Lessig was right all along: code is both our greatest threat and our greatest promise (Lessig and Lessig 2006).

Table 1 Ethical concerns related to algorithmic use based on the ‘map’ created by Mittelstadt et al. (2016)

Rising to the challenge of designing ‘ethical ML’ is both essential and possible. Indeed, those who claim that it is impossible are falling foul of the is-ism fallacy, confusing the way things are with the way things can be (Lessig and Lessig 2006), or indeed should be. It is possible to design an algorithmically-enhanced society pro-ethicallyFootnote 1 (Floridi 2016b), so that it protects the values, principles, and ethics that society thinks are fundamental (Floridi 2018). This is the message that social scientists, ethicists, philosophers, policymakers, technologists, and civil society have been delivering in a collective call for the development of appropriate governance mechanisms (D’Agostino and Durante 2018) that will enable society to capitalise on the opportunities, whilst ensuring that human rights are respected (Floridi and Taddeo 2016), and fair and ethical decision-making is maintained (Lipton 2016).

The purpose of the following pages is to highlight the part that technologists, or ML developers, can play in this broader conversation, and to indicate where further research is urgently needed. Specifically, the section ‘Moving from Principles to Practices’ discusses how efforts to date have been too focused on the ‘what’ of ethical AI (i.e. debates about principles and codes of conduct) and not enough on the ‘how’ of applied ethics. The ‘Methodology’ section outlines the research planned to contribute to closing this gap between principles and practice, through the creation of an ‘applied ethical AI typology,’ and the methodology for its creation. The section ‘Framing the Results’ provides the theoretical framework for interpreting the results. The ‘Discussion of Initial Results’ section summarises what the typology shows about the uncertain utility of the tools and methods identified, as well as their uneven distribution. The section ‘A Way Forward’ argues that there is a need for a more coordinated effort, from multi-disciplinary researchers, innovators, policymakers, citizens, developers and designers, to create and evaluate new tools and methodologies, in order to ensure that there is a ‘how’ for every ‘what’ at each stage of the machine learning pipeline. The penultimate section lists some of the limitations of this study. Finally, the last section concludes that the suggested recommendations will be challenging to achieve, but that it would be imprudent not to try.

Moving from Principles to Practices

On 22nd May 2019, the Organisation for Economic Co-operation and Development (OECD) announced that its thirty-six member countries, along with an additional six (Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania), had formally agreed to adopt what the OECD claims to be the first intergovernmental standard on Artificial Intelligence (AI) (OECD 2019a). Designed to ensure AI systems are robust, safe, fair and trustworthy, the standard consists of five complementary value-based principles and five implementable recommendations to policymakers.

The values and recommendations are not new. Indeed, the OECD’s Recommendation of the Council on Artificial Intelligence (OECD 2019b) is only the latest among a list of more than 70 documents, published in the last 3 years, which make recommendations about the principles of the ethics of AI (Spielkamp et al. 2019; Winfield 2019). This list includes documents produced by industry (Google,Footnote 2 IBM,Footnote 3 Microsoft,Footnote 4 IntelFootnote 5), government (Montreal Declaration,Footnote 6 Lords Select Committee,Footnote 7 European Commission’s High-Level Expert GroupFootnote 8), and academia (Future of Life Institute,Footnote 9 IEEE,Footnote 10 AI4PeopleFootnote 11). The hope of the authors of these documents is that the principles put forward can, as abstractions (Anderson and Anderson 2018), act as normative constraints (Turilli 2007) on the ‘dos’ and ‘don’ts’ of algorithmic use in society.

As Jobin et al. (2019) and Floridi (2019c) point out, this intense interest from such a broad range of stakeholders reflects not only the need for ethical guidance, but also the desire of those different parties to shape the ‘ethical AI’ conversation around their own priorities. This issue is not unique to debates about the components of ethical ML; it is something that the international human rights community has grappled with for decades, as disagreements over what rights are, how many there are, what they are for, what duties they impose on whom, and which values or human interests they are supposed to protect (Arvan 2014) have never been resolved. It is significant, therefore, that there seems to be an emerging consensus amongst the members of the ethical ML community with regard to what exactly ethical ML should aspire to be.

A review of 84 ethical AI documents by Jobin et al. (2019) found that, although no single principle featured in all of them, the themes of transparency, justice and fairness, non-maleficence, responsibility and privacy appeared in over half. Similarly, a systematic review of the literature on ethical technology revealed that the themes of privacy, security, autonomy, justice, human dignity, control of technology and the balance of powers were recurrent (Royakkers et al. 2018). As has been argued elsewhere, taken together these themes ‘define’ ethically-aligned ML as that which is (a) beneficial to, and respectful of, people and the environment (beneficence); (b) robust and secure (non-maleficence); (c) respectful of human values (autonomy); (d) fair (justice); and (e) explainable, accountable and understandable (explicability). Given this emergent consensus in the literature, it is unsurprising that these are also the themes central to the OECD standard. What is perhaps more surprising is that this agreement around the basic principles that ethical ML should meet is no longer limited to Europe and the Western world. Just three days after the OECD publication, the Beijing Academy of Artificial Intelligence (BAAI), an organisation backed by the Chinese Ministry of Science and Technology and the Beijing municipal government, released its fifteen AI principles for: (a) research and development; (b) use; and (c) the governance of AI (Knight 2019), which, when read in full, bear a remarkable similarity to the common framework (see Table 2).

Table 2 Comparison of ethical principles in recent publications demonstrating the emerging consensus of ‘what’ ethical AI should aspire to be

This fragileFootnote 12 consensus means that there is now the outline of a shared foundation upon which one can build, and that can be used as a benchmark to communicate expectations and evaluate deliverables. Co-design in AI would be more difficult without this common framework. It is, therefore, a necessary building block in the creation of an environment that fosters ethical, responsible, and beneficial ML, especially as it also indicates the possibility of a time when the distractive risk of ethics shoppingFootnote 13 (Floridi 2019c) will be lessened. Yet, challenges remain.

The availability of these ‘agreed’ principles supports, but does not yet bring about, actual change in the design of algorithmic systems (Floridi 2019a). As Hagendorff (2019) notes, almost all of the guidelines produced to date suggest that technical solutions exist, but very few provide technical explanations. As a result, developers are becoming frustrated by how little help is offered by highly abstract principles when it comes to the ‘day job’ (Peters and Calvo 2019). This is reflected in the fact that 79% of tech workers report that they would like practical resources to help them with ethical considerations (Miller and Coldicott 2019). Without this more practical guidance, other risks such as ‘ethics bluewashing’Footnote 14 and ‘ethics shirking’Footnote 15 remain (Floridi 2019c).

Such risks, associated with a lack of practical guidance on how to produce ethical ML, make it clear that the ethical ML community needs to embark on the second phase of AI ethics: translating between the ‘what’ and the ‘how.’ This is likely to be hard work. The gap between principles and practice is large, and it is widened by complexity, variability, subjectivity, and a lack of standardisation, including variable interpretation of the ‘components’ of each of the ethical principles (Alshammari and Simpson 2017). Yet, it is not impossible to close if the right questions are asked (Green 2018; Wachter et al. 2017) and closer attention is paid to how the design process can influence (Kroll 2018) whether an algorithm is more or less ‘ethically-aligned.’

The sooner we start doing this, the better. If we do not take on the challenge and develop usable, interpretable and efficacious mechanisms (Abdul et al. 2018) for closing this gap, the lack of guidance may (a) result in the costs of ethical mistakes outweighing the benefits of ethical success (even a single critical ‘AI’ scandal could stifle innovation); (b) undermine public acceptance of algorithmic systems; (c) reduce adoption of algorithmic systems; and (d) ultimately create a scenario in which society incurs significant opportunity costs (Cookson 2018). Thus, the aim of this research project is to identify the methods and tools already available to help developers, engineers, and designers of ML reflect on and apply ‘ethics’ (Adamson et al. 2019), so that they may know not only what to do or not to do, but also how to do it, or how to avoid doing it (Alshammari and Simpson 2017). We hope that the results of this research may be easily applicable to other branches of AI.

Methodology

With the aim of identifying the methods and tools available to help developers, engineers and designers of ML reflect on and apply ‘ethics’, the first task was to design a typology, for the very practically minded ML community (Holzinger 2018), that would ‘match’ the tools and methods identified to the ethical principles outlined in Table 2 (summarised as beneficence, non-maleficence, autonomy, justice, and explicability).

To create this typology, and inspired by Saltz and Dewar (2019), who produced a framework intended to help data scientists consider ethical issues at each stage of a project, the ethical principles were combined with the stages of algorithmic development outlined in the overview of the Information Commissioner’s Office (ICO) auditing framework for Artificial Intelligence and its core components,Footnote 16 as shown in Table 3. The intention is that this encourages ML developers to move back and forth between design decisions and ethical principles regularly.

Table 3 ‘Applied AI Ethics’ Typology comprising ethical principles and the stages of algorithmic development
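Although the typology itself is presented as a table, its structure can be made concrete for the practically minded reader as a simple lookup from (principle, development stage) pairs to candidate tools. The following minimal Python sketch is purely illustrative: the stage labels approximate those used in the typology, and the example entries are hypothetical placeholders rather than items from the actual typology.

```python
# Illustrative sketch only: the 'Applied AI Ethics' typology viewed as a mapping
# from (ethical principle, development stage) to a list of candidate tools/methods.
# Stage labels approximate those in Table 3; the entries below are hypothetical.
from collections import defaultdict

PRINCIPLES = ["beneficence", "non-maleficence", "autonomy", "justice", "explicability"]
STAGES = [
    "business and use-case development",
    "design",
    "training and test data procurement",
    "building",
    "testing",
    "deployment",
    "monitoring",
]

typology = defaultdict(list)  # (principle, stage) -> [tool names]

# Hypothetical example entries:
typology[("non-maleficence", "design")].append("privacy-by-design checklist")
typology[("explicability", "testing")].append("post hoc explanation library")


def tools_for(principle: str, stage: str) -> list:
    """Return the tools catalogued for a given principle at a given development stage."""
    assert principle in PRINCIPLES and stage in STAGES
    return typology[(principle, stage)]


# A developer at the testing stage concerned with explicability would query:
print(tools_for("explicability", "testing"))
```

An empty list returned by such a query corresponds directly to a white space in the typology, which is the signal, discussed below, of where further tool development is needed.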

The second task was to identify the tools and methods, and the companies or individuals researching and producing them, to fill the typology. There were a number of different ways this could have been done. For example, Vakkuri et al. (2019) sought to answer the question ‘what practices, tools or methods, if any, do industry professionals utilise to implement ethics into AI design and development?’ by conducting interviews at five companies that develop AI systems in different fields. However, whilst analysis of the interviews revealed that the developers were aware of the potential importance of ethics in AI, the companies seemed to provide them with no tools or methods for implementing ethics. Based on a hypothesis that these findings did not imply the non-existence of applied-ethics tools and methods, but rather a lack of progress in the translation of available tools and methods from academic literature or early-stage development and research, to real-life use, this study used the traditional approach of providing an overarching assessment of a research topic, namely a literature review (Abdul et al. 2018).

Searches were conducted in Scopus,Footnote 17 arXivFootnote 18 and PhilPapers,Footnote 19 as well as via Google Search. The Scopus, arXiv and Google Search queries used the terms outlined in Table 4. The PhilPapers search was unstructured, given the nature of the platform; instead, the categories also shown in Table 4 were reviewed. The original searches were run in February 2019, but weekly alerts were set for all searches and reviewed up until mid-July 2019. Every result (of which there were originally over 1000) was checked for relevance—either in terms of theoretical framing or in terms of the use of the tool—actionability by ML developers, and generalisability across industry sectors. In total, 425 sourcesFootnote 20 were reviewed. They provide a practical or theoretical contribution to the answer to the question: ‘how to develop an ethical algorithmic system?’Footnote 21

Table 4 Showing the search terms used to search Scopus, arXiv and Google and the categories reviewed on PhilPapers

The third, and final, task was to review the recommendations, theories, methodologies, and tools outlined in the reviewed sources, and to identify where they may fit in the typology. To do this, each of the high-level principles (beneficence, non-maleficence, autonomy, justice and explicability) was translated into tangible system requirements that reflect the meaning of the principle. This is the approach taken by the EU’s High-Level Expert Group on AI and outlined in Chapter II of Ethics Guidelines for Trustworthy AI: Realising Trustworthy AI, which “offers guidance on the implementation and realisation of Trustworthy AI, via a list of (seven) requirements that should be met, building on the principles” (p. 35 European Commission 2019).

This approach is also used in the disciplinary ethical guidance produced for internet-mediated researchers from the Belmont Report (Anabo et al. 2019), and by La Fors et al. (2019), who sought to integrate existing design-based ethical approaches for new technologies by matching lists of values: the practical abstraction from mid-level ethics (principles) to what Hagendorff (2019) calls ‘microethics.’ This translation is a process that gradually reduces the indeterminacy of abstract norms to produce desiderata for a ‘minimum-viable-ethical-(ML) product’ (MVEP) that can be used by people who have various disciplinary backgrounds, interests and priorities (Jacobs and Huldtgren 2018). The outcome of this translation process is shown in Table 5.

Table 5 Showing the connection between high-level ethical principles and tangible system requirements, as adapted from the methodology outlined in Chapter II of the European Commission’s “Ethics Guidelines for Trustworthy AI”

Framing the Results

The full typology is available here http://tinyurl.com/appliedaiethics. The purpose of presenting it is not to imply that it is ‘complete,’ nor that the tools and methodologies highlighted are the best, or indeed the only, means of ‘solving’ each of the individual ethical problems. How to apply ethics to the development of ML is an open question that can be solved in a multitude of different ways at different scales and in different contexts (Floridi 2019a). It would, for example, be entirely possible to complete the process using a different set of principles and requirements. Instead, the goal is to provide a synthesis of what tools are currently available to ML developers to encourage the progression of ethical AI from principles to practice and to signal clearly, to the ‘ethical AI’ community at large, where further work is needed.

Additionally, the purpose of presenting the typology is not to give the impression that the tools act as means of translating the principles into definitive ‘rules’ that technology developers should adhere to, or that developers must always complete one ‘task’ from each of the boxes. That would only promote ethics by ‘tick-box’ (Hagendorff 2019). Instead, the typology is intended eventually to become an online searchable database, so that developers can look for the appropriate tools and methodologies for their given context and use them to enable a shift from a prescriptive ‘ethics-by-design’ approach to a dialogic, pro-ethical design approach (Anabo et al. 2019; Floridi 2019b).

In this sense, the tools and methodologies represent a pragmatic version of Habermas’s discourse ethicsFootnote 22 (Mingers and Walsham 2010). In his theory, Habermas (1983, 1991) argues that morals and norms are not ‘set’ in a top-down fashion but emerge from a process in which those with opposing views rationally consider each other’s arguments, give reasons for their position and, based upon the greater understanding that results, reassess their position until all parties involved reach a universally agreeable decision (Buhmann et al. 2019). This is an approach commonly used in both business and operational research ethics, where questions of ‘what should we do?’ (as opposed to ‘what can we do?’) arise (Buhmann et al. 2019; Mingers 2011). It is a rationalisation process that involves a fair consideration of the practical, the good and the just, and it normally relies heavily on language (discussion), for both the emergence of agreed-upon norms or standards and their reproduction. In the present scenario of developers rationalising ML design decisions to ensure that they are ethically-optimised, the tools and methods in the typology replace the role of language and act as the medium for identifying, checking, creating and re-examining ideas and giving fair consideration to differing interests, values and norms (Heath 2014; Yetim 2019). For example, the data nutrition tool (Holland et al. 2018) provides a means of prompting a discussion and re-evaluation of the ethical implications of using a specific dataset for an ML development project, and the audit methodologies of Diakopoulos (2015) ensure that external voices, who may have an opposing view as to whether or not an ML system in use is ethically-aligned, have a mechanism for questioning the rationale of design decisions and requesting their change if necessary. It is within this frame that we present an overview of our findings in the next section.

Discussion of Initial Results

Interpretation of the results of the literature review and the resulting typology is likely to be context specific. Those with different disciplinary backgrounds (engineering, moral philosophy, sociology, etc.) will see different patterns, and different meanings in these patterns. This kind of multidisciplinary reflection on what the presence or absence of different tools and methods, and their function, might mean is to be encouraged. To start the conversation, this section highlights the following three observations:

  1. an overreliance on ‘explicability’;

  2. a focus on the need to ‘protect’ the individual over the collective; and

  3. a lack of usability.

They are interrelated, but for the sake of simplicity, let us analyse each separately.

Explicability as the All-Encompassing PrincipleFootnote 23

To start with the most obvious observation: the availability of tools and methods is not evenly distributed across the typology, either in terms of the ethical principles or in terms of the stages of development. For example, whilst a developer looking to ensure their ML algorithm is ‘non-maleficent’ has a selection of tools available to them for each development stage—as highlighted in Table 6—the tools and methods designed to enable developers to meet the principle of ‘beneficence’ are almost all intended to be used during the initial planning stages of development (i.e. the business and use-case development and design phases). However, the most noticeable ‘skew’ is towards post hoc explanations, with those seeking to meet the principle of explicability during the testing phase having the greatest range of tools and methods from which to choose.

Table 6 Applied AI ethics typology with illustrative non-maleficence example. A developer looking to ensure their ML solution meets the principle of non-maleficence can start with the foundational principles of privacy by design (Cavoukian et al. 2010) to guide ideation appropriately, use techniques such as data minimisation (Antignac et al. 2016), training for adversarial robustness (Kolter and Madry 2018), and decision-making verification (Dennis et al. 2016) in the train-build-test phases, and end by launching the system with an accompanying privacy audit procedure (Makri and Lambrinoudakis 2015)
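To give a concrete, if deliberately toy, flavour of one entry in this non-maleficence pipeline, the sketch below implements a minimal FGSM-style adversarial training loop for a logistic-regression classifier in NumPy. It illustrates only the general idea of training for adversarial robustness; it is not the method of Kolter and Madry (2018), nor any specific tool from the typology, and the data are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(X, y, w, eps):
    """FGSM-style step: nudge each input in the direction that increases its logistic loss."""
    grad_x = (sigmoid(X @ w) - y)[:, None] * w[None, :]  # d(loss_i)/d(x_i)
    return X + eps * np.sign(grad_x)

def adversarial_train(X, y, eps=0.1, lr=0.1, epochs=200):
    """Train logistic regression on clean inputs augmented with adversarial perturbations."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        X_aug = np.vstack([X, fgsm_perturb(X, y, w, eps)])
        y_aug = np.concatenate([y, y])
        grad_w = X_aug.T @ (sigmoid(X_aug @ w) - y_aug) / len(y_aug)
        w -= lr * grad_w
    return w

# Synthetic two-class data (placeholder for a real training set).
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w = adversarial_train(X, y)
accuracy = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"training accuracy after adversarial training: {accuracy:.2f}")
```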

There are likely to be several reasons for this, but two stand out. The first, and simpler, is that the ‘problem’ of ‘interpreting’ an algorithmic decision seems tractable from a mathematical standpoint, so the principle of explicability has come to be seen as the most suitable for a technical fix (Hagendorff 2019). The second is that ‘explicability’ is not, from a moral philosophy perspective, a moral principle like the other four. Instead, it can be seen as a second-order principle, which has come to be of vital importance in the ethical-ML community because, to a certain extent, it is linked with the other four principles.Footnote 24 Indeed, it is argued that if a system is explicable (explainable and interpretable) it is inherently more transparent and therefore more accountable in terms of its decision-making properties and the extent to which they include human oversight and are fair, robust and justifiable (Binns et al. 2018; Cath 2018; Lipton 2016).

Assuming temporarily that this is indeed the case,Footnote 25 and that by dint of being explicable an ML system can more easily meet the principles of beneficence, non-maleficence, autonomy and justice, then the fact that the ethical ML community has focused so extensively on developing tools for explanations may not seem problematic. However, the majority of tools and methods concentrated at the intersection of explicability and testing are primarily statistical in nature, so this would be a very mechanistic view: such ‘solutions’—e.g. LIME (Ribeiro et al. 2016), SHAP (Lundberg and Lee 2017), Sensitivity Analysis (Oxborough et al. 2018)—do not really succeed in helping developers provide meaningful explanations (Edwards and Veale 2018) that give individuals greater control over what is being inferred about them from their data. As such, the existence of these tools is at most necessary but not sufficient.
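To illustrate what ‘primarily statistical’ means in practice, the following sketch re-implements the core idea behind local surrogate explainers such as LIME: perturb a single instance, query the black-box model, and fit a proximity-weighted linear model whose coefficients serve as the ‘explanation.’ It is a simplified illustration of the general technique, not the LIME or SHAP libraries themselves, and the dataset and model are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Placeholder black-box model and data; any classifier exposing predict_proba would do.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def local_surrogate_explanation(instance, model, n_samples=2000, kernel_width=1.0):
    """LIME-style explanation: fit a proximity-weighted linear surrogate around one instance."""
    # 1. Perturb the instance with Gaussian noise scaled to each feature's spread.
    perturbed = instance + rng.normal(scale=X.std(axis=0), size=(n_samples, len(instance)))
    # 2. Query the black box for its predicted probability of the positive class.
    preds = model.predict_proba(perturbed)[:, 1]
    # 3. Weight perturbed points by their proximity to the original instance.
    distances = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    # 4. Fit a regularised linear surrogate; its coefficients act as local feature attributions.
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_

print("local feature attributions:", np.round(local_surrogate_explanation(X[0], black_box), 3))
```

The output is a local, per-instance approximation: useful for a developer debugging a model, but, as argued above, not in itself a meaningful explanation that gives the affected individual any greater control over what is inferred about them.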

From a more humanistic, and realistic, perspective, in order to satisfy all five principles a system needs to be designed from the very beginning to be a transparent sociotechnical system (Ananny and Crawford 2018). To achieve this level of transparency, accountability or explicability, it is essential that those analysing a system are able to “understand what it was designed to do, how it was designed to do that, and why it was designed in that particular way instead of some other way” (Kroll 2018). This kind of scrutiny will only be possible through a combination of tools or processes that facilitate auditing, transparent development, education of the public, and social awareness of developers (Burrell 2016). As such, there should ideally be tools and methods available for each of the boxes in the typology, accepting that some areas of the typology may be more significant for ML practitioners than others.

Furthermore, the availability of tools and methods across a variety of typology areas is also important in the context of culturally and contextually specific ML ethics. Not all of the principles will be of equal importance in all contexts. For example, in the case of national security systems, non-maleficence may be of considerably higher importance than explicability. If the community prioritises the development of tools and methods for one of the principles over the others, it will be denying itself the opportunity for such flexibility.

An Individual Focus

The next observation of note is that few of the available tools surveyed provide meaningful ways to assess, and respond to, the impact that the data processing involved in an ML algorithm has on an individual, and even fewer address the impact on society as a whole (Poursabzi-Sangdeh et al. 2018). This is evident from the very sparsely populated ‘deployment’ column of the typology. Its emptiness implies that the need for pro-ethically designed human–computer interaction (at an individual level) or networks of ML systems (at a group level) has been paid little heed. This is likely because it is difficult to translate complex human behaviour into design tools that are simple to use and generalisable.

This might not seem particularly important, but the impact it has on the overall acceptance of AI in society could be significant. For example, it is unlikely that counterfactual explanationsFootnote 26 (i.e. if input variable x had been different, the output variable y would have been different as well)—although important for many reasons—will be sufficient to improve the interpretability of recommendations made by black-box systems for the average member of the public or the technical community. If such methods become the de facto means of providing explanations, the extent to which the ‘algorithmic society’ is interpretable to the general public will be very limited. Moreover, counterfactual explanations could easily be embraced by actors uninterested in providing factual explanations, because the counterfactual ones provide a vast menu of options, which may easily decrease the level of responsibility of the actor choosing among them. For example, if a mortgage provider does not offer a mortgage, the factual reason may be a bias, for example against the gender of the applicant, but the provider could choose from a vast menu of innocuous, counterfactual explanations—if some variable x had been different the mortgage might have been provided—e.g., a much higher income, more collateral, a lower loan amount, and so forth, without ever mentioning the factual cause, i.e. the gender of the applicant. All this could considerably limit the level of trust people are willing to place in such systems.
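The ‘vast menu’ problem can be made concrete with a toy sketch: a naive search over single-feature changes that flip a model’s decision typically returns several candidate counterfactuals, and nothing in the method obliges the provider to report the one corresponding to the factual cause. Everything below is a hypothetical illustration: the feature names, the deliberately biased synthetic data, and the search procedure are our own assumptions, not a tool from the typology.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical mortgage-style features; the data and the bias are synthetic.
FEATURES = ["income", "collateral", "loan_amount", "applicant_gender"]
X = rng.normal(size=(1000, 4))
# A deliberately biased labelling rule: 'applicant_gender' strongly influences approval.
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] - 0.4 * X[:, 2] + 1.0 * X[:, 3] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def single_feature_counterfactuals(x, model, step=0.25, max_change=5.0):
    """Enumerate minimal single-feature changes that flip a rejection into an approval."""
    menu = []
    for i, name in enumerate(FEATURES):
        for direction in (+1.0, -1.0):
            for delta in np.arange(step, max_change, step):
                x_cf = x.copy()
                x_cf[i] += direction * delta
                if model.predict(x_cf.reshape(1, -1))[0] == 1:
                    menu.append((name, round(direction * delta, 2)))
                    break  # record only the smallest flipping change in this direction
    return menu

rejected_applicant = X[model.predict(X) == 0][0]
print(single_feature_counterfactuals(rejected_applicant, model))
# Several counterfactuals typically exist; a provider could cite 'higher income' or
# 'lower loan amount' without ever mentioning that gender was the factual driver.
```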

This potential threat to trust is further heightened by the fact that the lack of attention paid to impact means that ML developers are currently hampered in their ability to develop systems that promote users’ (individual or group) autonomy. For example, there is currently an assumption that prediction = decision, and little research has been done (in the context of ML) on how people translate predictions into actionable decisions. As such, tools that, for example, help developers pro-ethically design solutions that do not overly restrict the user’s options in acting on a prediction (i.e. tools that promote the user’s autonomy) are in short supply (Kleinberg et al. 2017). If users feel as though their decisions are being curtailed and controlled by systems that they do not understand, it is very unlikely that these systems will meet the condition of social acceptability, never mind the condition of social preferability, which should be the aim for truly ethically designed ML (Floridi and Taddeo 2016).

A Lack of Usability

Finally, the tools and methods included in the typology are positioned as discourse aids, designed to facilitate and document rational decisions about trade-offs in the design process that may make an ML system more or less ethically-aligned. It is possible to see the potential for the tools identified to play this role. For example, at the “beneficence → use-case → design” intersection, a number of tools are highlighted to help elicit social values. These include the responsible research and innovation methodology employed by the European Commission’s Human Brain Project (Stahl and Wright 2018), the field guide to human-centred design (ideo.org 2015) and Involve and DeepMind’s guidance on stimulating effective public engagement on the ethics of Artificial Intelligence (Involve & DeepMind 2019). Such tools and methods could be used to help designers pro-ethically deal with value pluralism (i.e. variation in values across different population groups). However, the vast majority of these tools and methods are not actionable, as they offer little help on how to use them in practice (Vakkuri et al. 2019). Even when open-source code libraries are available, documentation is often limited, and the skill level required for use is high.

This overarching lack of usability of the tools and methods highlighted in the typology means that, although they are promising, they require more work before being ‘production-ready.’ As a result, applying ethics still requires considerable effort on the part of ML developers, undermining one of the main aims of developing and using technologically-based ‘tools’: to remove friction from applied ethics. Furthermore, until these tools are embedded in practice and tested in the ‘real world,’ it is extremely unclear what impact they will have on the overall ‘governability’ of the algorithmic ecosystem. For example, Binns (2018a) asks how an accountable system will actually be held accountable for an ‘unfair’ decision in a way that is acceptable to all. This makes it almost impossible to measure the impact, ‘define success,’ and document the performance (Mitchell et al. 2019) of a new design methodology or tool. As a result, there is no clear problem statement (and therefore no clear business case) that the ML community can use to justify time and financial investment in developing much-needed tools and techniques that truly enable pro-ethical design. Consequently, there is no guarantee that the so-called discursive devices do anything other than help the groups in society who already have the loudest voices embed and protect their values in design tools, and then in the resultant ML systems.

A Way Forward

Social scientists (Matzner 2014) and political philosophers (from Rousseau and Kant, to Rawls and Habermas) (Binns 2018b) are used to dealing with the kind of plurality and subjectivity informing the entire ethical ML field (Bibal and Frénay 2016). Answering questions such as what happens when individual-level and group-level ‘ethics’ interact, and what key terms such as ‘fairness,’ ‘accountability,’ ‘transparency’ and ‘interpretability’ actually mean when there is currently a myriad of definitions (Ananny and Crawford 2018; Bibal and Frénay 2016; Doshi-Velez and Kim 2017; Friedler et al. 2016; Guidotti et al. 2018; Kleinberg et al. 2016; Overdorf et al. 2018; Turilli and Floridi 2009), is standard fare for individuals with social science, economics, philosophy or legal training. This is why Nissenbaum (2004) argues for a contextual account of privacy, one that recognises the varying nature of informational norms (Matzner 2014), and why Kemper and Kolkman (2018) state that transparency is only meaningful in the context of a defined critical audience.

The ML developer community, in contrast, may be less used to dealing with this kind of difficulty, and more used to scenarios where there is at least a seemingly quantifiable relationship between input and output. As a result, the existing approaches to designing and programming ethical ML fail to resolve what Arvan (2018) terms the moral-semantic trilemma, as almost all tools and methods highlighted in the typology are either too semantically strict, too semantically flexible, or overly unpredictable (Arvan 2018).

Bringing multi-disciplinary researchers into the development process of pro-ethical design tools and methodologies will be essential. A multi-disciplinary approach will help the ethical ML community overcome obstacles concerning social complexity, embrace uncertainty, and accept that: (1) AI is built on assumptions; (2) human behaviour is complex; (3) algorithms can have unfair consequences; (4) algorithmic predictions can be hard to interpret (Vaughan and Wallach 2016); (5) trade-offs are usually inevitable; and (6) positive, ethical features are open to progressive increase, that is, an algorithm can be increasingly fair, and fairer than another algorithm or a previous version, but it makes no sense to say that it is fair or unfair in absolute terms (compare this to the case of speed: it makes sense to say that an object is moving quickly, or that it is fast or faster than another, but not that it is fast in absolute terms). The resulting collaborations are likely to be highly beneficial for the development of applied ethical tools and methodologies for at least three reasons.

First, it will help ensure that the tools and methods developed protect value-pluralism not only in silico (i.e. the pluralistic values of developers) but also in society. Embracing uncertainty and disciplinary diversity will naturally encourage ML experts to develop tools that facilitate more probing and open (i.e. philosophical) questions (Floridi 2019b), which will lead to more nuanced and reasoned answers, and hence decisions about why and when certain trade-offs, for example between accuracy and interpretability (Goodman and Flaxman 2017), are justified, based on factors such as proportionality to risk (Holm 2019).

Second, it will encourage a more flexible and reflexive approach to applied ethics that is more in keeping with the way ML systems are actually developed: it is not think and then code, but rather think and code. In other words, it will accelerate the move away from the ‘move fast and break things’ approach towards an approach of ‘make haste slowly’ (festina lente) (Floridi 2019a).

Finally, it would also mitigate a significant risk—posed by the current sporadic application of ethical-design tools and/or methods during different development stages—of the ethical principles having been written into the business and use-case, but coded out by the time a system gets to deployment.

To enable developers to embrace this vulnerable uncertainty, it will be important to promote the development of tools, like DotEveryone’s agile consequence scanning event (DotEveryone 2019) and the Responsible Double Diamond ‘R2D2’ (Peters and Calvo 2019), that prompt developers to reflect on the impacts (both direct and indirect) of the solutions they are developing on the ‘end user’, and on how these impacts can be altered by seemingly minor design decisions at each stage of development. In other words, ML developers should regularly:

  (a) look back and ask: ‘if I was abiding by ethical principle x in my design then, am I still abiding by it now?’ (as encouraged by Wellcome Data Lab’s agile methodology (Mikhailov 2019)); and

  (b) look forward and ask: ‘if I am abiding by ethical principle x in my design now, should I continue to do so, and how?’, by using foresight methodologies (Floridi and Strait Forthcoming; Taddeo and Floridi 2018), such as AI Now’s Algorithmic Impact Assessment Framework (Reisman et al. 2018).

Taking this approach recognises that, in a digital context, ethical principles are not simply either applied or not applied, but are regularly re-applied, applied differently, applied better, or ignored, as algorithmic systems are developed, deployed, configured (Ananny and Crawford 2018), tested, revised and re-tuned (Arnold and Scheutz 2018).

This approach to applied ML ethics, of regular reflection and application, will rely heavily on (i) the creation of more tools, especially to fill the white spaces of the typology (for the reasons discussed in the previous section), and (ii) the acceleration of tools’ maturity from research labs into production environments. To achieve (i)–(ii), society needs to come together in communities of multi-disciplinary researchers (Cath et al. 2017), including innovators, policymakers, citizens, developers and designers (Taddeo and Floridi 2018), to foster the development of: (1) common knowledge and understanding; and (2) a common goal to be achieved from the development of tools and methodologies for applied AI ethics (Durante 2010). These outputs will provide a reason, a mechanism, and a consensus to coordinate the efforts behind tool development. Ultimately, this will produce better results than the current approach, which allows a ‘thousand flowers to bloom’ but fails to create tools that fill in the gaps (this is a typical ‘intellectual market’ failure), and it may encourage competition to produce preferable options. The opportunity that this presents is too great to be delayed: the ML research community should start collaborating now with a specific focus on:

  1. the development of a common language;

  2. the creation of tools that ensure people, as individuals, groups and societies, are given an equal and meaningful opportunity to participate in the design of algorithmic solutions at each stage of development;

  3. the evaluation of the tools that are currently in existence, so that what works, what can be improved, and what needs to be developed can be identified;

  4. a commitment to reproducibility, openness, and sharing of knowledge and technical solutions (e.g. software), also in view of satisfying (2) and supporting (3);

  5. the creation of ‘worked examples’ of how tools have been used to satisfy one of the principles at each stage of development, and how consistency was maintained throughout the use of different tools; and

  6. the evaluation and creation of pro-ethical business models and incentive structures that balance the costs and rewards of investing in ethical AI across society, also in view of supporting (2)–(4).

Limitations

All research projects have their limitations and this one is no exception. The first is that the research question, ‘what tools and methods are available for ML developers to apply ethics at each stage of ML system design?’, is very broad. This lack of specificity meant that the available literature was extensive and growing all the time, making compromises essential from the perspective of what was practically feasible. It is certain that such compromises, for example which databases to search and the decision to restrict the tools reviewed to those that were not industry sector-specific, have resulted in us missing a large number of tools and methods that are publicly available. Building on this, it is, again, very likely that there are a number of proprietary applied-ethics tools and methods being developed by private companies for internal or consulting purposes that we will have missed, for example the ‘suite of customisable frameworks, tools and processes’ that make up consulting firm PWC’s “Responsible AI Toolkit” (PWC 2019).

The second limitation relates to the design of the typology itself. As La Fors et al. (2019) attest, the “neat theoretical distinction between different stages of technological innovation does not always exist in practice, especially not in the development of big data technologies.” This implies that, by categorising the tools by stage of development, we might be reducing their usability: developers in different contexts might follow a different pattern, or feel as though it is ‘too late’ to, for example, undertake stakeholder engagement if they have reached the ‘build’ phase of their project, whereas in reality it is never too late.

Finally, the last limitation has already been mentioned and concerns the lack of clarity regarding how the tools and methods that have been identified will improve the governability of algorithmic systems. Exactly how to govern ML remains an open question, although it appears that there is a growing acceptance among tech workers (in the UK at least) that government regulation will be necessary (Miller and Coldicott 2019). The typology can at least be seen as a mechanism for facilitating co-regulation. Governments are increasingly setting standards and system requirements for ethical ML, but delegating the means for meeting these to the developers themselves (Clarke 2019); the tools and methods of the typology can be seen as a means of providing evidence of compliance. In this way, the typology (and the tools and methods contained within it) helps developers take responsibility for embedding ethics in the parts of the development, deployment, and use of ML solutions that they control (Coeckelbergh 2012). The extent to which this makes a difference is yet to be determined.

Conclusion

The realisation that there is a need to embed ethical considerations into the design of computational, specifically algorithmic, artefacts is not new. Samuel (1960), Wiener (1961) and Turing were vocal about this in the 1940s and 1960s (Turilli 2008). However, as the complexity of algorithmic systems and our reliance on them increase (Cath et al. 2017), so too does the need for critical reflection (Floridi 2016a), AI governance (Cath 2018) and design solutions. It is possible to design things to be better (Floridi 2017), but this will require more coordinated and sophisticated approaches (Allen et al. 2000) to translating ethical principles into design protocols (Turilli 2007).

This call for increased coordination is necessary. The research has shown that there is an uneven distribution of effort across the ‘Applied AI Ethics’ typology. Furthermore, many of the tools included are relatively immature. This makes it difficult to assess the scope of their use (resulting in Arvan’s 2018 ‘moral-semantic trilemma’) and consequently hard to encourage their adoption by the practically-minded ML developers, especially when the competitive advantage of more ethically-aligned AI is not yet clear. Taking the time to complete any of the ‘exercises’ suggested by the methods reviewed, and investing in the development of new tools or methods that ‘complete the pipeline’, add additional work and costs to the research and development process. Such overheads may directly conflict with short-term, commercial incentives. Indeed, a full ethical approach to AI design, development, deployment, and use may represent a competitive disadvantage for any single ‘first mover’. The threat that this short-termism poses to the development of truly ethical ML is significant. Unless a longer-term and sector-wide perspective in terms of return on investment can be encouraged—so that mechanisms are developed to close the gap between what and how—the lack of guidance may (a) result in the costs of ethical mistakes outweighing the benefits of ethical successes; (b) undermine public acceptance of algorithmic systems, even to the point of a backlash (Cookson 2018); and (c) reduce adoption of algorithmic systems. Such a resultant lack of adoption could then turn into a loss of confidence from investors and research funders, and undermine AI research. Lack of incentives to develop AI ethically could turn into lack of interest in developing AI tout court. This would not be unprecedented. One only needs to recall the dramatic reduction in funding available for AI research following the 1973 publication of Artificial Intelligence: A General Survey (Lighthill 1973) and its criticism of the fact that AI research had not lived up to its over-hyped expectations.

If this were to happen today, the opportunity costs incurred by society would be significant (Cookson 2018). The need for ‘AI ethics’ has arisen from the fact that poorly designed AI systems can cause very significant harm. For example, predictive policing tools may lead to more people of colour being arrested, jailed or physically harmed by police (Selbst 2017). Likewise, the potential benefits of pro-ethically designed AI systems are considerable. This is especially true in the field of AI for Social Good, where various AI applications are making possible socially good outcomes that were once less easily achievable, unfeasible, or unaffordable (Cowls et al. 2019). So, there is an urgent need to progress research in this area.

Constructive patience needs to be exercised, by society and by the ethical AI community, because such progress on the question of ‘how’ to meet the ‘what’ will not be quick, and there will certainly be mistakes along the way. The ML research community will have to accept this and trust that everyone is trying to meet the same end-goal, but it must also accept that it is unacceptable to delay any full commitment when it is known how serious the consequences of doing nothing are. Only by accepting this can society be positive about the opportunities presented by AI to be seized, whilst remaining mindful of the potential costs to be avoided (Floridi et al. 2018).