Ensuring transparency in public affairs has been widely promoted, both by policy-makers and social scientists, as a method of increasing trust and perceived legitimacy among the public (see Hood 2006). This has led to a wide range of transparency innovations, from making records publicly available on the internet to broadcasting plenary meetings. Thus, it is not strange to assume, as those involved in the debate about AI in public decision-making have done, that rendering AI decision-making processes more transparent will increase the public’s trust in these processes and the decisions they lead up to.
According to common definitions, an organization or state of affairs has become transparent (or more transparent) when an actor (A) has made its workings and/or performances available (or more available) (B) to another actor (C). This can be done through various means (M). This definition is compatible with well-known definitions of transparency in the social sciences (see, e.g., Hood 2006; Grimmelikhuijsen 2012 for comparison) as well as those in the fields of AI and transparency (e.g., Turilli and Floridi 2009; Floridi et al. 2018). When a government (A) makes its source code available (B) to the public (C) so that they can see that nothing untoward is occurring in its use of an algorithm to better predict the risk of recidivism in parole hearings, the government has become more transparent in its processes.
Our definition of transparency arguably allows for a wide range of combinations of A, B, C, and M. Inspired by Mansbridge (2009), we argue that in relation to transparency in public decision-making, a distinction can be made between transparency that (1) informs C (e.g., the public) about final decisions or policies (transparency in decision); (2) informs C about the process resulting in those decisions (transparency in process); and (3) informs C about the reasons on which the decisions are based (transparency in rationale). These forms of transparency should be understood as degrees rather than separate elements, as it is difficult to provide the reasons for a decision without making explicit what the decision is, in the same way that it is difficult to present the process leading up to a decision without making explicit the reasons on which it is based. Thus, in most cases, transparency in process should be regarded as a higher degree of transparency than transparency in rationale, which, in turn, is a higher degree of transparency than transparency in decision.
Intuitively, however, not all forms of transparency lead to greater perceived legitimacy. Take the classic comic segment from the show Little Britain, where a claimant is waiting for a decision from an official: after the official has entered all the necessary information into her computer, she waits a moment, only to tell the claimant, “Computer says no.”Footnote 5 The scene is comical largely because of its absurdity, as it radically clashes with our expectations regarding the type of answers we should receive from officials. We expect to be treated in a way in which we can rationally accept an adverse decision, and for this we need to know the bottom-line reasoning behind the decision. In other words, we expect some insight into the decision, that is, a certain level of transparency. However, if the official at the computer screen, instead of merely saying “Computer says no” (i.e., making only the decision transparent), turned her screen to show the claimant a largely inscrutable algorithm, such as a decision forest, and claimed that she had now shown the claimant the whole process, the level of absurdity would only be accentuated. Hence, in this context, this form of transparency may be a non-starter when it comes to perceived legitimacy. This example shows that with AI decision-making and its perceived legitimacy, an important question to ask is not whether we should have transparency, but rather which kind of transparency should be applied.
Although not all forms of transparency may have positive effects on perceived legitimacy, some are in favor of full transparency (i.e., both transparency in rationale and transparency in process) (e.g., Hosanagar and Jair 2018; New and Castro 2018). Assuming that the discussion regarding the implementation of AI follows the logic of the public debate on transparency in general, these voices are likely to grow stronger as AI techniques develop and become more widely implemented in society. With regard to the Little Britain case presented above, proponents of full transparency could say that even though the claimant does not fully understand the algorithm on the screen, it should still be made available to them, because being respectful in this way (i.e., by hiding nothing) fosters perceived legitimacy.
We believe that if perceived legitimacy is the goal, we should opt for transparency in rationale rather than transparency in process. By transparency in rationale, we mean that the public receives information about the justification or explanation of a decision as well as about who can be held accountable for that decision. Thus, our use of transparency in rationale is similar to that of Floridi et al. (2018: 699f) and their use of “explicability,” which implies that the public receives an explanation or justification for the decision made, a description of the process leading up to it, and an account of who is responsible for it. However, if explicability means that the decision-making processes are actually made fully transparent, then we do not believe it is suitable for producing perceived legitimacy; if, instead, it means that decision-makers provide an explanation in the form of a narrative of how the decision was made, then it may well serve perceived legitimacy.
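To make the contrast concrete, the following is a minimal, hypothetical sketch of what transparency in rationale could amount to in practice: the public receives a short, structured justification and a named accountable party, while the model itself is not disclosed. All field names and the example content are our own illustrative assumptions, not a description of any actual system.

```python
# Hypothetical sketch: a "rationale record" disclosed to the claimant instead
# of the model internals. Fields and content are illustrative assumptions.

from dataclasses import dataclass
from typing import List


@dataclass
class DecisionRationale:
    decision: str             # what was decided (transparency in decision)
    justification: List[str]  # the bottom-line reasons offered to the claimant
    accountable_party: str    # who can be held to account for the decision
    appeal_procedure: str     # how the decision can be contested


example = DecisionRationale(
    decision="Application for benefit X denied",
    justification=[
        "Reported income exceeds the statutory threshold for benefit X.",
        "No documented exceptional circumstances were submitted.",
    ],
    accountable_party="Case officer at agency Y, under regulation Z",
    appeal_procedure="Written appeal within three weeks to the agency's review board",
)

print(example.decision)
for reason in example.justification:
    print("-", reason)
```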
Of course, the notion that AI assistants should be able to provide justifications or explanations for their decisions is not novel. In fact, it fits nicely with the core components of the rapidly evolving research field of explainable AI (XAI) (e.g., Gunning 2017, 2019; Thelisson et al. 2017).Footnote 6 In particular, Binns et al. (2018) have examined how different kinds of explanations affect the fairness judgments of the general public.Footnote 7 Likewise, corporations such as Google and Microsoft, as well as the Defense Advanced Research Projects Agency, are currently working toward XAI development.Footnote 8 Our discussion expands this line of reasoning by providing a more developed theoretical foundation for why explanations are critical and worthy of further exploration.
To appreciate what full transparency in AI and public decision-making would amount to, we propose dividing the entire decision-making process into three phases: Phase 1 is the goal-setting phase (goal-setting), Phase 2 is the coding phase (coding), and Phase 3 is the implementation phase (implementation) (see, e.g., de Laat: 529–533 for a comparison with AI in the market sphere and Boscoe 2019 for a similarly structured process in public decision-making). During Phase 1 (goal-setting), decision-makers decide on the goals of the AI, how these should be weighed against each other when they conflict, and the features and data available to draw inferences from. For example, if you want your AI to choose which buildings to prioritize when initiating a large renovation project, you may want it to include features capturing when each building was last renovated and which renovation would be the most cost-effective to begin with. Of course, these features might pull in different directions, meaning that you will have to assign them different weights to guide the AI when they conflict, as sketched below. These decisions are often highly political, as they require decision-makers to make explicit trade-offs between advantages and disadvantages at a high level of precision.
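The following is a minimal sketch of the kind of weighting decision made in Phase 1: two features of the renovation example pull in different directions, and decision-makers must assign explicit weights to resolve the conflict. The buildings, feature values, and weights are hypothetical assumptions for illustration only.

```python
# Hypothetical feature values per building, normalized to [0, 1]:
# "urgency" grows with time since the last renovation;
# "cost_effectiveness" is higher when a renovation yields more value per unit cost.
buildings = {
    "School A":  {"urgency": 0.9, "cost_effectiveness": 0.3},
    "Library B": {"urgency": 0.4, "cost_effectiveness": 0.8},
    "Clinic C":  {"urgency": 0.7, "cost_effectiveness": 0.6},
}

# The politically contested step: how much should each goal count?
weights = {"urgency": 0.6, "cost_effectiveness": 0.4}


def priority_score(features: dict) -> float:
    """Weighted sum of the (conflicting) goal features."""
    return sum(weights[name] * value for name, value in features.items())


# Rank buildings by the weighted score; a different weighting yields a different ranking.
ranking = sorted(buildings, key=lambda b: priority_score(buildings[b]), reverse=True)
for name in ranking:
    print(f"{name}: {priority_score(buildings[name]):.2f}")
```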
In Phase 2 (the coding phase), the AI is developed and refined to ensure it meets the necessary standards. This is often the point at which problems related to bugs and biases are introduced, as is well described in the literature (e.g., Sweeney 2013; Datta et al. 2015; O’Neil 2016; Boscoe 2019). In this phase, it is settled what the accuracy rates are and what they should be, how these and other performance metrics differ, and how far they should be allowed to differ, across different subpopulations (when decisions concern groups or individuals), what data to use when training the algorithm, and how to clean that data; a simple illustration is given below. With public decision-making, the main challenge is ensuring that the AIs are good enough with respect to these issues. Of course, it may be difficult or even impossible for decision-makers to know for sure whether the AIs they have authorized are up to standard without relying on programmers. This is not a problem restricted to AI decision-making, since decision-makers rely on expert opinions in virtually all policy areas. However, the problem may be accentuated in AI decision-making, as few political representatives are trained in code reading or programming. Furthermore, it may be difficult to establish goals, and to specify their respective importance, with sufficient precision for programmers’ needs.
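The following is a minimal sketch of one Phase 2 question: how performance metrics can differ across subpopulations. The outcome records are invented for illustration only; the point is simply that overall accuracy can hide group-level differences such as unequal false positive rates.

```python
# Hypothetical (group, true_label, predicted_label) records, where 1 = "high risk".
records = [
    ("group_1", 0, 0), ("group_1", 0, 1), ("group_1", 1, 1), ("group_1", 0, 0),
    ("group_2", 0, 1), ("group_2", 0, 1), ("group_2", 1, 1), ("group_2", 0, 0),
]


def group_metrics(group: str):
    """Accuracy and false positive rate for one subpopulation."""
    rows = [(y, y_hat) for g, y, y_hat in records if g == group]
    accuracy = sum(y == y_hat for y, y_hat in rows) / len(rows)
    negatives = [(y, y_hat) for y, y_hat in rows if y == 0]
    false_positive_rate = sum(y_hat == 1 for _, y_hat in negatives) / len(negatives)
    return accuracy, false_positive_rate


# The two groups end up with different accuracy and error profiles,
# which is exactly the kind of difference Phase 2 deliberations must address.
for group in ("group_1", "group_2"):
    acc, fpr = group_metrics(group)
    print(f"{group}: accuracy={acc:.2f}, false positive rate={fpr:.2f}")
```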
In Phase 3 (the implementation phase), the AI is applied in the public decision-making processes, and the results it produces are used in actual decision-making. This can be done by having the AI make the decision by itself or, more plausibly, by having an individual formally make the decision based on the results or recommendations of the AI. Naturally, this phase is often the one to which researchers refer when discussing AI and transparency. This phase will also, of course, feed back into Phases 1 and 2. For example, when AI assistants have been implemented in real-world settings, they are sometimes found to discriminate unintentionally and hence to need modification. Similarly, if left unsupervised, they might develop “bad habits” that depart from the intentions of the decision-makers. Furthermore, ideological shifts among the decision-makers might require changes in goals and prioritizations, and to further complicate things, such changes might be sparked by the decision-makers’ deeper understanding of what realizing the goals of the AI assistant would imply. Thus, there is a constant intermingling between Phases 1–3, with all phases deeply connected to each other.Footnote 9
To make the processes of Phase 1 (goal-setting) and Phase 2 (coding) fully transparent to the public (C), the decision-makers (A) need to make the deliberations about goals and prioritizations, as well as the deliberations of the programmers (B), available to the public, along with the training data, the testing data, etc. In Phase 3, when the AI is implemented in the decision-making process, the source code and records of how the AI is used in the decision-making process need to be made publicly available. Ensuring transparency of the reasons on which decisions are based means that decision-makers should provide justifications for their decisions. This can be done in each phase (see Fig. 1). Transparency regarding the reasons will presumably include an attempt to justify the overall functionality of the AI (e.g., its goals, their weights, its methods in different situations) (e.g., Boscoe 2019).
In the remainder of the paper, we will argue that decision-makers should in general opt for the more limited form of transparency (i.e., transparency regarding the reasons that the decision is based on, rather than transparency about the decision-making process).