AI, Opacity, and Personal Autonomy

Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings (Feller et al. 2016), medical diagnoses (Rajkomar et al. 2018; Esteva et al. 2019) and recruitment (Heilweil 2019, Van Esch et al. 2019). Academic articles (Floridi et al. 2018), policy texts (HLEG 2019), and popularizing books (O'Neill 2016, Eubanks 2018) alike warn that such algorithms tend to be _opaque_: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation (Lombrozo 2011, Hitchcock 2012), I formulate a moral concern for opaque algorithms that is yet to receive a systematic treatment in the literature: when such algorithms are used in life-changing decisions, they can obstruct us from effectively shaping our lives according to our goals and preferences, thus undermining our autonomy. I argue that this concern deserves closer attention as it furnishes the call for transparency in algorithmic decision-making with both new tools and new challenges.

traced by human beings" (HLEG, 2019, p. 18), and the European Commission's recent AI Act proposes that "a certain degree of transparency should be required for high-risk AI systems" (Council of the European Union, 2021b, p. 30). Finally, O'Neill (2016, p. 31) lists opacity, damage, and scale as the three essential features of algorithms that qualify as 'weapons of math destruction'. There appears to be broad agreement that transparency and opacity carry substantial moral weight.
This idea certainly has intuitive appeal. Receiving decisions without explanations can be frustrating and scary (cf. O'Neill, 2016, p. 29). Empirical research confirms that our trust in decisions, actions and outcomes increases when we are provided with a plausible explanation (e.g. Herlocker et al., 2000; Symeonidis et al., 2009; Holzinger et al., 2020), 6 and earlier research in psychology suggests that there is a distinct pleasure associated with grasping explanations (Gopnik, 1998). Transparency comes with certain practical advantages as well. Our ability to assess the reliability and fairness of algorithms improves when we grasp the explanations for their outcomes (Kim et al., 2016; Gkatzia et al., 2016; Biran and McKeown, 2017; Doshi-Velez et al., 2017), and improving an algorithm is probably easier when one knows how it works (cf. HLEG, 2019, p. 18). 7 By contrast, opacity often compounds the negative impact of inadequate outcomes. For example, it is harder to question the result of a process if one cannot point the finger at where it went wrong. Consequently, it is harder to negotiate the outcomes of opaque algorithms; their decisions are more likely to go unchallenged when mistaken and those responsible are less likely to be held to account (cf. O'Neill, 2016; Floridi et al., 2018; Walmsley, 2020).
Even so, the call for transparency is not without its critics. Zerilli et al. (2019) maintain that non-AI decision algorithms, such as humans or bureaucratic systems, can be equally complex and mysterious in their actual implementation. We rely on judges and committees to make decisions without having any deeper understanding of how their brains function. 8 London (2019) emphasizes that common medical interventions often involve mechanisms that are not fully understood.

6 Although there are definitely other factors in play as well. See for example Krügel et al. (2022) and Jauernig et al. (2022).
7 Though see Weller (2019, Sect. 3.1) for examples suggesting that transparency can lead us to overascribe reliability.
8 Lipton (2018) and Cappelen and Dever (2021) make similar points in passing.
outcomes. I cannot address his concerns here, but see Erasmus et al. (2020) and Erasmus and Brunet (2022) for a response. See also Frigg and Reiss (2009) for arguments in favour of non-exceptionalism about computer-run algorithms and simulations.
11 also focus on the importance of counterfactuals for transparency, but eschew overtly causal interpretations of these counterfactuals.
12 See Strevens (2013) for a more detailed account of grasping causal explanations. For our purposes, it is important that understanding how X and Y correlate does not require full understanding of the mechanism that connects them.

Second, it is worth emphasizing that such accounts of causal explanation are extremely non-committal about the physical or technical implementation of such counterfactual patterns.
For example, as long as the right pattern of counterfactual dependence holds between the quality of reference letters and the probability of getting the job, it does not matter how the importance of good reference letters is encoded in the system. In fact, it might very well be that there is no explicit mention of reference letters, or of what makes them convincing, in the code of the algorithm. For many current AI algorithms, it is exceedingly likely that such information is encoded implicitly rather than explicitly (cf. Burrell, 2016, pp. 8-10). According to broadly counterfactual accounts of causation, being merely implicitly encoded is no obstacle to being causally effective. As we shall see in Section 4, this feature will make such accounts quite suitable for providing explanations of AI behaviour at the right level.
A third point worth elaborating on is that opacity and transparency are matters of degree.
One might understand how some outcomes of an AI system are produced, but not how others are. Similarly, one might know some of the causal factors contributing to outcome X without knowing all of them. Crucially, not all causal explanations will help us understand the outcome, nor will all causal explanations be of interest to us. For example, an overly inclusive causal explanation might confuse us, and thus provide us with no better grasp of how the outcome was produced. At the other end of the spectrum, the explanation 'your submitting an application caused you to get a rejection letter' will be of no interest because it provides too little information. As we shall see in Section 4, getting clear on the relation between transparency and autonomy can help us get clear on the degree of transparency required in a given situation, as well as on how to compare the relevance of different causal explanations. Finally, causal explanations need not be restricted to outcomes that have already occurred; they can also concern outcomes that might occur in the future. We are often interested in exactly such hypothetical causal explanations.
A future applicant is well-advised to ask 'if I were to apply, which factors would play a causal role in the outcome of my application?'. As will become apparent in the examples to follow, the right degree of transparency will often require that we grasp such hypothetical causal explanations. Our account of transparency focuses on explainability, but this does not restrict us to explainability after the fact.

Causal explanation and autonomy
Starting from our account, the demand for transparency translates into a demand for causal explanations. We can now ask what the value of causal explanation is: why do we or should we want causal explanations of AI algorithm outcomes? Recent work on causal explanation points toward an intuitive answer. Philosophical and empirical research has converged on the thesis that we are particularly interested in causal explanations because they provide us with reliable means to affect and predict our surroundings (cf. Lombrozo, 2011; Hitchcock, 2012). For example, knowing that my subpar references caused me not to get the job allows me to predict that similar jobs will be unavailable to me unless I fix my references. It also provides me with an effective strategy to improve my chances of getting a similar job: improving my references.
In effect, opaque decision algorithms hide effective strategies for affecting and predicting their outcomes from the affected parties. Conversely, transparent decision algorithms can enable us to take action and affect future outcomes.
In the XAI literature, this action-enabling potential of transparent decision algorithms often goes unmentioned. For example, a recent meta-study of over 100 texts on XAI by Langer et al. (2021) lists twenty-eight desiderata found in the literature but makes no mention of how opacity can undermine the ability of affected parties to effectively influence the outcomes according to their goals. 13 Similar remarks apply to a recent review of thirty-four AI ethics documents published by civil society, the private sector, governments, intergovernmental organizations, and multi-stakeholder organizations (Fjeld et al., 2020). 14

This autonomy worry deserves closer attention for three reasons. First, it hones in on a feature that is tightly connected to the opacity of decision algorithms. Second, the autonomy worry comes with significant backing from moral philosophy. Third, undermining autonomy undermines responsibility. Let us discuss these points in turn.
First of all, the autonomy worry hones in on a feature that is tightly related to the opacity of decision algorithms. In principle, an opaque decision algorithm could be sufficiently reliable, fair, and trusted. Trust can be due to the recommendation of a trusted authority, and reliability and debiasing are eventually just a question of tweaking the algorithm. Certainly, tweaking the algorithm will be harder when we don't have the slightest idea how it works, and in any real-life scenario the programmers would require a minimum of transparency to get going, but there is nothing that in principle stands in the way of an algorithm fulfilling its function perfectly without the relevant stakeholders having knowledge of what produces its outcomes. The outcomes could even be treated as negotiable by allowing users to double-check them using another decision algorithm, be it a human or an artificial one. 16 Such double-checking can also be used as a basis for holding those who deployed or developed the algorithm accountable. By contrast, our lack of knowledge of how to affect the outcomes is integral to an AI system being opaque to us. According to the causal account proposed above, opacity and a lack of knowledge about how to manipulate the outcomes are definitionally inseparable.

action-enabling potential 'dominates' the XAI literature (p. 1120). They refer to , who indeed dedicate a section to this topic, but further evidence of such dominance is scant.
15 Though one would hope that we drive carefully for other reasons as well.
16 Wachter et al. (2017, p. 98) make a similar suggestion to address cases where the demand for transparency conflicts with trade secrets.
The upshot is that even if an opaque algorithm manages to tick all the other boxes, it can still threaten our autonomy. Suppose, for example, that all applications for government jobs go through the GOV-1 decision algorithm. For the purpose of our example, it matters little how GOV-1 was trained, but we can suppose that GOV-1 is the most reliable system to select government job candidates. All GOV-1 requires to decide who is the right candidate for the job is an accurately filled out questionnaire for each applicant. Careful analysis has demonstrated beyond reasonable doubt that no team of humans or competing artificial algorithms (or any combination thereof) will select a more viable candidate for any government position than GOV-1. We can further assume that applicants have a right of redress (they can demand that their applications be considered by another, non-AI system), that the government is held accountable for any mistakes by GOV-1, and that trust in the government is strong enough to engender trust in GOV-1 as well. Unfortunately, it is unclear how GOV-1 weighs the information provided in the questionnaire. As a prospective applicant, I have no idea which competences I should acquire in order to become desirable, or even eligible, for a government job. 17 The opacity of GOV-1 thus hides salient ways of shaping our lives. Perhaps my goal of becoming a government employee requires me to focus more on getting into an international exchange program than on getting high grades.

A second reason for focusing on opacity's threat to autonomy is that the moral import of autonomy is well-discussed in the philosophical literature. Autonomy takes center stage in several ethical theories. In areas varying from foundational deontology (e.g. Kant, 1993) and utilitarianism (e.g. Mill, 1999) to more applied work on education (e.g. Haji and Cuypers, 2008; Thorburn, 2014), bioethics (e.g. MacKay and Robinson, 2016), political theory (e.g. Raz, 1986), and free will (e.g. Ismael, 2016), authors agree that autonomy carries moral weight. The importance of autonomy is picked up in other discussions within AI ethics (e.g. Kim et al., 2021) and in legal theory as well (e.g. Marshall, 2008; McLean, 2009). The call for transparency can draw support from all of these fields.
This appreciation of autonomy's moral importance has given rise to many accounts of what it precisely consists in. It is a further question whether our use of 'autonomy' here overlaps or coincides with how the term is standardly used in the literature. A detailed comparison will unfortunately have to wait for another occasion, but here are two remarks on the subject. First, autonomy is taken to be some brand of self-determination, and there is growing attention to the fact that self-determination requires long-term control over one's life and plans (e.g. MacIntyre, 1983; Raz, 1986; Atkins, 2000; Bratman, 2000, 2018; Ismael, 2016). Second, even authors whose accounts would not appear to mesh well with our usage here acknowledge that cases like GOV-1 need to be taken into account. For example, Christman (1991) defends a broadly internalist view of autonomy, according to which our desires should fit our rational beliefs, and the rationality of a belief does not require its being true. So in principle, prospective applicants can rationally believe that studying political science rather than physics will help their chances, even if it does not. Even so, Christman accepts that rationality requires a somewhat reliable connection to reality: "One is autonomous if one comes to have one's desires and beliefs in a manner which one accepts. If one desires a state of affairs by virtue of a belief which is not only false but is the result of distorted information given to one by some conniving manipulator, one is not autonomous just in case one views such conditions of belief formation as unacceptable (subject to the other conditions I discuss)" (Christman, 1991, p. 16).
Generally speaking, it would be surprising if no lack of information could undermine our autonomy. Based on these two observations, one can reasonably expect that autonomy as the ability to shape one's life will correspond sufficiently with common usage in ethics.
A third reason to focus on the threat to autonomy is that autonomy strongly correlates with responsibility. Generally speaking, undermining an agent's autonomy relative to an outcome undermines their responsibility relative to that outcome as well. This is because responsibility for an outcome requires a reliable causal connection between the agent's intention to reach or avoid the outcome and the outcome in fact being reached or avoided (cf. Björnsson and Persson, 2012, 2013; Grinfeld et al., 2020; Usher, 2020). If candidate A intends to get a government job, but has no idea how to polish their competences in order to qualify, the reliability of the correlation between their intending so and their achieving that goal should be expected to decrease. Generally speaking, not knowing how to achieve a goal makes it less likely that you achieve it. When opaque algorithms undermine our autonomy, they also undermine our responsibility for the outcome. 18

In conclusion, it appears that opacity can undermine autonomy, and autonomy has moral value. Even if we set aside the above-mentioned connections with trust, fairness, reliability, accountability, negotiability, and a primitive preference for explanation, demands for transparency can still be grounded in a requirement to respect personal autonomy.

Transparency in practice
Grounding the demand for transparency in autonomy requirements sheds new light on some familiar issues in XAI. I elaborate on three of these here: (i) how much transparency is desirable and whether this degree is technically attainable, (ii) how to entrench the demand for transparency legally, and (iii) whether transparency conflicts with other desiderata for decision procedures. Let us discuss these in turn.

Delivering the right degree of transparency
The connection between transparency and autonomy provides guidance in deciding how transparent decision algorithms ought to be. We want to know how differences in the input correlate with differences in the output. It is widely accepted in both the philosophical and the computer science literature that knowledge of such higher-level regularities does not presuppose knowledge of the finer details of the system (e.g. Dennett, 1971, 1991; Newell, 1982; Campbell, 2008). Establishing which level of explanation is appropriate for the relevant input-output correlation will no doubt be an arduous task that requires different strategies for different cases, but there is a general point to be made here as well. In order to increase or maintain our autonomy, we want to know which changes in the input robustly correlate with certain changes in the output. Some patterns of correlation will be too fragile to be of genuine interest. Perhaps having a Twitter handle without numerals increases your chances by 0.02 percent if you are a Caucasian woman with a law degree from a foreign university, but has no effect in any other circumstances. Other patterns will be crucial knowledge for future applicants. Perhaps the only way to get a government job without a university degree is to score above 150 on an IQ test and demonstrate a staunch unwillingness to believe conspiracy theories. That is to say, in most circumstances, a university degree is a condition sine qua non. Such robust correlations are worth knowing.
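To make the contrast between robust and fragile patterns concrete, here is a minimal sketch of how one might probe for difference-makers with nothing more than query access to the decision system. The opaque_score function below is a toy stand-in for the deployed algorithm, and the feature names merely echo the hypothetical examples above; in a real audit the probe would query the actual system.

```python
import random

# Hypothetical stand-in for the deployed decision system; in a real audit this
# would be a query to the system itself (e.g. GOV-1), not local code.
def opaque_score(profile: dict) -> float:
    score = 0.0
    score += 0.45 if profile["university_degree"] else 0.0
    score += 0.30 if profile["iq_above_150"] else 0.0
    score += 0.10 if profile["debate_club"] else 0.0
    # A deliberately fragile pattern: a tiny effect in one narrow context only.
    if profile["foreign_law_degree"] and not profile["twitter_numerals"]:
        score += 0.02
    return score

FEATURES = ["university_degree", "iq_above_150", "debate_club",
            "foreign_law_degree", "twitter_numerals"]

def probe_feature(feature: str, n_contexts: int = 2000) -> tuple[float, float]:
    """Flip one feature while holding the rest of the profile fixed, across
    many randomly drawn background profiles, and record the output change."""
    effects = []
    for _ in range(n_contexts):
        background = {f: random.random() < 0.5 for f in FEATURES}
        effects.append(opaque_score({**background, feature: True})
                       - opaque_score({**background, feature: False}))
    mean_effect = sum(effects) / len(effects)
    share_active = sum(1 for e in effects if abs(e) > 1e-9) / len(effects)
    return mean_effect, share_active

if __name__ == "__main__":
    random.seed(0)
    for feature in FEATURES:
        mean_effect, share_active = probe_feature(feature)
        print(f"{feature:20s} mean effect {mean_effect:+.3f}, "
              f"makes a difference in {share_active:.0%} of contexts")
```

In this toy run, a robust difference-maker such as a university degree shows a large effect in virtually every context, whereas the Twitter-handle pattern surfaces only as a negligible effect in a narrow slice of profiles. It is knowledge of the former kind of pattern that matters to a prospective applicant.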
It is unlikely that knowledge of such robust correlations will require full physical details or design details, but it is also unlikely that the required explanation will always be found at the intentional level. This is not only because AI systems might typically lack such a level altogether (cf. supra), but also because the most robust patterns might not be found at this level. To see this, consider the human case of implicit bias. While such biases are not typically implemented at the intentional level, they can still make for robust correlations between certain input features and outputs. For example, committees with racist implicit biases might systematically review candidates with foreign-sounding names unfavourably, without those biases being manifested at the intentional level (cf. Bertrand and Mullainathan, 2004). The upshot is that there is no fixed level that provides the right level of transparency. Instead, we should focus on finding those correlations that are robust across the scenarios in which the algorithm is to be applied.
The good news is that such input-output difference-making transparency is easier to achieve than full physical, design or algorithmic transparency. Full physical transparency would require a grasp of the workings of the system in all its physical details, down to the electrons making up the hardware. Achieving such transparency is of course very difficult, but also not very useful. Full algorithmic transparency requires a grasp of the mathematical details of the algorithms encoded in the AI system, and design transparency requires an engineering perspective on how such a system can be developed. Attaining either of these is taken to be extraordinarily difficult as well, and demanding it might even be in tension with copyright laws. By contrast, attaining input-output transparency requires less work. One strategy is to build a 'glass box' around the AI system that merely tracks its inputs and outputs. All the glass box should do is report the correlations between differences in inputs and differences in output. As the glass box is 'built around' the AI system, this method would not require us to 'open up' the algorithm that is being tested. Standard causal extraction algorithms, such as those developed by Pearl (2000) and Spirtes et al. (2000), can be used to acquire causal information on the basis of the correlational data gathered via glass-boxing. As Woodward (2003) bases his account of causal explanation on the structural equation models provided by Pearl (2000) and Spirtes et al. (2000), the causal account of transparency and opacity we proposed in Section 3 naturally fits this technical approach. 19

Undivided optimism about such 'forensic' approaches would be premature. First of all, these approaches all omit details about the actual process leading up to the outcome when providing potential explanations. Such neglect of detail is necessary to provide explanations that are understandable for human agents, but, so the worry goes, there is a real risk that the omitted details are in fact crucial to the true causal story of how the decision was in fact produced (cf. Rudin, 2019). In the worst case, this lack of detail may make for mistaken explanations altogether, taking mere correlation for causation. While this is a real risk, it is worth noting that it is by no means unique to complex algorithms. It is well recognized that the explanation of any event will require us to omit enormous amounts of detail that, strictly speaking, contributed to its coming about (e.g. Loewer, 2007; Ney, 2009).

There are, then, ongoing attempts to provide explanations of black-box outcomes without 'opening' the black boxes. In order for these 'forensic' strategies to safeguard autonomy, they need to reliably provide accurate explanations that fit the needs and practical perspective of the affected parties.
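As an illustration of what such a 'forensic', query-only strategy might look like in the simplest case, the sketch below searches for small, actionable changes to an applicant's profile that would flip a black-box decision, broadly in the spirit of counterfactual explanations (cf. Wachter et al., 2017). The black_box_decision function and the catalogue of actionable changes are hypothetical stand-ins rather than anyone's actual system or method; a real audit would query the deployed algorithm and would still face the accuracy and relevance worries raised above.

```python
from itertools import combinations, product

# Hypothetical stand-in for the audited system; in practice this would be a
# call to the deployed decision algorithm, which we never 'open up'.
def black_box_decision(profile: dict) -> bool:
    score = (0.45 * profile["university_degree"]
             + 0.30 * min(profile["experience_years"], 10) / 10
             + 0.25 * profile["reference_quality"])
    return score >= 0.6

# Changes the affected party could realistically make (assumed by the auditor,
# not derived from the system itself).
ACTIONABLE_VALUES = {
    "university_degree": [1],
    "experience_years": [5, 10],
    "reference_quality": [0.8, 1.0],
}

def counterfactual_explanations(profile: dict, max_changes: int = 2) -> list[dict]:
    """Report small edits to the input that flip a negative decision into a
    positive one, using only input-output queries to the black box."""
    if black_box_decision(profile):
        return []  # the decision is already positive; nothing to explain away
    flips = []
    for k in range(1, max_changes + 1):
        for subset in combinations(ACTIONABLE_VALUES, k):
            for values in product(*(ACTIONABLE_VALUES[f] for f in subset)):
                edit = dict(zip(subset, values))
                if black_box_decision({**profile, **edit}):
                    flips.append(edit)
    return flips

if __name__ == "__main__":
    applicant = {"university_degree": 0, "experience_years": 3,
                 "reference_quality": 0.5}
    for edit in counterfactual_explanations(applicant):
        print("The decision would have been positive if:", edit)
```

Even this toy version makes the autonomy-relevant point visible: the affected party learns which changes would in fact have made a difference, without anyone opening up the model. Whether such reports are faithful to the real system, and pitched at the right level for the affected party, is exactly the open question.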

Legally entrenching transparency demands
Transparency demands play a central role in recent attempts to regulate the use of AI. However, the current formulations of these demands will not suffice to respect the autonomy of affected parties. There are two salient challenges with legally entrenching a right to explanation that will sufficiently respect the autonomy of the affected parties. First of all, if the AI algorithms in use are protected by proprietary laws, the developers and deployers can maintain that any demand to disclose the workings of the algorithms conflicts with their right to intellectual property. True, the 'forensic' tools discussed above could provide the relevant causal knowledge without detailed knowledge of the algorithmic implementations. But making the case that the relevant causal knowledge can be acquired without the detailed knowledge that is protected by proprietary laws is likely to be an uphill battle. If for each demand for an explanation it has to be shown that the required explanation would not conflict with trade secrets, the affected parties are likely to find themselves de facto disenfranchised with regard to their right to explanation. Wachter et al. (2017, p. 98) note this challenge and suggest that external auditing mechanisms can be employed in such cases, but while such mechanisms could help to enforce the right of redress and to justify accusations of bias, they would not provide explanations, and thus would not provide affected parties with action-enabling information.
Second, our previous discussion of finding the right level of explanation further complicates the picture. Any outcome will have many explanations that are of no practical use to the affected parties. For example, one causal explanation of why I did not get a government job might be that my application activated node 64879508, which activated node 0540324875 but inhibited node 45009783245, and, as it happens, such an activation pattern produces a negative outcome. Without any background knowledge about the role of these nodes within the system, this explanation is of no help to me, even if it is factual. The upshot is that a right to an explanation does not by itself guarantee an explanation that is of use to the affected party. Even if the legal case for a right to an explanation can be made to go through, this further dimension needs to be kept in mind.
These difficulties with entrenching an adequate right to explanation are apparent in recent attempts to regulate AI use. As Wachter et al. (2017) point out, the GDPR fails to provide affected parties with a right to explanation of individual decisions, but instead delivers a rather vague right to be informed of (i) whether an AI is used, and (ii) what 'logic' underlies the algorithm. Moreover, even this watered-down right appears to apply only when the decision is based solely on automated algorithms (Art. 22(1)), which means that even a minimal involvement of a human agent in the process can relieve the users of any obligation to provide information about the algorithm (cf. Wachter et al., 2017, p. 78). I refer the reader to the original text for a detailed discussion of their evidence. Suffice it to say that the kinds of explanations that can sustain our ability to effectively pursue our life plans would not be protected by the GDPR alone.
More recently, the European Commission's AI Act proposes to impose transparency requirements on precisely those algorithms whose outcomes have significant impacts on the affected parties, such as decisions on legal status, access to education or contractual relations (Annex III). 21 However, as argued by Fink (2021), the proposal, if accepted, would only demand transparency towards the users of AI, and not towards the affected parties. 22 The only obligation towards the affected parties would be to inform them of the fact that an AI decision algorithm has been used to reach a conclusion. The AI Act thus does little to legally entrench our autonomy in the face of decision algorithms that are entirely opaque to us.

21 Note, though, that the demands on AI decisions in 'non-high-risk' contexts are limited to mere guidelines that are adopted on a voluntary basis. This means that affected parties must be able to convincingly argue that the decisions affecting them count as 'high risk'. This risks putting an undue burden of proof on victims who do not have the means to litigate in grey-area cases.
There is some hope that a right to an adequate explanation of automated decisions can be derived from more general rights. Fink (2021) explores one such route, but argues that the charter in question provides only a limited right to explanation.
In light of the precarious position of the right to explanation in current legislation, the link between transparency and autonomy can perhaps be of help. There are at least some promising signs here, as the notion of personal autonomy plays a central role in a variety of legal frameworks. For example, the European Court of Human Rights has relied on the notion of personal autonomy in several rulings, 24 and some legal scholars maintain that the very right to personal autonomy is enshrined in the European Convention on Human Rights (ECHR) (e.g. Marshall, 2008). However, there are significant obstacles on this route towards a right to explanation as well.
Neither the US Constitution nor the ECHR explicitly mentions autonomy. Instead, arguments for a right to autonomy based on these texts tend to go via related notions, such as dignity (ECHR, Art. 1) and privacy (ECHR, Art. 2). Consequently, the jurisprudence of the European Court of Human Rights is equivocal on how much weight is to be attached to autonomy. Some judges rely on autonomy as a guiding notion to interpret these foundational legal texts, whereas others take the right to autonomy to follow from the texts themselves. 25 It certainly would not hurt to have more concrete formulations of the importance of autonomy, and a legal right to explanation that is adapted to the current rise of automated decision-making, available in our legal frameworks.

25 See Koffeman (2010, pp. 8-9) for discussion.
While the call for transparency has given the notion a central role in legal documents focusing on AI, these documents do not guarantee transparency towards the parties whose lives are affected by the outcomes of automated decision procedures. The link between transparency and autonomy provides a promising extra tool for legally entrenching a right to explanation. Even so, there is still plenty of legislative work to be done before we can be confident that such a right is legally protected.

Downsides of transparency
Even if transparency has a distinct moral value in serving autonomy, it should not be pursued at all costs. There may be any number of reasons to trade off transparency for other goods. I focus here on three potential drawbacks of providing the degree of transparency that is required to bolster autonomy.
First, providing information about the robust correlations that allow us to affect the outcomes of decision algorithms might increase the advantage of those who have easier access to the difference-making features. For example, if attending expensive private universities increases one's chances of getting a government job, it might be overall justifiable not to divulge this fact. As it becomes easier to control the outcomes in fair ways, it will become easier to control the system in unfair ways as well. Implementing transparency requirements will therefore require weighing such fairness concerns against the gains in autonomy.
Second, it is a well-known truism that otherwise reliable indicators can become unreliable once it is publicly known that they are used as indicators (Campbell, 1979; Goodhart, 1984).
For example, if word gets out that GOV-1 treats participation in debate club as a big plus, this might cause students to attend debate club merely to improve their chances at a government job, rather than to develop the relevant skills that debate club is supposed to foster, like absorbing and structuring information and presenting clear arguments.

Third, building on O'Neill (2002), Nguyen (forthcoming) has recently argued that transparency can improperly limit the kinds of reasons that feature in decision-making. Forcing experts to make their reasons for certain judgments accessible and understandable to non-experts risks forcing them to limit their reasons to the kind of reasons that non-experts are sensitive to. This would in effect make it impossible to invoke reasons that require advanced expertise to appreciate. While Nguyen does not focus on automated decision-making, his arguments appear to transfer quite easily to the automated case. In fact, one of the oft-cited reasons for using automated decision algorithms is that they appear to discover patterns that are not readily appreciable by human observers. If we make such algorithms transparent, we might come to see that some unexpected or simply intractable reasons feature in the production of their decisions. In the long run, this might incite us to use less reliable mechanisms that only employ reasons and inferences that are tractable to us.
So even though transparency can help bolster autonomy, this does not mean that it is to be pursued at all costs. Relatedly, it is worth emphasizing that the argument presented here does not establish that transparency or autonomy are intrinsically valuable. The argument I have provided in favour of transparency is clearly restricted to its instrumental value: I have argued that transparency carries moral weight as a means of supporting user autonomy. The argument leaves it open whether (i) transparency is intrinsically valuable, (ii) autonomy is intrinsically valuable, or (iii) transparency has instrumental value beyond its contribution to autonomy. Colaner (2021) has recently argued in favour of (i), and many of the works referenced throughout this text provide evidence that (ii) and (iii) are true as well. Even so, our central argument goes through even if (i)-(iii) turn out to be false: opaque decision algorithms can undermine our autonomy by hiding salient pathways of affecting their outcomes. If the broad consensus that autonomy carries moral weight is correct, this means transparency is worth demanding.

Conclusion
There are several reasons to demand that impactful decision algorithms be transparent. This is true regardless of whether they are implemented by humans or by AI systems. Previous research indicates that transparency is conducive to negotiability and accountability, that it helps fine-tune reliability and avoid bias, and that we are more likely to trust the results of algorithms if we understand what causes their outcomes. Building on recent work on the value of explanation, I have argued that a lack of transparency can also undermine the autonomy of the affected parties. In both the human and the AI case, we are interested in knowing the robust patterns of correlation that allow us to reliably affect and predict their outcomes. Such knowledge can play an integral part in planning and shaping our lives as rational, self-determining agents.
This perspective on transparency furnishes XAI debates with both new tools and new challenges. On the one hand, calls for transparency can draw on established work in moral philosophy and on legal texts that emphasize the importance of personal autonomy. Moreover, focusing on the autonomy of affected parties can guide us in deciding what kinds of explanations we should demand. On the other hand, resolving previous concerns about opacity will not automatically address the threat to the autonomy of the affected parties, and the kinds of explanations required to respect our autonomy can be hard to come by. Providing them might not require us to open the black box, but it does require us to take into account the perspectives of rational planning agents with life goals and dreams.