Abstract
When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understanding important moral problems in a number of recent cases involving automated, or algorithmic, decision systems. We apply our conception of agency laundering to a series of examples, including Facebook’s automated advertising suggestions, Uber’s driver interfaces, algorithmic evaluation of K-12 teachers, and risk assessment in criminal sentencing. We distinguish agency laundering from several other critiques of information technology, including the so-called “responsibility gap,” “bias laundering,” and masking.
Notes
Note that a number of commentators believe the story draws too close a connection between predictive analytics and pregnancy-related advertising. There are reasons to send such advertising to people who are not pregnant, and the advertising may have been based on criteria unrelated to pregnancy, among other possibilities (Harford 2014).
Antony Duff calls this “prospective” responsibility (Duff 1998). Here we should note that we are only discussing morally justifiable roles, where the holders of role responsibility are themselves moral agents. Hence, being assigned a role within a criminal organization, or being assigned a role when one lacks the capacity to act morally, cannot confer role responsibility in the required sense.
For the sake of simplicity, we will refer to ‘actions’ in discussing responsibility. However, our account extends to omissions and dispositions. Note, too, that causal responsibility is complicated in over-determination cases. But those cases don’t affect our analysis here.
This, of course, may not absolve the captain completely. See Fischer and Ravizza 2000, 49–51 for an explanation of “tracing” responsibility to prior actions.
There remain some controversial issues, including for example Frankfurt-style cases in which one may be responsible or not regardless of whether she does or does not know how her actions will be causally effective. But the issues in those cases turn on the link between causal responsibility and the ability to do otherwise. That does not affect our arguments.
Within this group of views, there is substantial debate about whether person X is responsible for Y in virtue of Y’s being attributable to X, of X’s being answerable for Y, or of X’s being accountable for Y. Scanlon’s view focuses on attributability (Scanlon 2008). Shoemaker distinguishes between attributability, answerability, and accountability (Shoemaker 2011). Smith (like Shoemaker) distinguishes between a thing’s being attributable to a person and that person’s being responsible for it; however, she views accountability as a species of answerability. What is important for our purposes is that each view in this constellation recognizes that responsible agents are those for whom it is appropriate, or for whom it ought to be the case, that they provide an account of their intentions, interests, and reasons for an action.
Fischer and Ravizza provide an accountability view that bridges Strawson’s attention to the social function of holding others responsible by way of reactive attitudes and accountability views’ attention to reasons. Specifically, they maintain that an agent is responsible if she is an apt target of reactive attitudes. More important here, though, is that being morally responsible for actions requires that agents exercise “guidance control.” That requires that agents be at least weakly reasons-responsive, which is to say that where the agent has access to strong reasons in favor of or against an action, she will act in accordance with those reasons. It also requires that the source of actions be the agent, which is to say that the reasons-responsiveness is internal to the agent (Fischer and Ravizza 2000, 31–41).
18 U.S. Code § 1956 - Laundering of monetary instruments.
Other aspects of money laundering are about concealing identities of agents, for example by routing illicit money through shell corporations and bank accounts in permissive jurisdictions.
Two other accounts addressing causal and moral responsibility in the computing context are worth noting here. First, Daniel Dennett (1997) posits that machines may be credited with (i.e., responsible for) some tasks (e.g., Deep Blue beating Kasparov) but cannot be responsible for others (e.g., murdering Kasparov). We would argue that this difference tracks the causal/moral responsibility distinction, though that is not Dennett’s claim.
Helen Nissenbaum (1994) argues that the increased use of computing systems poses a threat to accountability, based on four key barriers. These include the problem of many hands, the existence of bugs that cause computing failures, the ability to use computers as scapegoats, and the separation of system ownership from legal liability for problems. In doing so she notes that distributed causal responsibility can function to obscure responsibility and blameworthiness (p. 74). Our view of laundering can apply to each of the barriers she discusses, but does not depend on any of them. Consider the example of “blaming the computer,” or pointing to the computer as the sole source of causal responsibility. That, considered by itself, would not seem to be a case of laundering, but instead just a straightforward denial of responsibility. If, instead, it included a process by which a party ensures the computer has causal responsibility, ascribes morally relevant qualities to the computer’s actions, obscures the party’s causal responsibility, and in so doing fails to adequately account for events for which the party is morally responsible, it could be laundering. In other words, merely blaming something else does not rise to laundering. Laundering is, we take it, more insidious in that it forestalls others’ abilities to demand an account of actions within domains of their legitimate concern.
Thanks to an anonymous reviewer for pointing us to these articles.
Note that agency laundering does not require that one infringe one’s substantive role responsibilities (except to the extent that one’s role responsibility includes being transparent about one’s causal responsibility). In Chair, for example, it’s plausible that the chair was fulfilling her role responsibilities with respect to the department’s curriculum. We return to this point in section 4.
Note that Facebook has recently taken measures aimed at reducing discriminatory advertising (Levin 2019).
Note that this is a possible moral claim that one might make about Facebook and other media organizations. This is a distinct question from what kinds of legal rights and obligations information intermediaries have in light of (inter alia) the United States Communications Decency Act (see 47 U.S.C. section 230), the European Union’s eCommerce Directive, articles 12–16 (Council Directive 2000/31, 2000 O.J. (L 178) 1 (EC)), and the EU’s General Data Protection Regulation (GDPR) (Commission Regulation 2016/679, 2016 O.J. (L 119) 1 (EU)). See, e.g., Keller (2018)
See, e.g., Facebook (2018).
One can frame this as a question of capacity responsibility. That is, if Facebook did not have epistemic access to the relevant information about the possibility of misuse, it would not have the necessary capacity to be morally responsible. Note here that epistemic access is not limited to actual knowledge, but includes the ability to garner knowledge under reasonable conditions. Hence, Facebook’s moral responsibility will turn on the degree to which it could reasonably have known about the potential for misuse, and that would define its degree of agency laundering.
One further complicating issue is mitigation. Facebook or another social media company might use its suggestion system to better understand relations among (for example) racists or purveyors of disinformation to promote anti-racist or epistemically sound information. The degree to which that would mitigate or deepen laundering is a question beyond what we can cover here. Thanks to an anonymous reviewer for making this point.
One tool, named “Greyball,” was developed to surreptitiously ban users whom Uber believed were violating the company’s terms of service. Uber eventually used Greyball to surreptitiously ban people Uber believed to be government regulators investigating whether Uber was operating illegally. See Isaac (2017).
Note that Loomis demonstrates another way one can launder even while fulfilling one’s substantive role responsibilities. Imagine that the trial court had deliberated about its decision, but did not explain its reasoning for the sentence. Suppose instead it merely wrote that it agreed with the COMPAS report’s assessment with no further comment. That would obscure the scope of the court’s causal responsibility and would fail to provide an adequate account of the decision. But in that case, the court would not have violated some other substantive role responsibility.
Thanks to a reviewer for pointing out these possibilities and noting their similarities to “Chair” and “Democratic Chair.”
Thanks to Suresh Venkatasubramanian for pointing us to this talk.
We appreciate an anonymous reviewer raising the question of whether there are cases where one maintains agency but launders accountability instead. Our sense is that any such case would involve minimizing one’s agency. In other words, accountability is the thing that is avoided, and one avoids it by laundering the degree to which one is (morally) responsible, which is in turn a function of a person’s agency in a process. Likewise, money laundering is a way to forestall accountability, and it is the laundering of some other thing (viz., money) that helps avoid the accountability.
Thanks to an anonymous reviewer for pressing us on this point.
Note that there may be other, non-laundering moral wrongs involved in goal prompts and queuing, as discussed in section 5.
Regulation (EU) 2016/679, of the European Parliament and the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119).
References
American Statistical Association (2014) ASA statement on using value-added models for educational assessment. https://www.amstat.org/asa/files/pdfs/POLASAVAM-Statement.pdf. Accessed 14 July 2018
Angwin J, Larson J (2016) Machine bias. ProPublica, May 23. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Accessed 1 March 2019
Angwin J, Scheiber N, Tobin A (2017) Dozens of companies are using Facebook to exclude older workers from job ads. ProPublica, December 20. https://www.propublica.org/article/facebook-ads-age-discrimination-targeting. Accessed 2 March 2019
Angwin J, Varner M, Tobin A (2017) Facebook enabled advertisers to reach ‘Jew haters’. ProPublica, September 14. https://www.propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters. Accessed 2 March 2019
Barocas S, Selbst AD (2016) Big data’s disparate impact. California Law Review 104:671–732
Binns R (2017) Fairness in machine learning: lessons from political philosophy. arXiv:1712.03586 [cs], December 10. http://arxiv.org/abs/1712.03586. Accessed 1 August 2019
Binns R (2018) Algorithmic accountability and public reason. Philosophy & Technology 31(4):543–556. https://doi.org/10.1007/s13347-017-0263-5
Calo R, Rosenblat A (2017) The taking economy: Uber, information, and power. Columbia Law Review 117(6):1623–1690
Ceglowski M (2016) The moral economy of tech. Presented at the Moral Economies, Economic Moralities Conference, Society for the Advancement of Socio-Economics, Berkeley, CA, June 24 2016. https://sase.org/event/2016-berkeley/. Accessed 16 November 2018
Citron DK (2008) Technological due process. Wash Univ Law Rev 85(6):1249–1313
Clark L (2015) Uber denies researchers’ ‘phantom cars’ map claim. Wired UK, July 28. https://www.wired.co.uk/article/uber-cars-always-in-real-time. Accessed 20 April 2017
Coeckelbergh M, Wackers G (2007) Imagination, distributed responsibility and vulnerable technological systems: the case of Snorre A. Sci Eng Ethics 13(2):235–248
De Santoni Sio F, van den Hoven J (2018) Meaningful human control over autonomous systems: a philosophical account. Frontiers in Robotics and AI 5:15. https://doi.org/10.3389/frobt.2018.00015
Dennett DC (1997) When HAL kills, who’s to blame? Computer ethics. In: Stork DG (ed) HAL’s legacy: 2001’s computer as dream and reality. MIT Press, Cambridge, Mass, pp 351–365
Duff RA (1998) Responsibility. Routledge Encyclopedia of Philosophy. https://doi.org/10.4324/9780415249126-L085-1
Duhigg C (2012) How companies learn your secrets. The New York Times, February 16, sec. Magazine. http://www.nytimes.com/2012/02/19/magazine/shopping-habits.html. Accessed 22 January 2019
Elish MC (2019) Moral crumple zones: cautionary tales in human-robot interaction. Engag Sci Technol Soc 5:40–60
Eubanks V (2018) Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press, New York
Facebook (2018) Community standards enforcement. May 2018. https://transparency.facebook.com/community-standards-enforcement. Accessed 2 March 2019
Fischer JM, Ravizza MSJ (2000) Responsibility and control: a theory of moral responsibility. Cambridge University Press, Cambridge, U.K.; New York
Goodin RE (1986) Responsibilities. Philos Q 36(142):50–56
Goodin RE (1987) Apportioning responsibilities. Law Philos 6(August):167–185. https://doi.org/10.1007/BF00145427
Harford T (2014) Big data: are we making a big mistake? Financial Times. March 28. https://www.ft.com/content/21a6e7d8-b479-11e3-a09a-00144feabdc0. Accessed 1 March 2019
Hart HLA (1968) Punishment and responsibility; essays in the philosophy of law. Oxford University Press, New York
Houston Federation of Teachers, Local 2415 v. Houston Independent School District (S.D. Tex.) (n.d.) Defendant’s Original Answer and Defenses
Houston Independent School District (2015) EVAAS/Value-Added Frequently Asked Questions. http://static.battelleforkids.org/documents/HISD/EVAAS-Value-Added-FAQs-Final-2015-02-02.pdf. Accessed 11 October 2018
Isaac M (2017) How Uber deceives the authorities worldwide. The New York Times, March 3, sec. Technology. https://www.nytimes.com/2017/03/03/technology/uber-greyball-program-evade-authorities.html. Accessed 11 October 2018
Isenberg E, Heinrich H (2012) Measuring school and teacher value added in DC, 2011-2012 school year. Final report. Mathematica Policy Research, Inc. https://eric.ed.gov/?id=ED565712. Accessed 30 December 2018
Kaminski ME (2019) The right to explanation explained. Berkeley Technology Law Journal 34(1):189–218
Keller D (2018) The right tools: Europe’s intermediary liability Laws and the EU 2016 general data protection regulation. Berkeley Technology Law Journal 33(1):287–364
Kutz C (2004) Responsibility. In: Coleman J, Shapiro S, Himma KE (eds) The Oxford handbook of jurisprudence and philosophy of law, pp 548–587
Levin S (2019) Facebook cracks down on discriminatory ads after years of backlash. The Guardian, March 19, sec. Technology. https://www.theguardian.com/technology/2019/mar/19/facebook-advertising-discrimination-lawsuit-aclu-race-gender. Accessed 21 March 2019
Levin S, Wong JC (2018) Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian. The Guardian, March 19, sec. Technology. https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe. Accessed 21 March 2019
Matthias A (2004) The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics Inf Technol 6(3):175–183. https://doi.org/10.1007/s10676-004-3422-1
Mittelstadt BD, Allo P, Taddeo M, Wachter S, Floridi L (2016) The ethics of algorithms: mapping the debate. Big Data & Society 3(2). https://doi.org/10.1177/2053951716679679
Nissenbaum H (1994) Computing and accountability. Communications of the Association for Computing Machinery 37(1):72–80
Noble SU (2018) Algorithms of oppression: how search engines reinforce racism. New York University Press, New York
O’Neil C (2016) Weapons of math destruction: how big data increases inequality and threatens democracy. Crown, New York
Oremus W, Carey B (2017) Facebook let advertisers target ‘Jew-haters.’ It Doesn’t end there. Slate, September 14. https://slate.com/technology/2017/09/facebook-let-advertisers-target-jew-haters-it-doesnt-end-there.html. Accessed 13 December 2018
Oshana MAL (1997) Ascriptions of responsibility. Am Philos Q 34(1):71–83
Pasquale F (2015) The black box society: the secret algorithms that control money and information. Harvard University Press, Cambridge, Massachusetts
Rosenblat A (2018) Uberland: how algorithms are rewriting the rules of work. University of California Press, Oakland, California
Rubel A, Castro C, Pham A (2019) Agency Laundering and Algorithmic Decision Systems. In: Taylor N, Christian-Lamb C, Martin M, Nardi B (eds.) Proceedings of the 2019 iConference, Information in Contemporary Society (Lecture Notes in Computer Science). Springer Nature, pp 590-598
Sandberg S (2017) Last week we temporarily disabled some of our ads tools. Facebook. September 20. https://www.facebook.com/sheryl/posts/10159255449515177. Accessed 11 December 2018
Sanders WL, Wright SP, Rivers JC, Leandro JG (2009) Addressing common concerns about value-added modeling. SAS. https://www.sas.com/content/dam/SAS/en_us/doc/whitepaper1/value-added-modeling-107101.pdf. Accessed 22 December 2018
Scanlon T (2008) Moral dimensions: permissibility, meaning, blame. Belknap Press of Harvard University Press, Cambridge, Mass
Scheiber N (2017) How Uber uses psychological tricks to push its drivers’ buttons. The New York Times, April 2, sec. Technology. https://www.nytimes.com/interactive/2017/04/02/technology/uber-drivers-psychological-tricks.html. Accessed 11 March 2019
Selbst A, Powles J (2017) Meaningful information and the right to explanation. International Data Privacy Law 7(4):233–242
Shoemaker D (2011) Attributability, answerability, and accountability: toward a wider theory of moral responsibility. Ethics 121(3):602–632
Smith AM (2012) Attributability, answerability, and accountability: in defense of a unified account. Ethics 122(3):575–589
Strawson P (1962) Freedom and resentment. Proceedings of the British Academy 48:1–25
Sweeney L (2013) Discrimination in online ad delivery. arXiv:1301.6822 [cs], January. http://arxiv.org/abs/1301.6822. Accessed 15 October 2018
Vincent NA (2011) A structured taxonomy of responsibility concepts. In: Vincent NA, van de Poel I, van den Hoven J (eds) Moral Responsibility: Beyond Free Will and Determinism. Springer, Dordrecht, Netherlands, pp 15–35. https://doi.org/10.1007/978-94-007-1878-4_2
Wachter S, Mittelstadt B, Floridi L (2017) Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law 7(2):76–99
Wagner B (2019) Liable, but not in control? Ensuring meaningful human agency in automated decision-making systems. Policy Internet 11(1):104–122
Williams G (2008) Responsibility as a virtue. Ethical Theory Moral Pract 11(4):455–470
Wisconsin v. Loomis (2016) 2016 WI 68, 881 N.W.2d 749
Acknowledgements
We would like to thank audiences at Eindhoven University of Technology, Delft University of Technology, the University of Twente, the Zicklin Center for Normative Business Ethics at the Wharton School of the University of Pennsylvania, the iConference, and the Privacy Law Scholars Conference. In particular, we would like to thank Richard Warner, Filippo Santoni de Sio, Sven Nyholm, Owen King, Philip Jansen, Suresh Venkatasubramanian, and three anonymous reviewers for this journal for their enormously helpful comments.
Ethics declarations
Disclaimers
We presented a paper discussing the idea of agency laundering at the 2019 iConference (March 31 – April 4, 2019, Washington, D.C.) which appears in the conference proceedings (Rubel et al. 2019). The account of agency laundering in this paper is expanded and has changed substantially from the account in those proceedings. Most importantly, the conference paper did not tie agency laundering to an account of responsibility, and instead invoked concepts of de facto and de jure power. The conference paper did not include many of the examples used here (e.g., Democratic Chair, Uber, HISD). The conference paper did discuss the Facebook and Loomis examples, but the treatment has changed in accord with the changes to the account of laundering.
Cite this article
Rubel, A., Castro, C. & Pham, A. Agency Laundering and Information Technologies. Ethic Theory Moral Prac 22, 1017–1041 (2019). https://doi.org/10.1007/s10677-019-10030-w