Legitimacy and automated decisions: the moral limits of algocracy

  • Original Paper
  • Published in Ethics and Information Technology

Abstract

With the advent of automated decision-making, governments have increasingly come to rely on artificially intelligent algorithms to inform policy decisions across a range of domains of government interest and influence. The practice has not gone unnoticed by philosophers worried about “algocracy” (rule by algorithm) and its ethical and political impacts. One of the chief issues of ethical and political significance raised by algocratic governance, so the argument goes, is the lack of transparency of algorithms.

One of the best-known philosophical analyses of algocracy is John Danaher’s “The threat of algocracy” (2016), which argues that government by algorithm undermines political legitimacy. In this paper, I treat Danaher’s argument as a springboard for raising additional questions about the connections between algocracy, comprehensibility, and legitimacy, especially in light of empirical results about what we can expect voters and policymakers to know.

The paper has the following structure: in Sect. 2, I introduce the basics of Danaher’s argument regarding algocracy. In Sect. 3, I argue that the algocratic threat to legitimacy has troubling implications for social justice. In Sect. 4, I argue that, nevertheless, there seem to be good reasons for governments to rely on algorithmic decision support systems. Lastly, I try to resolve the apparent tension between the findings of the two preceding sections.


Availability of Data and Material

n/a

Code Availability

n/a

Notes

  1. Accordingly, the conclusion of the amended argument would be something to the effect that (6’) Prima facie, under algocracy, the legitimacy of governments’ decisions is diminished.

  2. Given that marginalized communities are in fact (sometimes intentionally) targeted by algorithm-driven policies, this makes the problem more acute.

  3. See Danziger et al. (2011) for evidence that judicial decisions tend to be more lenient just after food breaks, and harsher the more time has passed since the most recent food break.

  4. See Eren & Mocan (2018) for evidence that juvenile court judges’ sentences are harsher after their local football team unexpectedly loses a game.

  5. See Stewart (1980) and Downs & Lyons (1991) for evidence that the defendant’s attractiveness influences sentencing length.

  6. See Englich et al. (2006) and the review by Peer & Gamliel (2013) for evidence of judges’ thinking being influenced by cognitive shortcuts and biases. Also note that all the research cited in footnotes 3–6 concerns legal professionals (especially judges), not the general population.

  7. See e.g. Lemieux (2004) for a brief introduction to the way of thinking about public officials as self-interested utility maximizers rather than as exclusively concerned with the pursuit of the common good.

  8. On experts, see Cassidy & Buede (2009) and, in general, Koppl (2018). On policymakers’ cognitive and motivational errors, see in general Cairney & Kwiatkowski (2017), and Houghton (2008) and Yetiv (2013) for research on biases in the context of foreign policy decision-making.

  9. Nevertheless, there is some controversy about such claims. In a well-publicized article, Dressel & Farid (2018) found that the infamous COMPAS recidivism prediction algorithm is no more accurate in its predictions than a random sample of non-experts. However, though this result was replicated in subsequent work by Lin et al. (2020), those authors also found that changing aspects of the experimental setup reintroduced the machine advantage over humans, and that such new setups were importantly similar to what one can expect in real-world scenarios.

  10. I set aside here the widely discussed issue of fairness of such decisions and assume that the increase in accuracy of judgments is not traded off against more biased decisions.

  11. Some legal scholars embrace this type of legal skill: “Law is not all reasoning and analysis – it is also emotion and judgment and intuition and rhetoric. It includes knowledge that cannot always be explained, but that is no less valid for that [emphasis added]” (Gewirtz, 1995).

  12. See Ebbesen & Konečni (1981) for details.

  13. See Ebbesen & Konečni (1975) for details.

  14. See Raine & Willson (1995) for details.

  15. Moreover, as Schwartzman (2008) catalogs, surprisingly many legal scholars have advocated the view that judges ought sometimes to conceal the real reasons for their decisions from the public, a position sometimes explicitly justified by appeal to maintaining the judiciary’s legitimacy (e.g., Idleman, 1994). However, if such arguments were sound, they could also apply to algorithmic decision-making, where the real reasons for some decision could remain obscured and insincere reasons could be provided instead.

  16. A similar point about a “double standard” with regard to transparency has been made by Zerilli et al. (2019). See also Robbins (2019), and some arguments in Zarsky (2013), for different types of skepticism about the transparency ideal.

  17. Interestingly, this conclusion suggests that when real-world policymakers and enforcers insist on transparency to the detriment of other objectives (as some of those interviewed by Veale et al. (2018) do), they aren’t necessarily doing the right thing.

References

  • Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1). https://doi.org/10.1177/2053951715622512

  • Cadigan, T. P., & Lowenkamp, C. T. (2011). Implementing risk assessment in the federal pretrial services system. Federal Probation, 75(2), 30–38


  • Cairney, P., & Kwiatkowski, R. (2017). How to communicate effectively with policymakers: Combine insights from psychology and policy studies. Palgrave Communications, 3(1), 37. https://doi.org/10.1057/s41599-017-0046-8

  • Caplan, B. D. (2007). The myth of the rational voter: why democracies choose bad policies. Princeton: Princeton University Press


  • Caplan, B. D. (2018). The case against education: why the education system is a waste of time and money. Princeton, New Jersey: Princeton University Press


  • Cassidy, M. F., & Buede, D. (2009). Does the accuracy of expert judgment comply with common sense. Management Decision, 47(3), 454–469. https://doi.org/10.1108/00251740910946714

  • Christin, A., Rosenblat, A., & Boyd, D. (2015). Courts and predictive algorithms. Data & civil rights: A new era of policing and justice, 1–13. Retrieved from https://datasociety.net/wp-content/uploads/2015/10/Courts_and_Predictive_Algorithms.pdf

  • Danaher, J. (2016). The Threat of Algocracy: Reality, Resistance and Accommodation. Philosophy & Technology, 29(3), 245–268. https://doi.org/10.1007/s13347-015-0211-1

  • Danziger, S., Levav, J., & Avnaim-Pesso, L. (2011). Extraneous factors in judicial decisions. Proceedings of the National Academy of Sciences, 108(17), 6889–6892

  • Downs, A. C., & Lyons, P. M. (1991). Natural Observations of the Links between Attractiveness and Initial Legal Judgments. Personality and Social Psychology Bulletin, 17(5), 541–547. https://doi.org/10.1177/0146167291175009

  • Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580. https://doi.org/10.1126/sciadv.aao5580

  • Ebbesen, E. B., & Konečni, V. J. (1975). Decision making and information integration in the courts: The setting of bail. Journal of Personality and Social Psychology, 32(5), 805


  • Ebbesen, E. B., & Konečni, V. J. (1981). The process of sentencing adult felons. In B. D. Sales (Ed.), The trial process (pp. 413–458). Springer

  • Englich, B., Mussweiler, T., & Strack, F. (2006). Playing dice with criminal sentences: The influence of irrelevant anchors on experts’ judicial decision making. Personality and Social Psychology Bulletin, 32(2), 188–200


  • Eren, O., & Mocan, N. (2018). Emotional judges and unlucky juveniles. American Economic Journal: Applied Economics, 10(3), 171–205


  • Estlund, D. M. (2008). Democratic authority: a philosophical framework. Princeton, N.J.: Princeton University Press


  • Eubanks, V. (2017). Automating inequality: how high-tech tools profile, police, and punish the poor (First Edition). New York, NY: St. Martin’s Press

  • Fink, K. (2018). Opening the government’s black boxes: freedom of information and algorithmic accountability. Information, Communication & Society, 21(10), 1453–1471


  • Garb, H. N., & Wood, J. M. (2019). Methodological advances in statistical prediction. Psychological assessment, 31(12), 1456


  • Gewirtz, P. (1995). On ‘I Know It When I See It’. Yale Law Journal, 105(4), 1023–1048. Retrieved from https://heinonline.org/HOL/P?h=hein.journals/ylr105&i=1057

  • Goodman-Delahunty, J., & Sporer, S. L. (2010). Unconscious influences in sentencing decisions: A research review of psychological sources of disparity. Australian Journal of Forensic Sciences, 42(1), 19–36. https://doi.org/10.1080/00450610903391440

  • Green, B., & Chen, Y. (2019). Disparate interactions: An algorithm-in-the-loop analysis of fairness in risk assessments. Paper presented at the Proceedings of the conference on fairness, accountability, and transparency

  • Grgić-Hlača, N., Engel, C., & Gummadi, K. P. (2019). Human decision making with machine assistance: An experiment on bailing and jailing. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–25

  • Houghton, D. P. (2008). Invading and occupying Iraq: Some insights from political psychology. Peace and Conflict, 14(2), 169–192


  • Idleman, S. C. (1994). Prudential Theory of Judicial Candor. Texas Law Review, 73(6), 1307–1418. Retrieved from https://heinonline.org/HOL/P?h=hein.journals/tlr73&i=1325

  • Koppl, R. (2018). Expert failure (1st ed.). New York: Cambridge University Press

  • Krek, A. (2005). Rational ignorance of the citizens in public participatory planning. Paper presented at the 10th symposium on Information-and communication technologies (ICT) in urban planning and spatial development and impacts of ICT on physical space, CORP

  • Lemieux, P. (2004). The public choice revolution. Regulation, 27, 22


  • Lin, Z., Jung, J., Goel, S., & Skeem, J. (2020). The limits of human predictions of recidivism. Science Advances, 6(7), eaaz0652. https://doi.org/10.1126/sciadv.aaz0652

  • Peer, E., & Gamliel, E. (2013). Heuristics and Biases in Judicial Decisions. Court Review, 49(2), 114–119. Retrieved from https://heinonline.org/HOL/P?h=hein.journals/ctrev49&i=114

  • Raine, J. W., & Willson, M. J. (1995). Conditional Bail Or Bail with Conditions?: The Use and Effectiveness of Bail Conditions. Institute of Local Government Studies, the University of Birmingham

  • Robbins, S. (2019). A misdirected principle with a catch: explicability for AI. Minds and Machines, 29(4), 495–514


  • Schwartzman, M. (2008). Judicial sincerity. Virginia Law Review, 987–1027

  • Somin, I. (2015). Rational ignorance. Routledge international handbook of ignorance studies, 274–281

  • Stewart, J. E. (1980). Defendant’s Attractiveness as a Factor in the Outcome of Criminal Trials: An Observational Study. Journal of Applied Social Psychology, 10(4), 348–361


  • Veale, M., Van Kleek, M., & Binns, R. (2018). Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. Paper presented at the Proceedings of the 2018 chi conference on human factors in computing systems

  • Wachter, S., Mittelstadt, B., & Russell, C. (2018). Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Harvard Journal of Law & Technology, 31(2), 841


  • Yetiv, S. A. (2013). National security through a cockeyed lens: How cognitive bias impacts US foreign policy. JHU Press

  • Zarsky, T. Z. (2013). Transparent Predictions. University of Illinois Law Review, 2013(4), 1503–1570. Retrieved from https://heinonline.org/HOL/P?h=hein.journals/unilllr2013&i=1537

  • Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2019). Transparency in algorithmic and human decision-making: is there a double standard? Philosophy & Technology, 32(4), 661–683



Acknowledgements

I am grateful to Daan Kolkman and Anthony Skelton for valuable comments about the manuscript.

Funding

n/a

Author information

Corresponding author

Correspondence to Bartek Chomanski.

Ethics declarations

Conflicts of Interest/Competing Interests

n/a

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Chomanski, B. Legitimacy and automated decisions: the moral limits of algocracy. Ethics Inf Technol 24, 34 (2022). https://doi.org/10.1007/s10676-022-09647-w

