The Threat of Algocracy: Reality, Resistance and Accommodation

Abstract

One of the most noticeable trends in recent years has been the increasing reliance of public decision-making processes (bureaucratic, legislative and legal) on algorithms, i.e. computer-programmed step-by-step instructions for taking a given set of inputs and producing an output. The question raised by this article is whether the rise of such algorithmic governance creates problems for the moral or political legitimacy of our public decision-making processes. Ignoring common concerns with data protection and privacy, it is argued that algorithmic governance does pose a significant threat to the legitimacy of such processes. Modelling my argument on Estlund’s threat of epistocracy, I call this the ‘threat of algocracy’. The article clarifies the nature of this threat and addresses two possible solutions (named, respectively, ‘resistance’ and ‘accommodation’). It is argued that neither solution is likely to be successful, at least not without risking many other things we value about social decision-making. The result is a somewhat pessimistic conclusion in which we confront the possibility that we are creating decision-making processes that constrain and limit opportunities for human participation.

Notes

  1.

    The possibility of an intelligent AI controlling the world is explored at length in Bostrom 2014.

  2.

    I add ‘computer programmed’ here since algorithms are, in effect, recipes or step-by-step instructions for deriving outputs from a set of inputs. As such, algorithms do not need to be implemented by some computer architecture, but I limit interest to computer-programmed variants because the threat of algocracy is acutely linked to the data revolution (Kitchin 2014a).
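    An algorithm in this sense is just such a recipe, whether or not a computer runs it. A minimal sketch in Python (the function, its name and its weights are invented purely for illustration, not drawn from any real decision-support system):

    ```python
    def risk_score(age: int, prior_offences: int) -> float:
        """A toy 'recipe': fixed steps mapping a set of inputs to an output.

        The weights are illustrative only.
        """
        raw = 0.1 * prior_offences - 0.002 * max(age - 18, 0)
        # Clamp the result into [0, 1] so it reads as a probability-like score.
        return max(0.0, min(1.0, 0.5 + raw))
    ```

    The point is only that the mapping from inputs to output is fixed and mechanical; the data-mined systems discussed in the article learn such mappings from large datasets rather than having them hand-written, which is where the opacity problem arises.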

  3.

    Dormehl gives some striking illustrations of automated bureaucratic systems, e.g. the facial-recognition algorithm used to revoke driving licences in Massachusetts (Dormehl 2014, 157–58).

  4.

    There are also connections here with Lessig’s work (1999 and 2006) on code as a type of regulatory architecture. Lessig is concerned primarily with who owns and controls that architecture; I am concerned with ways in which that architecture facilitates a lack of transparency in public decision-making.

  5.

    Debates about other systems, e.g. automated cars and weapon systems, can raise other moral and political issues.

  6.

    For an overview, see the Stanford Law Review symposium issue on Privacy and Big Data, available at: http://www.stanfordlawreview.org/online/privacy-and-big-data (visited 10/4/14).

  7.

    The Edward Snowden controversy being, perhaps, the most conspicuous example of this.

  8.

    For example, the European directive on this is Directive 95/46/EC.

  9.

    Case C-293/12 (joined with Case C-594/12), Digital Rights Ireland Ltd v. Minister for Communications, Marine and Natural Resources, and Ors, 8 April 2014.

  10.

    Ibid., para. 65.

  11.

    There may also, of course, be a connection here with a more substantive conception of justice (Ceva 2012).

  12.

    I am not sure that there are any pure instrumentalists, but those who endorse an epistemic theory of democracy certainly emphasise this virtue (Estlund 2008; List & Goodin 2001).

  13.

    The oddness reflects arguments in the consequentialist/deontologist debate in ethics.

  14.

    A classic example would be if the sub-population satisfies the conditions for the Condorcet Jury Theorem or one of its extrapolations (e.g. List & Goodin 2001).

  15.

    This is a reference to the work of Michael Polanyi (1966).

  16.

    Estlund offers alternative arguments for thinking that epistocracies are politically problematic. These have to do with reasonable rejection on the grounds of suspicion of the epistemic elite. I ignore those arguments here since they tie into his conflation of epistocracy with rule by a stable group of generally superior human agents.

  17.

    Morozov (2013)—see the subsection entitled ‘Even programmes that seem innocuous can undermine democracy’ for this quote.

  18.

    The society that worries Morozov is no imaginative dystopia. It is actively pursued by some: see Pentland (2014).

  19.

    I take this illustration from the artist James Bridle who uses it in some of his talks. See http://shorttermmemoryloss.com/ for more.

  20.

    For the time being anyway. It is likely that, in the future, robot workers will take over such systems. Amazon already works with Kiva robots in some warehouses. See http://www.youtube.com/watch?v=3UxZDJ1HiPE (visited 1/3/15) for a video illustration.

  21.

    For example, neural network models are widely recognized as having an interpretability problem. See, for example, the discussion in Miner et al. 2014, 249.

  22.

    It is also worth noting that ‘interpretability’, for many working in this field, seems to mean ‘interpretability by appropriately trained peers’. This would be insufficient for political purposes.

  23.

    I would like to thank an anonymous reviewer for encouraging further discussion of this issue.

  24.

    I am indebted to DI for pressing me on this point. This reduction would raise similar kinds of concerns to those animating Lessig in his classic works on the topic (1999 & 2006).

  25.

    A stark example of this is the Pavlok, a technology which uses basic principles of psychological conditioning to encourage behavioural change. See http://pavlok.com—note how the website promises to ‘break bad habits in five days’.

  26.

    Directive 95/46/EC, Art. 15.3.

  27.

    David Brin, one of the chief proponents of sousveillance, has explicitly argued for this in response to Morozov’s worries about the threat to democracy posed by algocratic control (reference omitted for anonymity).

  28.

    Of course, there may be some processing whenever sousveillance technologies record digital and audio information, but that is not the kind of processing and sorting that would be made possible if humans had their own mining algorithms.

  29.

    See, generally, http://quantifiedself.com; Thompson (2013) also discusses the phenomenon. The story of Chris Dancy, a Denver-based IT executive who is known as the world’s ‘most connected man’, might also be instructive. Dancy wears up to ten data-collection devices on his person every day, in addition to other non-wearable devices. He claims that this has greatly improved his life. See http://www.dw.de/worlds-most-connected-man-finds-better-life-through-data/a-17600597 for an interview with him (accessed 1/3/15).

  30.

    This is the vision of transhumanists like Ray Kurzweil who seek to saturate the cosmos with our intelligence, i.e. to make everything in the universe an extension of and input into our cognitive processes (Kurzweil 2006, 29).

References

  1. Agar, N. (2013). Truly human enhancement. Cambridge, MA: MIT Press.

  2. Ali, M. A., & Mann, S. (2013). The inevitability of the transition from a surveillance society to a veillance society: moral and economic grounding for sousveillance. IEEE International Symposium on Technology and Society (ISTAS), 243–254. Available at http://wearcam.org/veillance/IEEE_ISTAS13_Veillance2_Ali_Mann.pdf (accessed 31/7/14).

  3. Aneesh, A. (2006). Virtual migration. Durham, NC: Duke University Press.

  4. Aneesh, A. (2009). Global labor: algocratic modes of organization. Sociological Theory, 27(4), 347–370.

  5. Andrejevic, M. (2014). The big data divide. International Journal of Communication, 8, 1673–1689.

  6. Besson, S., & Marti, J. L. (2006). Deliberative democracy and its discontents. London: Ashgate.

  7. Bishop, M., & Trout, J. D. (2002). 50 years of successful predictive modeling should be enough: lessons for philosophy of science. Philosophy of Science, 69(supplement, PSA 2000 Symposium Papers), S197–S208.

  8. Bostrom, N. (2014). Superintelligence: paths, dangers, strategies. Oxford: OUP.

  9. Brin, D. (1997). The transparent society. New York: Basic Books.

  10. Brynjolfsson, E., & McAfee, A. (2011). Race against the machine. Lexington, MA: Digital Frontiers Press.

  11. Brynjolfsson, E., & McAfee, A. (2014). The second machine age: work, progress, and prosperity in a time of brilliant technologies. New York: WW Norton.

  12. Bumbulsky, J. (2013). Chaotic storage lessons. Medium (available at https://medium.com/tech-talk/e3b7de266476 - accessed 1/3/15).

  13. Ceva, E. (2012). Beyond legitimacy: can proceduralism say anything relevant about justice? Critical Review of International Social and Political Philosophy, 15, 183.

  14. Chase Lipton, Z. (2015). The myth of model interpretability. KD Nuggets News, 15(n3). Available at http://www.kdnuggets.com/2015/04/model-interpretability-neural-networks-deep-learning.html

  15. Citron, D. (2010). Technological due process. Washington University Law Review, 85, 1249.

  16. Citron, D., & Pasquale, F. (2014). The scored society: due process for automated predictions. Washington Law Review, 86, 101.

  17. Clark, A. (2010). Supersizing the mind. Oxford: OUP.

  18. Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58, 7–19.

  19. Cowen, T. (2013). Average is over: powering America beyond the age of the great stagnation. New York: Dutton.

  20. Crawford, K., & Schultz, J. (2014). Big data and due process: towards a framework to redress predictive privacy harms. Boston College Law Review, 55, 93.

  21. Danaher, J. (2013). On the need for epistemic enhancement: democratic legitimacy and the enhancement project. Law, Innovation and Technology, 5(1), 85.

  22. Estlund, D. (1993). Making truth safe for democracy. In D. Copp, J. Hampton, & J. Roemer (Eds.), The idea of democracy. Cambridge: Cambridge University Press.

  23. Estlund, D. (2003). Why not Epistocracy? In Naomi Reshotko (ed) Desire, Identity, and Existence: Essays in Honour of T.M. Penner. Academic Printing and Publishing

  24. Estlund, D. (2008). Democratic authority. Princeton: Princeton University Press.

  25. Gaus, G. 2010. The order of public reason. Cambridge University Press

  26. Greenfield, R. (2012). Inside the method to Amazon's beautiful warehouse madness. The Wire (available at http://www.thewire.com/technology/2012/12/inside-method-amazons-beautiful-warehouse-madness/59563/ - accessed 1/3/15).

  27. Grove, W., & Meehl, P. E. (1996). Comparative efficiency of informal (subjective, impressionistic) and formal (mechanical, algorithmic) prediction procedures: the clinical statistical controversy. Psychology, Public Policy, and Law, 2, 293–323.

  28. Habermas, J. (1990). Discourse ethics: notes on a program of philosophical justification. In Moral Consciousness and Communicative Action. Trans. Christian Lenhart and Shierry Weber Nicholson. Cambridge, MA: MIT Press.

  29. Kellermeit, D. and Obodovski, D. (2013). The Silent Intelligence: The Internet of Things. DND Ventures LLC

  30. Kitchin, R. (2014a). The data revolution: big data, open data, data infrastructures and their consequences. London: Sage.

  31. Kitchin, R. (2014b). Thinking critically about researching algorithms. The Programmable City Working Paper 5 – available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2515786

  32. Kitchin, R., & Dodge, M. (2011). Code/space: software and everyday life. Cambridge, MA: MIT Press.

  33. Kurzweil, R. (2006). The singularity is near. London: Penguin Books.

  34. Lessig, L. (1999). Code and other laws of cyberspace. New York: Basic Books.

  35. Lessig, L. (2006). Code 2.0. New York: Basic Books

  36. Lippert-Rasmussen, K. (2012). Estlund on epistocracy: a critique. Res Publica, 18(3), 241–258.

  37. Lipschulz, R. and Hester, R. (2014). We are the Borg! Human Assimilation into Cellular Society. In Michael and Michael (eds). Uberveillance and the Social Implications of Microchip Implantation. IGI-Global

  38. Lisboa, P. (2013). Interpretability in machine learning: principles and practice. In Masulli, F, Pasi, G and Yager, R (eds) Fuzzy Logic and Applications (Dordrecht: Springer, 2013)

  39. List, C., & Goodin, R. (2001). Epistemic democracy: generalizing the Condorcet Jury Theorem. Journal of Political Philosophy, 9, 277.

  40. Machin, D. (2009). The irrelevance of democracy to the public justification of political authority. Res Publica, 15, 103.

  41. Mann, S. (2013). Veillance and reciprocal transparency: surveillance versus sousveillance, AR glass, lifeglogging, and wearable computing. Available at http://wearcam.org/veillance/veillance.pdf (accessed 1/3/15).

  42. Mann, S., Nolan, J., & Wellman, B. (2003). Sousveillance: inventing and using wearable computing devices for data collection in surveillance environments. Surveillance and Society, 3, 331–355.

  43. Mayer-Schonberger, V., & Cukier, K. (2013). Big data: a revolution that will transform how we live, work and think. London: John Murray.

  44. Meehl, P. E. (1996). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence (pp. v–xii). Lanham, MD: Rowan & Littlefield/Jason Aronson. (Original work published 1954)

  45. Miner, L et al. (2014). Practical Predictive Analytics and Decisioning-Systems for Medicine. Academic Press

  46. Mittelstadt, B D, and Floridi, L. (2015). The ethics of big data: current and foreseeable issues in biomedical contexts. Science and Engineering Ethics. DOI: 10.1007/s11948-015-9652-2

  47. Morozov, E. (2013). The real privacy problem. MIT Technology Review (available at: http://www.technologyreview.com/featuredstory/520426/the-real-privacy-problem/ - accessed 1/3/15)

  48. Otte, C. (2013). Safe and interpretable machine learning: a methodological review. In C. Moewes & A. Nurnberger (Eds.), Computational Intelligence in Intelligent Data Analysis. Dordrecht: Springer.

  49. Patterson, S. (2013). Dark pools: the rise of AI trading machines and the looming threat to Wall Street. Random House.

  50. Pentland, A. (2014). Social Physics. London: Penguin Press

  51. Peter, F. (2008). Pure epistemic proceduralism. Episteme, 5, 33.

  52. Peter, F. (2014). Political Legitimacy. In Edward N. Zalta (ed) The Stanford Encyclopedia of Philosophy Spring 2014 Edition -- available at http://plato.stanford.edu/archives/spr2014/entries/legitimacy/

  53. Polanyi, M. (1966). The tacit dimension. New York: Doubleday.

  54. Rifkin, J. (2014). The zero marginal cost society: the internet of things, the collaborative commons and the eclipse of capitalism. Palgrave Macmillan.

  55. Seaver, N. (2013). Knowing algorithms. In Media in Transition 8, Cambridge MA

  56. Siegel, E. (2013). Predictive analytics: the power to predict who will click, buy, lie or die. John Wiley and Sons

  57. Slater, D. (2013). Love in a time of algorithms. Current.

  58. Thompson, C. (2013). Smarter than you think: how technology is changing our minds for the better. London: William Collins.

  59. Vellido, A, Martín-Guerrero, J. and Lisboa, P. (2012). Making machine learning models interpretable. Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning

  60. Zarsky, T. (2011). Governmental data-mining and its alternatives. Penn State Law Review, 116, 285.

  61. Zarsky, T. (2012). Automated predictions: perception, law and policy. Communications of the ACM, 15(9), 33–35.

  62. Zarsky, T. (2013). Transparent prediction. University of Illinois Law Review, 4, 1504.

  63. Zeng, J, Ustun, B and Rudin, C. (2015). Interpretable Classification Models for Recidivism Prediction. MIT Working Paper, available at http://arxiv.org/pdf/1503.07810v2.pdf

Acknowledgments

The author would like to thank audiences at Exeter and Maynooth Universities, and two anonymous referees for feedback on earlier drafts of this paper.

Author information

Corresponding author

Correspondence to John Danaher.

Ethics declarations

Ethical Statement

The author declares no conflicts of interest. Research for this paper was not funded, nor did it involve human or animal subjects.

About this article

Cite this article

Danaher, J. The Threat of Algocracy: Reality, Resistance and Accommodation. Philos. Technol. 29, 245–268 (2016). https://doi.org/10.1007/s13347-015-0211-1

Keywords

  • Algocracy
  • Epistocracy
  • Big data
  • Data mining
  • Legitimacy
  • Human enhancement