Abstract
One of the most noticeable trends in recent years has been the increasing reliance of public decision-making processes (bureaucratic, legislative and legal) on algorithms, i.e. computer-programmed step-by-step instructions for taking a given set of inputs and producing an output. The question raised by this article is whether the rise of such algorithmic governance creates problems for the moral or political legitimacy of our public decision-making processes. Setting aside common concerns with data protection and privacy, I argue that algorithmic governance does pose a significant threat to the legitimacy of such processes. Modelling my argument on Estlund’s threat of epistocracy, I call this the ‘threat of algocracy’. The article clarifies the nature of this threat and addresses two possible solutions (named, respectively, ‘resistance’ and ‘accommodation’). I argue that neither solution is likely to be successful, at least not without risking many other things we value about social decision-making. The result is a somewhat pessimistic conclusion in which we confront the possibility that we are creating decision-making processes that constrain and limit opportunities for human participation.
Notes
The possibility of an intelligent AI controlling the world is explored at length in Bostrom 2014.
I add ‘computer-programmed’ here since algorithms are, in effect, recipes or step-by-step instructions for deriving outputs from a set of inputs. As such, algorithms do not need to be implemented by some computer architecture, but I limit my attention to computer-programmed variants because the threat of algocracy is acutely linked to the data revolution (Kitchin 2014a).
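By way of illustration of the definition just given, here is a minimal sketch in Python of an algorithm in this sense: a fixed, step-by-step procedure that takes a set of inputs and produces an output. The scenario and field names ('income', 'dependants') are hypothetical, chosen only to evoke the kind of rule a bureaucratic system might automate; they are not drawn from any system discussed in the article.

```python
def prioritise_applications(applications):
    """A toy, step-by-step rule mapping inputs (application records) to an
    output (a ranked list). The scoring formula is entirely hypothetical."""
    scored = []
    for app in applications:                       # step 1: score each input
        score = app["dependants"] * 10 - app["income"] / 1000
        scored.append((score, app["name"]))
    scored.sort(reverse=True)                      # step 2: rank by score
    return [name for _, name in scored]            # step 3: produce the output

# Example inputs and the resulting ranking:
print(prioritise_applications([
    {"name": "A", "income": 20000, "dependants": 2},
    {"name": "B", "income": 35000, "dependants": 3},
]))  # -> ['A', 'B']
```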
Dormehl gives some striking illustrations of bureaucratic systems that are automated, e.g. the facial-recognition system used to revoke driving licences in Massachusetts (Dormehl 2014, 157–58).
There are also connections here with Lessig’s work (1999 and 2006) on code as a type of regulatory architecture. Lessig is concerned primarily with who owns and controls that architecture; I am concerned with ways in which that architecture facilitates a lack of transparency in public decision-making.
Debates about other systems, e.g. automated cars and weapon systems, can raise other moral and political issues.
For an overview, see the Stanford Law Review symposium issue on Privacy and Big Data. Available at: http://www.stanfordlawreview.org/online/privacy-and-big-data (visited 10/4/14)
The Edward Snowden controversy being, perhaps, the most conspicuous example of this.
For example, the relevant European legislation is Directive 95/46/EC (the Data Protection Directive).
Case C-293/12 (joined with Case C-594/12), Digital Rights Ireland Ltd v. Minister for Communications, Marine and Natural Resources, and Ors, 8 April 2014.
Ibid., para. 65.
There may also, of course, be a connection here with a more substantive conception of justice (Ceva 2012).
The oddness reflects arguments in the consequentialist/deontologist debate in ethics.
A classic example would be if the sub-population satisfies the conditions for the Condorcet Jury Theorem or one of its generalisations (e.g. List and Goodin 2001).
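Since the theorem does some argumentative work in this note, a short numerical illustration may help. The following Python sketch (illustrative only, not drawn from List and Goodin) computes the probability that a strict majority of n independent voters, each with probability p > 0.5 of being correct, reaches the correct decision; it shows how modest individual competence translates into near-certain group competence as n grows.

```python
from math import comb

def majority_correct_probability(n, p):
    """Probability that a strict majority of n independent voters,
    each correct with probability p, selects the correct option
    (using odd n avoids ties)."""
    needed = n // 2 + 1  # smallest strict majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(needed, n + 1))

# With individual competence only slightly better than chance (p = 0.55):
for n in (1, 11, 101, 1001):
    print(n, round(majority_correct_probability(n, 0.55), 4))
# The probability rises from 0.55 towards 1 as the group grows, per the theorem.
```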
This is a reference to the work of Michael Polanyi (1966).
Estlund offers alternative arguments for thinking that epistocracies are politically problematic. These have to do with reasonable rejection on the grounds of suspicion of the epistemic elite. I ignore those arguments here since they tie into his conflation of epistocracy with rule by a stable group of generally superior human agents.
Morozov (2013)—see the subsection entitled ‘Even programmes that seem innocuous can undermine democracy’ for this quote.
The society that worries Morozov is no mere imagined dystopia. It is actively pursued by some: see Alex Pentland (2014).
I take this illustration from the artist James Bridle who uses it in some of his talks. See http://shorttermmemoryloss.com/ for more.
For the time being anyway. It is likely that, in the future, robot workers will take over such systems. Amazon already works with Kiva robots in some warehouses. See http://www.youtube.com/watch?v=3UxZDJ1HiPE (visited 1/3/15) for a video illustration.
For example, neural network models are widely recognized as having an interpretability problem. See, for example, the discussion in Miner et al. 2014, 249.
It is also worth noting that ‘interpretability’, for many working in this field, seems to mean ‘interpretability by appropriately trained peers’. This would be insufficient for political purposes.
I would like to thank an anonymous reviewer for encouraging further discussion of this issue.
A stark example of this is the Pavlok, a technology which uses basic principles of psychological conditioning to encourage behavioural change. See http://pavlok.com—note how the website promises to ‘break bad habits in five days’.
Directive 95/46/EC, Art. 15.3
David Brin, one of the chief proponents of sousveillance, has explicitly argued for this in response to Morozov’s worries about the threat to democracy posed by algocratic control (reference omitted for anonymity)
Of course, there may be some processing whenever sousveillance technologies record digital video and audio information, but that is not the kind of processing and sorting that would be made possible if humans had their own mining algorithms.
See, generally, http://quantifiedself.com; Thompson (2013) also discusses the phenomenon. The story of Chris Dancy, a Denver-based IT executive who is known as the world’s ‘most connected man’, might also be instructive. Dancy wears up to ten data-collection devices on his person every day, in addition to other non-wearable devices. He claims that this has greatly improved his life. See http://www.dw.de/worlds-most-connected-man-finds-better-life-through-data/a-17600597 for an interview with him (accessed 1/3/15).
This is the vision of transhumanists like Ray Kurzweil who seek to saturate the cosmos with our intelligence, i.e. to make everything in the universe an extension of and input into our cognitive processes (Kurzweil 2006, 29).
References
Agar, N. (2013). Truly human enhancement. Cambridge, MA: MIT Press.
Ali, M. A., & Mann, S. (2013). The inevitability of the transition from a surveillance society to a veillance society: moral and economic grounding for sousveillance. IEEE International Symposium on Technology and Society (ISTAS), 243–254. Available at http://wearcam.org/veillance/IEEE_ISTAS13_Veillance2_Ali_Mann.pdf (accessed 31/7/14).
Aneesh, A. (2006). Virtual migration. Durham, NC: Duke University Press.
Aneesh, A. (2009). Global labor: algocratic modes of organization. Sociological Theory, 27(4), 347–370.
Andrejevic, M. (2014). The big data divide. International Journal of Communication, 8, 1673–1689.
Besson, S., & Marti, J. L. (2006). Deliberative democracy and its discontents. London: Ashgate.
Bishop, M. & Trout, JD. (2002). 50 years of successful predictive modeling should be enough: lessons for philosophy of science. Philosophy of Science: PSA 2000 Symposium Papers, 2002 69 (supplement): S197-S208
Bostrom, N. (2014). Superintelligence: paths, dangers, strategies. Oxford: OUP.
Brin, D. (1997). The transparent society. New York: Basic Books.
Brynjolfsson, E., & McAfee, A. (2011). Race against the machine. Lexington, MA: Digital Frontiers Press.
Brynjolfsson, E., & McAfee, A. (2014). The second machine age: work, progress, and prosperity in a time of brilliant technologies. New York: WW Norton.
Bumbulsky, J. (2013). Chaotic storage lessons. Medium. Available at https://medium.com/tech-talk/e3b7de266476 (accessed 1/3/15).
Ceva, E. (2012). Beyond legitimacy: can proceduralism say anything relevant about justice? Critical Review of International Social and Political Philosophy, 15, 183.
Chase Lipton, Z. (2015). The myth of model interpretability. KDnuggets News, 15:n3. Available at http://www.kdnuggets.com/2015/04/model-interpretability-neural-networks-deep-learning.html.
Citron, D. (2010). Technological due process. Washington University Law Review, 85, 1249.
Citron, D., & Pasquale, F. (2014). The scored society: due process for automated predictions. Washington Law Review, 86, 101.
Clark, A. (2010). Supersizing the mind. Oxford: OUP.
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58, 7–19.
Cowen, T. (2013). Average is over: powering America beyond the age of the great stagnation. New York: Dutton.
Crawford, K., & Schultz, J. (2014). Big data and due process: towards a framework to redress predictive privacy harms. Boston College Law Review, 55, 93.
Danaher, J. (2013). On the need for epistemic enhancement: democratic legitimacy and the enhancement project. Law, Innovation and Technology, 5(1), 85.
Estlund, D. (1993). Making truth safe for democracy. In D. Copp, J. Hampton, & J. Roemer (Eds.), The idea of democracy. Cambridge: Cambridge University Press.
Estlund, D. (2003). Why not epistocracy? In N. Reshotko (Ed.), Desire, identity, and existence: essays in honour of T. M. Penner. Academic Printing and Publishing.
Estlund, D. (2008). Democratic authority. Princeton: Princeton University Press.
Gaus, G. (2010). The order of public reason. Cambridge: Cambridge University Press.
Greenfield, R. (2012). Inside the method to Amazon's beautiful warehouse madness. The Wire. Available at http://www.thewire.com/technology/2012/12/inside-method-amazons-beautiful-warehouse-madness/59563/ (accessed 1/3/15).
Grove, W., & Meehl, P. E. (1996). Comparative efficiency of informal (subjective, impressionistic) and formal (mechanical, algorithmic) prediction procedures: the clinical statistical controversy. Psychology, Public Policy, and Law, 2, 293–323.
Habermas, J. (1990). Discourse ethics: notes on a program of philosophical justification. In Moral Consciousness and Communicative Action. Trans. Christian Lenhart and Shierry Weber Nicholson. Cambridge, MA: MIT Press.
Kellmereit, D., & Obodovski, D. (2013). The silent intelligence: the internet of things. DND Ventures LLC.
Kitchin, R. (2014a). The data revolution: big data, open data, data infrastructures and their consequences. London: Sage.
Kitchin, R. (2014b). Thinking critically about researching algorithms. The Programmable City Working Paper 5 – available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2515786
Kitchin, R., & Dodge, M. (2011). Code/space: software and everyday life. Cambridge, MA: MIT Press.
Kurzweil, R. (2006). The singularity is near. London: Penguin Books.
Lessig, L. (1999). Code and other laws of cyberspace. New York: Basic Books.
Lessig, L. (2006). Code 2.0. New York: Basic Books
Lippert-Rasmussen, K. (2012). Estlund on epistocracy: a critique. Res Publica, 18(3), 241–258.
Lipschutz, R., & Hester, R. (2014). We are the Borg! Human assimilation into cellular society. In M. G. Michael & K. Michael (Eds.), Uberveillance and the social implications of microchip implantation. Hershey, PA: IGI Global.
Lisboa, P. (2013). Interpretability in machine learning: principles and practice. In F. Masulli, G. Pasi, & R. Yager (Eds.), Fuzzy logic and applications. Dordrecht: Springer.
List, C., & Goodin, R. (2001). Epistemic democracy: generalizing the Condorcet Jury Theorem. Journal of Political Philosophy, 9, 277.
Machin, D. (2009). The irrelevance of democracy to the public justification of political authority. Res Publica, 15, 103.
Mann, S. (2013). Veillance and reciprocal transparency: surveillance versus sousveillance, AR Glass, Lifeglogging, and Wearable Computing. Available at http://wearcam.org/veillance/veillance.pdf (accessed 1/3/15).
Mann, S., Nolan, J., & Wellman, B. (2003). Sousveillance: inventing and using wearable computing devices for data collection in surveillance environments. Surveillance and Society, 3, 331–355.
Mayer-Schönberger, V., & Cukier, K. (2013). Big data: a revolution that will transform how we live, work and think. London: John Murray.
Meehl, P. E. (1996). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence (pp. v–xii). Lanham, MD: Rowan & Littlefield/Jason Aronson. (Original work published 1954)
Miner, L., et al. (2014). Practical predictive analytics and decisioning systems for medicine. Academic Press.
Mittelstadt, B. D., & Floridi, L. (2015). The ethics of big data: current and foreseeable issues in biomedical contexts. Science and Engineering Ethics. doi:10.1007/s11948-015-9652-2.
Morozov, E. (2013). The real privacy problem. MIT Technology Review. Available at http://www.technologyreview.com/featuredstory/520426/the-real-privacy-problem/ (accessed 1/3/15).
Otte, C. (2013). Safe and interpretable machine learning: a methodological review. In C. Moewes & A. Nurnberger (Eds.), Computational Intelligence in Intelligent Data Analysis. Dordrecht: Springer.
Patterson, S. (2013). Dark pools: the rise of A.I. trading machines and the looming threat to Wall Street. Random House.
Pentland, A. (2014). Social physics. London: Penguin Press.
Peter, F. (2008). Pure epistemic proceduralism. Episteme, 5, 33.
Peter, F. (2014). Political legitimacy. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2014 Edition). Available at http://plato.stanford.edu/archives/spr2014/entries/legitimacy/.
Polanyi, M. (1966). The tacit dimension. New York: Doubleday.
Rifkin, J. (2014). The zero marginal cost society: the internet of things, the collaborative commons and the eclipse of capitalism. Palgrave Macmillan.
Seaver, N. (2013). Knowing algorithms. Media in Transition 8, Cambridge, MA.
Siegel, E. (2013). Predictive analytics: the power to predict who will click, buy, lie or die. John Wiley and Sons.
Slater, D. (2013). Love in the time of algorithms. Current.
Thompson, C. (2013). Smarter than you think: how technology is changing our minds for the better. London: William Collins.
Vellido, A., Martín-Guerrero, J., & Lisboa, P. (2012). Making machine learning models interpretable. In Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN).
Zarsky, T. (2011). Governmental data-mining and its alternatives. Penn State Law Review, 116, 285.
Zarsky, T. (2012). Automated predictions: perception, law and policy. Communications of the ACM, 55(9), 33–35.
Zarsky, T. (2013). Transparent prediction. University of Illinois Law Review, 4, 1504.
Zeng, J., Ustun, B., & Rudin, C. (2015). Interpretable classification models for recidivism prediction. MIT Working Paper. Available at http://arxiv.org/pdf/1503.07810v2.pdf.
Acknowledgments
The author would like to thank audiences at Exeter and Maynooth Universities, and two anonymous referees for feedback on earlier drafts of this paper.
Ethics declarations
Ethical Statement
The author declares no conflicts of interest. Research for this paper was not funded, nor did it involve human or animal subjects.
Cite this article
Danaher, J. The Threat of Algocracy: Reality, Resistance and Accommodation. Philos. Technol. 29, 245–268 (2016). https://doi.org/10.1007/s13347-015-0211-1
Keywords
- Algocracy
- Epistocracy
- Big data
- Data mining
- Legitimacy
- Human enhancement