Volume 33, Issue 4, pp 545–556

Will big data algorithms dismantle the foundations of liberalism?

How the emergence of recommendation algorithms will shape the pursuit of happiness in the 21st century
  • Daniel First
Open Forum


In Homo Deus, Yuval Noah Harari argues that the technological advances of the twenty-first century will usher in a significant shift in how humans make important life decisions. Instead of turning to the Bible or the Quran, to the heart, or to our therapists, parents, and mentors, people will turn to Big Data recommendation algorithms to make these choices for them. Much as we rely on Spotify to recommend music to us, we will soon rely on algorithms to decide our careers, spouses, and commitments. Harari further predicts that the state will then take away individuals' rights to make their own choices about their lives: if Google knows where your children would flourish best in school, why should the state allow a fallible human parent to decide? Liberalism (which, as Harari uses the term, refers to a state of society in which human freedom to choose is respected and championed) will collapse. In this paper, I argue that Harari's conception of the future implications of recommendation algorithms is deeply flawed, for two reasons. First, users will not rely on algorithms to make decisions for them, because they have no reason to trust algorithms developed by companies with incentives of their own, such as profit. Second, for most of our life decisions, such algorithms cannot be developed, because the factors relevant to each decision are unique to our situation. I present an alternative picture of the future: rather than relying on algorithms to make decisions for us, humans will use algorithms to enhance our decision-making, helping us consider the most relevant options first and notice information we might otherwise overlook. Finally, I argue that even if computers could make many of our decisions for us, liberalism as a political system would emerge unscathed.


Keywords: Liberalism · Artificial intelligence · Big data · Philosophy · Ethics · Freedom · Machine learning



This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 16-44869. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.



Copyright information

© Springer-Verlag London 2017

Authors and Affiliations

  1. Data Science Institute, Columbia University, New York, USA
  2. Department of Philosophy, University of Cambridge, Cambridge, UK
