Abstract
The judgments of human beings can be biased; they can also be noisy. Across a wide range of settings, use of algorithms is likely to improve accuracy, because algorithms will reduce both bias and noise. Indeed, algorithms can help identify the role of human biases; they might even identify biases that have not been named before. As compared to algorithms, for example, human judges, deciding whether to give bail to criminal defendants, show Current Offense Bias and Mugshot Bias; as compared to algorithms, human doctors, deciding whether to test people for heart attacks, show Current Symptom Bias and Demographic Bias. These are cases in which large data sets are able to associate certain inputs with specific outcomes. But in important cases, algorithms struggle to make accurate predictions, not because they are algorithms but because they do not have enough data to answer the question at hand. Those cases often, though not always, involve complex systems. (1) Algorithms might not be able to foresee the effects of social interactions, which can depend on a large number of random or serendipitous factors, and which can lead in unanticipated and unpredictable directions. (2) Algorithms might not be able to foresee the effects of context, timing, or mood. (3) Algorithms might not be able to identify people’s preferences, which might be concealed or falsified, and which might be revealed at an unexpected time. (4) Algorithms might not be able to anticipate sudden or unprecedented leaps or shocks (a technological breakthrough, a successful terrorist attack, a pandemic, a black swan). (5) Algorithms might not have “local knowledge,” or private information, which human beings might have. Predictions about romantic attraction, about the success of cultural products, and about coming revolutions are cases in point. The limitations of algorithms are analogous to the limitations of planners, emphasized by Hayek in his famous critique of central planning. It is an unresolved question whether and to what extent some of the limitations of algorithms might be reduced or overcome over time, with more data or various improvements; calculations are improving in extraordinary ways, but some of the relevant challenges cannot be solved with ex ante calculations.
Notes
Friedrich Hayek, The Use of Knowledge in Society, 35 Am. Econ. Rev. 519 (1945).
See Daniel Kahneman et al., Noise: A Flaw in Human Judgment ch. 11 (2021).
For a skeptical view, see Peter Boettke & Rosolino Candela, On the Feasibility of Technosocialism, 205 J. Econ. Behav. & Org. 44 (2023).
See Paul E. Meehl, Clinical Versus Statistical Prediction (2013 ed.; originally published 1954).
Jon Kleinberg et al., Human Decisions and Machine Predictions, 133 Q.J. Econ. 237 (2018).
Id. at 284.
Id.
Jens Ludwig & Sendhil Mullainathan, Algorithmic Behavioral Science: Machine Learning as a Tool for Scientific Discovery (Chicago Booth, Working Paper No. 22-15, 2022).
Id. at 2 (emphasis omitted).
Sendhil Mullainathan & Ziad Obermeyer, Diagnosing Physician Error: A Machine Learning Approach to Low-Value Health Care, 137 Q.J. Econ. 679 (2022).
See Amos Tversky & Daniel Kahneman, Judgment Under Uncertainty: Heuristics and Biases, in Judgment Under Uncertainty: Heuristics and Biases 3 (Daniel Kahneman et al. eds., 1982).
See Ping Li et al., Availability Bias Causes Misdiagnoses by Physicians: Direct Evidence from a Randomized Controlled Trial, 59 Internal Med. 3141 (2020).
See Dan P. Ly, The Influence of the Availability Heuristic on Physicians in the Emergency Department, 78 Annals Emergency Med. 650 (2021); see also Carmen Fernández-Aguilar et al., Use of Heuristics During the Clinical Decision Process from Family Care Physicians in Real Conditions, 28 J. Evaluation Clinical Prac. 135 (2022).
See Daniel Kahneman & Shane Frederick, Representativeness Revisited: Attribute Substitution in Intuitive Judgment, in Heuristics and Biases: The Psychology of Intuitive Judgment 49, 49–81 (Thomas Gilovich et al. eds., 2002); Daniel Kahneman, Thinking, Fast and Slow (2011).
See Tversky & Kahneman, supra note 11.
Id. at 11.
Id.
Id.
See Robert H. Ashton & Jane Kennedy, Eliminating Recency with Self-Review: The Case of Auditors’ ‘Going Concern’ Judgments, 15 J. Behav. Decision Making 221 (2002).
See Paul Slovic, The Perception of Risk 40 (2000).
See Victoria Angelova et al., Algorithmic Recommendations and Human Discretion (Oct. 25, 2022) (unpublished manuscript).
Daniel Kahneman et al., Noise: A Flaw in Human Judgment 143 (2021).
See id.
For more details, see id.
See Cass R. Sunstein & Reid Hastie, Wiser: Getting Beyond Groupthink to Make Groups Smarter (2014).
See Kahneman et al., supra note 22.
See Roy Shoval et al., Choosing to Choose or Not, 17 Judgment & Decision Making 768 (2022); Sebastian Bobadilla-Suarez et al., The Intrinsic Value of Choice: The Propensity to Under-Delegate in the Face of Potential Gains and Losses, 54 J. Risk & Uncertainty 187 (2017).
See Bobadilla-Suarez et al., supra note 27.
See Shoval et al., supra note 27.
See Berkeley J. Dietvorst et al., Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err, 144 J. Experimental Psych.: General 114 (2015).
Id.
See Michael Yeomans et al., Making Sense of Recommendations, 32 J. Behav. Decision Making 403 (2019).
Id.
See id.
See Jennifer Logg et al., Algorithm Appreciation: People Prefer Algorithmic to Human Judgment, 151 Organizational Behavior and Human Decision Processes 90 (2019).
See Yoyo Hou & Malte Jung, Who Is the Expert? Reconciling Algorithm Aversion and Algorithm Appreciation in AI-Supported Decision Making, 5 Proceedings of the ACM on Human-Computer Interaction 1 (2021).
For one account, see Cass R. Sunstein, How Change Happens (2019).
See Friedrich Hayek, The Theory of Complex Phenomena, in The Critical Approach to Science and Philosophy: In Honor of Karl R. Popper 332-59 (Mario Bunge ed., 1964).
Matthew Salganik et al., Measuring the Predictability of Life Outcomes with a Scientific Mass Collaboration, 117 PNAS 8398 (2020).
See Samantha Joel et al., Is Romantic Desire Predictable? Machine Learning Applied to Initial Romantic Attraction, 28 Psych. Science 1478 (2017).
See Gerd Gigerenzer, How to Stay Smart in a Smart World (2022).
See Kahneman et al., supra note 22.
See Timur Kuran, Private Truths, Public Lies (1995).
See Matthew Salganik et al., Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market, 311 Science 854 (2006).
John Maynard Keynes, The General Theory of Employment, Interest and Money 113–14 (1936).
Funding
No funding to disclose.
Author information
Robert Walmsley University Professor, Harvard University. This is a revised text of a lecture given at King’s College in March 2023. I am grateful to Mark Pennington for superb comments on a previous draft; to Daniel Kahneman and Olivier Sibony for instructive discussions of bias and noise; and to Jon Kleinberg and Sendhil Mullainathan for instructive discussions of the limits of algorithms and data. I am also grateful to the audience at King’s College and to participants in a workshop at Harvard Law School for valuable help. None of the foregoing people is responsible for my errors.
About this article
Cite this article
Sunstein, C.R. The use of algorithms in society. Rev Austrian Econ (2023). https://doi.org/10.1007/s11138-023-00625-z