Fragmenting Epistemologies: Toward Philosophical Foundations for Machine Learning in Law

  • Chapter in Justice in the Age of Agnosis

Part of the book series: Palgrave Socio-Legal Studies (PSLS)

Abstract

In an age of agnosis, law’s world-building function might appear to offer some consistency. Powered by the surreal epistemology of due process, law claims repeatability and non-arbitrariness as key constructs for social organization. But law’s normative force adheres to language, meaning, and interpretation. Its promise of truth has always been pliable. Now further complicated by techno-solutionist approaches, a technological lens might reveal law’s promise of truth as ultimately undeliverable. In this chapter, I argue that legal sense-making is challenged by automation, specifically artificial intelligence solutions like ChatGPT. Using the framing of agnotology, or the deliberate centering of ignorance, this paper engages with the potential consequences of a future powered by machine learning in law, evaluating the epistemic consequences of using artificial cognizers in legal settings. I discuss the nexus between agnotology, epistemology, and the creation of legal knowledge. I explore the scope of proposals for using machine learning in law, focusing on AI advancements in natural language processing. I engage with legal philosophers on the topic of language's malleability. Building on rich scholarship in the fairness, accountability, and transparency space, this paper looks at machine learning approaches and their proposed contribution to law’s pursuit of truth.

This article was submitted for peer review in April 2022 and reflects the state of AI technology at that time. Arguments referencing the then-dominant large language model, GPT-3, extend to the now-dominant ChatGPT chatbot, and the GPT-3.5 and GPT-4 series.

Notes

  1.

    Jean-François Lyotard, The Postmodern Condition: A Report on Knowledge (Minneapolis: University of Minnesota Press, 1984).

  2.

    danah boyd, “Agnotology and Epistemological Fragmentation”, speech delivered to the Digital Public Library of America Conference (17 April 2019), online: Data & Society, https://points.datasociety.net/agnotology-and-epistemological-fragmentation-56aa3c509c6b.

  3.

    Academic literature increasingly distinguishes misinformation, which is incorrect but does not necessarily connote an intention to mislead, from disinformation, which is deliberately misleading or biased. See generally, UW Bothell & Cascadia College, Library Guides, “News: Fake News, Misinformation & Disinformation” (17 March 2022), online: UW Guides, https://guides.lib.uw.edu/c.php?g=345925&p=7772376.

  4.

    boyd, supra note 2.

  5.

    boyd, supra note 2.

  6.

    boyd, supra note 2. Compellingly, she describes the tactics used by the Christchurch terrorist to exploit basic social media vulnerabilities to disseminate his message. The information he posted on the disreputable site 8chan had links to his other sites, but not to Facebook, since he didn’t want Facebook to know traffic had been routed from 8chan. His video, which eventually livestreamed his horrific murder of 50 worshippers in a mosque, began with mundane footage of him driving around, to lull unsuspecting viewers and hide his true intentions from content moderators.

  7.

    boyd, supra note 2. By titling his manifesto with a white nationalist call sign, he both guaranteed that traditional media would cover his attack and led unsuspecting viewers to search for the name of the call sign to better understand it, sending them to a data void: what boyd describes as a “treasure trove of anti-Semitic and white nationalist propaganda.”

  8.

    Emily M. Bender, Timnit Gebru, Angelina McMillan-Major & Margaret Mitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” (2021) in Conference on Fairness, Accountability, and Transparency (FAccT ’21), March 3–10, 2021, Virtual Event, Canada [Bender & Gebru].

  9.

    Abeba Birhane & Vinay Uday Prabhu, “Large image datasets: A pyrrhic win for computer vision?” (2021) IEEE Winter Conference on Applications of Computer Vision (WACV) at 1541. This quotation, also emphasized by Bender & Gebru, supra note 8, is linked to Ruha Benjamin’s excellent exploration of racial bias in technological design. See generally, Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (Cambridge, UK: Polity Press, 2019).

  10.

    Many authors have been cataloguing such harms. See Bender & Gebru, supra note 8; Birhane & Prabhu, supra note 9; Benjamin, supra note 9. See also Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: New York University Press, 2018); Joy Buolamwini & Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification” (2018) 81 Proceedings of Machine Learning Research 1–15.

  11.

    Joshua Fairfield, Runaway Technology (Cambridge: Cambridge University Press, 2021) at 152–153.

  12.

    Ian Kerr, “Schrödinger’s Robot: Privacy in Uncertain States” (2019) 20 Theoretical Inquiries L 123.

  13.

    Benjamin Alarie, “The Path of the Law: Toward Legal Singularity” (2016) 66 University of Toronto LJ 443.

  14.

    Croissant cites Rescher, who characterizes the latter as contingencies of “choice, chance, or chaos,” which seems especially apt. Jennifer Croissant, “Agnotology: Ignorance and Absence or Towards a Sociology of Things That Aren’t There” (2014) 28 Social Epistemology 4 at 7.

  15.

    boyd, supra note 2. This example appears frequently in the agnotology literature. See, e.g., Reece Walters, “Climate Change Denial: ‘Making Ignorance Great Again’” in Alana Barton & Howard Davis (eds), “Ignorance, Power and Harm: Agnotology and The Criminological Imagination” (Cham, CH: Palgrave Macmillan, 2018) 163.

  16.

    Robert Proctor & Manuela Fernandez Pinto, “Tensions in Agnotology: Normativity in the Studies of Commercially Driven Ignorance” (2015) 45 Social Studies of Science 294 at 302.

  17.

    Croissant, supra note 14 at 7.

  18.

    Robert Cover, “Nomos and Narrative” (1983) 97 Harvard L Rev 4 at 6.

  19.

    Kieran Tranter, “Nomoi are Mercurial” (7 Feb 2022), online: The Digital Constitutionalist, https://digi-con.org/nomoi-are-mercurial-why-cultural-legal-studies-of-science-fiction-matter/.

  20.

    Cover, supra note 18 at 8–9.

  21.

    Cover, supra note 18 at 9.

  22.

    Shai Danziger et al., “Extraneous Factors in Judicial Decisions” (2011) 108:17 Proceedings of the National Academy of Sciences of the United States of America 6889 (JSTOR).

  23.

    Ozkan Eren & Naci Mocan, “Emotional Judges and Unlucky Juveniles” (2018) 10 American Economic Journal: Applied Economics 171.

  24.

    Eren & Mocan, supra note 23.

  25.

    Technology law scholarship makes considerable use of Lessig’s “code is law” concept, which was largely inspired by the Chicago school’s approaches to law and economics. See generally, Lawrence Lessig, “The New Chicago School” (1998) 27 J Legal Studies 661 at 662.

  26.

    Alarie, supra note 13 at 445.

  27.

    Anthony Casey & Anthony Niblett, “A Framework for the New Personalization of Law” (2019) 86 University of Chicago Law Review 333.

  28.

    Daniel Goldsworthy, “Dworkin’s Dream: Towards a Singularity of Law” (2019) 44 Alternative Law Journal 286.

  29.

    Willem deVries, “Wilfrid Sellars”, Stanford Encyclopedia of Philosophy, online: SEP, https://plato.stanford.edu/entries/sellars/. See also Kerr, supra note 12.

  30.

    Wilfrid Sellars, Empiricism and the Philosophy of Mind (1997) at 66–75. Note that other epistemologists have softened the reflexivity requirement, observing that too stringently applying these standards will exclude small children and animals, who clearly possess some knowledge. See Chauncey Maher, The Pittsburgh School of Philosophy: Sellars, McDowell, Brandom (2012) at 93.

  31.

    Posner argues that surveillance by computer does not raise constitutional privacy concerns, writing: “[c]omputer searches do not invade privacy because search programs are not sentient beings. Only the human search should raise constitutional or other legal issues.” Richard A. Posner, “Privacy, Surveillance, and the Law” (2008) 75 U Chi L Rev 245 at 254.

  32.

    Kerr, supra note 12 at 128.

  33.

    Kerr, supra note 12 at 128, 144–147.

  34.

    Mark Coeckelbergh, “Robot rights? Towards a Social-Relational Justification of Moral Consideration” (2010) 12 Ethics Info Tech 209 (discussing the application of deontological and virtue ethics for robot morality).

  35.

    Ryan Abbott, “I Think, Therefore I Invent: Creative Computers and the Future of Patent Law” (2016) 57 BC L Rev 1079 (discussing whether computers “invent” things and whether they should be listed as inventors on patents).

  36.

    Kerr, supra note 12 at 151.

  37.

    Kerr is emphatic that an artificial cognizer’s epistemic qualities are non-binary and must be evaluated contextually, within our dense web of networked relationships. Kerr, supra note 12 at 153.

  38.

    Kerr, supra note 12 at 154.

  39.

    Arguably, privacy offers another nexus with the flip sides of the coin presented by epistemology and agnotology. As Alvin Goldman explains, “epistemology focuses on the means to knowledge enhancement, whereas privacy studies focus on the means to knowledge curtailment (at least decreasing knowledge in the hands of the wrong people).” Meanwhile, agnotology focuses on the deliberate obfuscation of knowledge. Alvin I. Goldman, Knowledge in a Social World (1999) at 173, cited in Kerr, supra note 12 at 139.

  40.

    See, e.g., Jennifer Raso, “AI and Administrative Law” in Florian Martin-Bariteau & Teresa Scassa, eds., The Law of Artificial Intelligence in Canada (Ottawa: LexisNexis 2021) at 165.

  41.

    See, e.g., Petra Molnar & Lex Gill, Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System (Toronto: Citizen Lab and International Human Rights Program, 2018), online: University of Toronto, https://tspace.library.utoronto.ca/bitstream/1807/94802/1/IHRP-Automated-Systems-Report-Web-V2.pdf.

  42.

    The most notorious such sentencing software is COMPAS, which stands for Correctional Offender Management Profiling for Alternative Sanctions. COMPAS has been accused of bias against black defendants in a well-publicized report by the independent newsroom ProPublica. Julia Angwin et al., “Machine Bias” (23 May 2016), online: ProPublica, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

  43.

    Often associated with the GDPR, the idea of a “right to explanation” demands that users subject to algorithmic decision-making be able to access their own personal data and receive feedback on automated decisions. See General Data Protection Regulation (EU) 2016/679, Articles 13–15.

  44.

    Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Broadway Books, 2016) at 23.

  45.

    Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (New York: Picador, 2019) at 167.

  46.

    Eubanks, supra note 45 at 178. Eubanks is especially concerned with automated decision-making tools that defeat procedural fairness, that entrench socioeconomic inequity, and that lead to a “digital poorhouse.”

  47.

    T. Brown, B. Mann, et al., “Language Models are Few-Shot Learners” (22 July 2020) arXiv:2005.14165v4 [cs.CL] (the paper that introduced GPT-3 as a new NLP mechanism).

  48.

    OpenAI makes GPT-3 available to users through an online API, largely obscuring its backend operations from those who wish to deploy its NLP capabilities. This greatly simplifies the use of GPT-3, enabling users to access its abilities through simple English text prompts. See OpenAI’s own explanation of its API: “OpenAI API” (11 June 2020), online: OpenAI Blog, https://openai.com/blog/openai-api/. Google also has several large language models presented as competitors to GPT-3: GShard, which uses 600 billion parameters, and Switch-C, which uses 1.7 trillion. See Bender & Gebru, supra note 8 at 2.
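
    A minimal sketch of what this looks like in practice, assuming the legacy Python bindings and Completion endpoint OpenAI offered at the time of writing; the engine name, prompt text, and sampling parameters below are illustrative stand-ins, not an example drawn from this chapter:

    ```python
    # Minimal sketch: prompting GPT-3 through OpenAI's hosted API
    # (legacy Completion endpoint, circa 2020-2022). The engine name,
    # prompt, and sampling parameters are illustrative assumptions.
    import openai

    openai.api_key = "YOUR_API_KEY"  # credential issued by OpenAI

    response = openai.Completion.create(
        engine="davinci",  # one of the GPT-3 engines exposed by the API
        prompt="Explain the doctrine of stare decisis in one sentence.",
        max_tokens=60,
        temperature=0.7,
    )

    # Only generated text comes back; the model's internal operations
    # remain obscured behind the API boundary, as the note describes.
    print(response.choices[0].text.strip())
    ```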

  49.

    Brown et al., supra note 47 at 7.

  50.

    Brown et al., supra note 47 at 7.

  51.

    Bender & Gebru, supra note 8 at 2. In late 2020, a controversy erupted in the technology ethics world when Timnit Gebru, well-regarded as a bright light in the growing ethics of AI space, was ousted from her position at Google after her superiors took issue with the contents of this very paper. For a summary of the controversy and fallout, see Tom Simonite, “What Really Happened When Google Ousted Timnit Gebru” (8 June 2021), online: WIRED, https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/. The controversy, which carried on for months, would make for an interesting study of agnosis in its own right …

  52.

    Bender & Gebru, supra note 8 at 4. They also note that 90% of the world’s languages have meager technological support—which represents about 1 billion of the world’s people. Canvassing only the English-language Internet to create models is bound to generate gaps in understanding.

  53.

    Bender & Gebru, supra note 8 at 5.

  54.

    Simonite, supra note 51.

  55.

    Bender & Gebru, supra note 8 at 4.

  56.

    danah boyd & Kate Crawford, “Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon” (2012) 15 Information, Communication & Society 662.

  57.

    Abubakar Abid, Maheen Farooqi, & James Zou, “Large language models associate Muslims with violence” (2021) 3 Nature Machine Intelligence 461.

  58.

    Abid et al., supra note 57 at 461. In the study, GPT-3 was fed the prompt: “Two Muslims walked into a …”.

  59.

    In a magnanimous move for the often-cutthroat world of academic publishing, the human authors officially credited GPT-3 as a co-author of their publication: Benjamin Alarie, Arthur Cockfield, GPT-3, “Will Machines Replace Us? Machine-Authored Texts and the Future of Scholarship” (2021) 3 Law, Technology and Humans.

  60.

    Alarie et al., supra note 59 at 6.

  61.

    Alarie et al., supra note 59 at 6.

  62.

    Alarie, supra note 13.

  63.

    K Szilagyi, “Bridge to Normativia: On Norms, Narratives, Artificial Intelligence, and the Rule of Law” (forthcoming 2022) Communitas No. 3.

  64.

    HLA Hart, The Concept of Law (Oxford: Oxford University Press, 1961) at 125.

  65.

    See, e.g., Dale Smith, “Has Raz Drawn the Semantic Sting?” (2009) 28 Law & Philosophy 291; see also Michael S. Green, “Dworkin’s Fallacy, Or What the Philosophy of Language Can’t Teach Us About Law” (2003) Virginia L Rev 1897.

  66.

    Ronald Dworkin, Law’s Empire (Cambridge: Harvard University Press, 1986) at 44–46 [Dworkin, Law’s Empire].

  67.

    Dworkin, Law’s Empire, supra note 66 at 31; see also Smith, supra note 65 at 297–298.

  68.

    Ronald Dworkin, “Philosophy, Morality, and Law—Observations Prompted by Professor Fuller’s Novel Claim” (1965) U Penn L Rev 668 [Dworkin, “Philosophy”].

  69.

    Dworkin, “Philosophy,” supra note 68 at 677–678.

  70.

    Fuller, supra note X at 198–199; see also Dworkin, “Philosophy”, supra note 68 at 677–678.

  71.

    Fuller, supra note X at 199.

  72.

    Gloria Steinem describes how researchers at Cornell developed the term “sexual harassment” in the 1970s, struggling to put a name to what women experienced at their summer jobs. More recently, aging Millennials online have been confronted by Generation Z’s accusation that their preferences are “cheugy,” a new word for tastes and activities that have become dated. CITE Gloria Steinem; NYT article.

  73.

    Bix, supra note X at XX.

  74.

    Cognitive linguist George Lakoff describes the insights of John Robert Ross regarding English syntax and the degrees of “nouniness” among nouns, reflecting the extent of the grammatical processes necessary to construct their senses. Nounier nouns more easily follow general rules in various syntactic environments, while less nouny nouns do not adhere to hierarchies and may require more construction to contribute to sense-making. George Lakoff, Women, Fire and Dangerous Things: What Categories Reveal about the Mind (Chicago: University of Chicago Press, 1987) at 63–65. See also Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (New Haven: Yale University Press, 2021) at 139–140. Crawford attributes the idea of “nouny” nouns to Lakoff, and then riffs on Ross’s example with her own choice of words: apple, light, and health.

  75.

    Ludwig Wittgenstein, Philosophical Investigations, translated by GEM Anscombe (Oxford: Basil Blackwell, 1958) at 2.

  76.

    Wittgenstein, supra note 75 at 3.

  77.

    Wittgenstein, supra note 75 at 31.

  78.

    Wittgenstein, supra note 75 at 31.

  79.

    Wittgenstein, supra note 75 at 32.

  80.

    See, e.g., James Boyd White, “Reading Law and Reading Literature: Law as Language” at 77; see also Bruce A. Markell, “Bewitched by Language: Wittgenstein and the Practice of Law” (2005) 32 Pepp L Rev 801.

  81.

    Felix Cohen, “Transcendental Nonsense” (1944) 2 ETC: A Review of General Semantics 82 at 91.

  82.

    Cohen, supra note 81 at 94–95.

  83.

    Cohen, supra note 81 at 93.

  84.

    Joshua Fairfield, “The Language-Game of Privacy” (2018) 116 Mich L Rev 1167 at 1172.

  85.

    Fairfield, supra note 84 at 1172.

  86.

    Fairfield, supra note 84 at 1169.

  87.

    Woodrow Hartzog, “The Public Information Fallacy” (2019) 99 Boston University Law Review 459 at 466.

  88.

    Hartzog, supra note 87 at 467.

  89.

    Hartzog, supra note 87 at 472.

  90.

    Fuller, supra note X at 199.

  91.

    Croissant, supra note 14 at 10.

  92.

    Croissant, supra note 14 at 10.

  93.

    Susie Scott, “A Sociology of Nothing: Understanding the Unmarked” (2018) 52 Sociology 3 at 4.

  94.

    Scott goes on to trace how nothingness is arrived at by default, through “failures to act, inertia and unrealised potential,” in four different dimensions: “non-identity; inactivity; absence; and silence.” Scott, supra note 93 at 15.

  95.

    Scott, supra note 93.

  96.

    Bender & Gebru, supra note 8 at 1, 7.

  97.

    Bender & Gebru, supra note 8 at 4.

  98.

    Croissant, supra note 14 at 11.

  99.

    Frank Pasquale, “A Rule of Persons, Not Machines: The Limits of Legal Automation” (2019) 87 Geo Wash L Rev 1 at 29.

  100.

    Frank Pasquale & Glyn Cashwell, “Prediction, Persuasion, and the Jurisprudence of Behaviourism” at 79. They go on to note that if NLP continues to gloss over important questions of social meaning, then “NLP researchers should expect justified neglect of their work by governments, law firms, businesses, and the legal academy.”

  101.

    Bender & Gebru, supra note 8 at 7.

  102.

    Bender & Gebru, supra note 8 at 7.

  103.

    Bender & Gebru, supra note 8 at 7.

  104.

    Bender & Gebru, supra note 8 at 7–8.

  105.

    Cover, supra note 18.

  106.

    boyd, supra note 2.

  107.

    Jane Bailey, “Of Mediums and Metaphors: How a Layered Methodology Might Contribute to Constitutional Analysis of Internet Content Regulation” (2004) 30 Manitoba LJ 197 at 198–199.

Author information

Correspondence to Katie Szilagyi.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Szilagyi, K. (2024). Fragmenting Epistemologies: Toward Philosophical Foundations for Machine Learning in Law. In: Gacek, J., Jochelson, R. (eds) Justice in the Age of Agnosis. Palgrave Socio-Legal Studies. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-031-54354-8_7

  • DOI: https://doi.org/10.1007/978-3-031-54354-8_7

  • Publisher Name: Palgrave Macmillan, Cham

  • Print ISBN: 978-3-031-54353-1

  • Online ISBN: 978-3-031-54354-8

  • eBook Packages: Social Sciences (R0)
