In the Frame: the Language of AI

  • Research Article
  • Published in Philosophy & Technology

Abstract

In this article, drawing upon a feminist epistemology, we examine the critical roles that philosophical standpoint, historical usage, gender, and language play in a knowledge arena that is increasingly opaque to the general public. Focussing in particular on language, in its historical and social dimensions, we explicate how some keywords in use across artificial intelligence (AI) discourses inform and misinform non-expert understandings of this area. The insights gained could help us imagine how AI technologies might be better conceptualised, explained, and governed, so that they are leveraged for social good.

Notes

  1. Australian Broadcasting Corporation (ABC) reporting throughout 2019 struggled to express clearly and succinctly what rules were being applied; this is part of the problem, as it is hard to determine whether an AI process is involved at all. A senior partner in the law firm managing the class action challenging the legality of the Robodebt policy states that “to simply collect money from hundreds of thousands of people by the simplistic application of an imperfect computer algorithm is wrong” (ABC News 2019).

  2. Succinctly put by Didier et al. in Science, in response to a set of articles published in the same journal: “Given the grievous shortcomings of national governance and the even weaker capacities of the international system, it is dangerous to invest heavily in AI without political processes in place that allow those who support and oppose the technology to engage in a fair debate.” (Didier et al. 2015: 1064); on the dangers for medicine see e.g. Cabitza et al. (2017); see also Challen et al. (2019).

  3. We note that some recent presentations such as a Finnish online AI course make the point that “words can be misleading” quite explicitly (Roos 2019).

  4. For quick illustration, this kind of thinking appears in one of the early histories of modern computing, where the author, an engineer turned historian, refers to René Descartes’ “discovery” of analytical geometry (rather than, e.g., his “invention” of it) (Goldstine 1993: 13).

  5. This porous boundary between humans and data is also a key theme in the broader history of computing, visible in the rise of “user experience design” (UX): from Joint Application Development (Wood and Silver 1995; see also Sumner 2014: 310–311), through user-centred design (Okolloh 2009), to a web 2.0 model in which end-users are themselves used as cogs in the machine, their value lying in the marketability of their data and the predictability of their behaviour. The continued employment of the term “user” disguises this lack of agency and points to a disconnect between the understanding of computing professionals and that of the general public.

  6. For a brief period before the term “memory” became ubiquitous, the more literal term “store” was used to refer to the main or immediate data store for a program.

  7. Sumner (2014: 327, nn. 8–9) cites e.g. “An Electronic Brain”, The Times, 1 November 1946, 2; “A New Electronic ‘Brain’: Britain Goes One Better”, Manchester Guardian, 7 November 1946, 8. Perhaps one source of the brain metaphor was the enormously popular fiction of H. G. Wells, which frequently refers to the human brain and to possible substitutes for, and extensions of, it.

  8. “I have tried to write this book so that it could be understood. I have attempted to explain machinery for computing and reasoning without using technical words any more than necessary.” (Berkeley 1949: ix); Berkeley’s interest in explanations for the general public is also reflected in his A Guide to Mathematics for the Intelligent Nonmathematician (1966).

  9. The term “practitioners” is used loosely to refer to those with a practical interest in computers who theorised from that interest, as opposed to pure philosophers such as Gilbert Ryle.

  10. Von Neumann remarks on the lack of a theory of computers, which at this point he generalises as “automata” (1958: 2).

  11. Aspray (1990: 239) quotes part of a very interesting letter by von Neumann written in the middle of 1947 casting light on his aims and intended audience, in which he expresses the wish that practitioners would spend more time writing about the “whole ‘art’ of setting up complicated computing machines, of programming and of coding.” Von Neumann goes on to say that “The sooner we lay the foundation for a ‘literary’ method by which the properties and potentialities of the ENIAC can be made known to the general scientific public, the better.”

  12. Perhaps the real origin of the possibility of artificial intelligence lies with Babbage; but throughout the twentieth century, the interpretation of Babbage’s work was a wicked problem, largely because, as Bullock (2008: 36) points out, a naive “Whiggish reinterpretation” prevailed. Bullock analyses the well-known but not well-understood crucial text The Ninth Bridgewater Treatise as a description of a “simulation model” (22).

  13. Note that Burges (2010: 2) himself says “a learned probability” for what is in fact a calculation: a function within the capacity of someone with high-school mathematics to understand (the first formula on page 2, sketched below).
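
      For readers without Burges (2010) to hand, the formula in question appears to be the standard RankNet pairwise probability (our gloss, based on the cited report): a logistic function of the difference between two model scores,

      $$\bar{P}_{ij} = \frac{1}{1 + e^{-\sigma(s_i - s_j)}}$$

      where $s_i$ and $s_j$ are the scores the model assigns to items $i$ and $j$, and $\sigma$ is a scaling constant. Evaluating it requires nothing beyond the exponential function, which is the note’s point.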

References

  • Abbiss, J. (2011). Boys and machines: gendered computer identities, regulation and resistance. Gender and Education, 23, 601–617.

  • ABC News. (2019). Centrelink robodebt class action lawsuit to be brought against federal government. 17 September 2019. https://www.abc.net.au/news/2019-09-17/centrelink-robodebt-class-action-lawsuit-announced/11520338.

  • Adam, A. (1995). Artificial intelligence and women’s knowledge: what can feminist epistemologies tell us? Women’s Studies International Forum, 18(4), 407–415. https://doi.org/10.1016/0277-5395(95)80032-K.

  • Alexander, P. A., Schallert, D. L., & Reynolds, R. E. (2009). What is learning anyway? A topographical perspective considered. Educational Psychologist, 44(3), 176–192.

  • Alpaydin, E. (2014). Introduction to machine learning (3rd ed.). Cambridge, Massachusetts: MIT Press.

  • Aspray, W. (1990). John von Neumann and the origins of modern computing. Cambridge, Massachusetts: MIT Press.

  • Austin, J. L. (1962). How to do things with words. London: Oxford University Press.

  • Berendt, B. (2019). AI for the common good?! pitfalls, challenges, and ethics pen-testing. Paladyn, Journal of Behavioral Robotics, 10, 44–65.

  • Berkeley, E. (1949). Giant brains, or machines that think. New York: Wiley & Sons.

  • Berkeley, E. (1966). A guide to mathematics for the intelligent nonmathematician. New York: Simon & Schuster.

  • Binns, R. (2018). Algorithmic accountability and public reason. Philosophy & Technology, 31, 543–556.

  • Broad, E. (2019). Computer says no: being a woman in tech. Griffith Review, 64, 72–82.

  • Buch, V. H., Ahmed, I., & Maruthappu, M. (2018). Artificial intelligence in medicine: current trends and future possibilities. British Journal of General Practice, 68(668), 143–144.

  • Burges, C. J. (2010). From RankNet to LambdaRank to LambdaMART: an overview. Microsoft Research Technical Report MSR-TR-2010-82. https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/MSR-TR-2010-82.pdf.

  • Burges, C., Shaked, T., Renshaw, E., Lazier, A., Deeds, M., Hamilton, N., & Hullender, G. (2005). Learning to rank using gradient descent. In Proceedings of the 22nd international conference on machine learning (pp. 89–96). New York, NY: Association for Computing Machinery.

  • Cabitza, F., Rasoini, R., & Gensini, G. F. (2017). Unintended consequences of machine learning in medicine. JAMA, 318(6), 517–518.

  • Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186.

  • Challen, R., Denny, J., Pitt, M., Gompels, L., Edwards, T., & Tsaneva-Atanasova, K. (2019). Artificial intelligence, bias and clinical safety. BMJ Quality & Safety, 28, 231–237.

  • Collett, C. & Dillon, S. (2019). AI and gender: four proposals for future research. Cambridge: The Leverhulme Centre for the Future of Intelligence. http://lcfi.ac.uk/media/uploads/files/AI_and_Gender_4_Proposals_for_Future_Research_yaApTTR.pdf.

  • Crystal, D. (2015). The lure of words. In J. Taylor (Ed.), The Oxford handbook of the word (pp. 23–28). Oxford: Oxford University Press.

  • D’Ignazio, C., & Klein, L. (2019). Data feminism. Manuscript draft, MIT Press Open. https://bookbook.pubpub.org/data-feminism.

  • Davies, M. (2004). British National Corpus (from Oxford University Press). https://www.english-corpora.org/bnc/.

  • Davies, M. (2007). TIME Magazine Corpus (100 million words, 1920s–2000s). corpus.byu.edu/time.

  • Davies, M. (2013). Corpus of Global Web-Based English: 1.9 billion words from speakers in 20 countries (GloWbE). https://www.english-corpora.org/glowbe/.

  • Didier, C., Duan, W., Dupuy, J.-P., Guston, D. H., Liu, Y., López Cerezo, J. A., et al. (2015). Acknowledging AI's dark side. Science, 349(6252), 1064–1065.

  • Drucker, J. (2010). Data as capta. Au, St. Gallen: Druckwerk.

  • Drucker, J. (2011). Humanities approaches to graphical display. Digital Humanities Quarterly, 5(1).

  • Edwards, D. (1997). Discourse and cognition. London: Sage.

  • Ensmenger, N. (2004). Power to the people: toward a social history of computing. IEEE Annals of the History of Computing, 26(1), 94–96.

  • Floridi, L., Cowls, J., King, T. C., & Taddeo, M. (2020). How to design AI for social good: Seven essential factors. Science and Engineering Ethics, 26, 1771–1796. https://doi.org/10.1007/s11948-020-00213-5.

  • Goldstine, H. H. (1993). The computer: from Pascal to von Neumann. Princeton: Princeton University Press.

  • Gookin, D. (2013). PCs for dummies. Hoboken: John Wiley & Sons.

  • Haviland, W. A., Prins, H. E., & McBride, B. (2013). Anthropology: the human challenge. Boston: Cengage Learning.

  • Hicks, M. (2017). Programmed inequality: how Britain discarded women technologists and lost its edge in computing. Cambridge, MA: MIT Press.

  • Ihde, D. (2004). Has the philosophy of technology arrived? A state-of-the-art review. Philosophy of Science, 71(1), 117–131.

  • Iliffe, R. (2008). History of science. archives.history.ac.uk/makinghistory/resources/History_of_Science_fullversion.pdf.

  • Joos, I., Nelson, R., & Wolf, D. (2019). Introduction to computers for healthcare professionals. Burlington: Jones & Bartlett Publishers.

  • Kukutai, T., & Taylor, J. (Eds.). (2016). Indigenous data sovereignty: toward an agenda (Vol. 38). Canberra: ANU Press.

  • Leavy, S. (2018). Gender bias in artificial intelligence: the need for diversity and gender theory in machine learning. In Proceedings of the 1st International Workshop on Gender Equality in Software Engineering (pp. 14–16). New York, NY: Association for Computing Machinery.

  • Levinson, S. C. (1997). Language and cognition: The cognitive consequences of spatial description in Guugu Yimithirr. Journal of Linguistic Anthropology, 7(1), 98–131.

  • Leybourn, T. (1802). A synopsis of data for the construction of triangles. London: Glendinning.

  • Lie, M. (1995). Technology and masculinity: The case of the computer. European Journal of Women's Studies, 2(3), 379–394.

  • Lindqvist, A., Renström, E. A., & Sendén, M. G. (2019). Reducing a male bias in language? Establishing the efficiency of three different gender-fair language strategies. Sex Roles, 81(1–2), 109–117.

  • Madras, D., Creager, E., Pitassi, T., & Zemel, R. (2019). Fairness through causal awareness: learning causal latent-variable models for biased data. In Proceedings of the conference on fairness, accountability, and transparency (pp. 349–358). New York, NY: Association for Computing Machinery.

  • Margolis, J., & Fisher, A. (2002). Unlocking the clubhouse: women in computing. Cambridge, MA: MIT Press.

  • Mendick, H., & Moreau, M.-P. (2013). New media, old images: constructing online representations of women and men in science, engineering and technology. Gender and Education, 25(3), 325–339.

  • Mitchell, T. M. (1997). Machine learning. New York: McGraw Hill.

  • Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency (pp. 220–229). New York, NY: Association for Computing Machinery.

  • Niu, H., Keivanloo, I., & Zou, Y. (2017). Learning to rank code examples for code search engines. Empirical Software Engineering, 22(1), 259–291.

  • Okolloh, O. (2009). Ushahidi, or ‘testimony’: Web 2.0 tools for crowdsourcing crisis information. Participatory Learning and Action, 59(1), 65–70.

  • Oldenziel, R. (1999). Making technology masculine: men, women and modern machines in America. Amsterdam: Amsterdam University Press.

  • Pechtelidis, Y., Kosma, Y., & Chronaki, A. (2015). Between a rock and a hard place: Women and computer technology. Gender and Education, 27(2), 1–19.

  • Powles, J., with Nissenbaum, H. (2018). The seductive diversion of “solving” bias in artificial intelligence. Medium. 8 December 2018. https://medium.com/s/story/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53.

  • Quong, J. (2013). Public reason. The Stanford Encyclopedia of Philosophy (Summer 2013 edition), Edward N. Zalta (Ed.), https://plato.stanford.edu/archives/sum2013/entries/public-reason/.

  • Roos, T. (2019). Elements of AI. Online Course. Reaktor & University of Helsinki. https://www.elementsofai.com.

  • Roth, L. (2013). The fade-out of Shirley, a once-ultimate norm: colour balance, image technologies, and cognitive equity. In R. E. Hall (Ed.), The melanin millennium (pp. 273–286). Dordrecht: Springer.

  • Russell, S. J., & Norvig, P. (2016). Artificial intelligence: a modern approach. Upper Saddle River: Pearson.

  • Russo, S. (1957). Data vs. Capta or Sumpta. American Psychologist, 12(5), 283–284.

  • Shi, Z. R., Wang, C., & Fang, F. (2020). Artificial intelligence for social good: A survey. Ground AI. https://www.groundai.com/project/artificial-intelligence-for-social-good-a-survey/1.

  • Shieber, S. M. (Ed.). (2004). The Turing test: verbal behavior as the hallmark of intelligence. Cambridge, MA: MIT Press.

  • Steels, L. (2007). Fifty years of AI: from symbols to embodiment – and back. In 50 years of artificial intelligence (pp. 18–28). Berlin, Heidelberg: Springer.

  • Sumner, J. (2014). Defiance to compliance: visions of the computer in postwar Britain. History and Technology, 30(4), 309–333.

  • Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.

  • Turkle, S. (2005). The second self: computers and the human spirit. Cambridge, MA: MIT Press.

  • Von Neumann, J. (1958). The computer and the brain. New Haven: Yale University Press.

  • Wempen, F. (2014). Computing fundamentals: Introduction to computers. Hoboken: John Wiley & Sons.

  • West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating systems: gender, race and power in AI. White paper. AI Now Institute. https://ainowinstitute.org/discriminatingsystems.html.

  • Wood, J., & Silver, D. (1995). Joint application development. New York: John Wiley & Sons.

  • Xu, J., Zhou, S., Chen, H., & Li, P. (2015). A sample partition method for learning to rank based on query-level vector extraction. In 2015 International Joint Conference on Neural Networks (IJCNN) (pp. 1–7). IEEE.

  • Zarkadakis, G. (2015). In our own image: will artificial intelligence save or destroy us? New York: Random House.


Author information

Corresponding author

Correspondence to Rachel Hendery.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Bones, H., Ford, S., Hendery, R. et al. In the Frame: the Language of AI. Philos. Technol. 34 (Suppl 1), 23–44 (2021). https://doi.org/10.1007/s13347-020-00422-7
