Abstract
What is meant by a cultural life of machine learning (ML), and how can it be studied? How can one keep up with the rapid pace of ML’s technical development while simultaneously offering analyses that “generalize” to the next generation? We argue for an end-to-end sociology of contemporary ML/AI, one that trains itself on the entire cultural process of ML research, production, and deployment, from historical genesis to sociotechnical innovation to political impact and back again. Along with the other chapters in this volume, we discuss how ML historically developed its claim to capture meaning; how its present-day relevance is tied to an increasingly cascading agency; and why one must critically address ML’s limited forms of interpretation in order to ultimately alter its course.
© 2021 The Author(s)
Cite this chapter
Roberge, J., Castelle, M. (2021). Toward an End-to-End Sociology of 21st-Century Machine Learning. In: Roberge, J., Castelle, M. (eds) The Cultural Life of Machine Learning. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-56286-1_1
Print ISBN: 978-3-030-56285-4
Online ISBN: 978-3-030-56286-1
eBook Packages: Literature, Cultural and Media Studies (R0)