AI’s Fast and Furtive Spread by Infusion into Technologies That Are Already in Use—A Critical Assessment


Part of the book series: Social and Cultural Studies of Robots and AI (SOCUSRA)

Abstract

AI has often reached individuals covertly, rather than by their own choosing. Standard automatic version updates have enabled the infusion of AI, in the form of deep learning, into preexisting technologies such as mobile apps, websites, and software. All of the most popular mobile apps, including YouTube, Facebook, and Snapchat, have been AI-infused. This has allowed deep learning algorithms to train on behavioral data from billions of individuals. Infusion contrasts with the conscious user adoption of standalone AI technologies that depend on AI for their main functionality, such as robot vacuums and smart home devices.

This chapter examines the relationship between infusion and AI’s ethical challenges. AI is a different type of technology than earlier innovations; it has well-known shortcomings that include unpredictability, inadequate transparency, unequal treatment, a lack of common sense, and a risk of user manipulation. Because of infusion, it seems that people who want to (continue to) use popular online platforms often do not have a real choice when it comes to AI exposure, and this may threaten social values such as equality, respect, and autonomy. It seems that an urgent AI-related problem right now is not that some general AI is manipulating us, but that a supplier of narrow AI may be able to.


Notes

  1.

    This framework is a synthesis of six international, expert-driven declarations on AI ethics: the Asilomar AI Principles (2017), developed by thought leaders and AI researchers from academia and industry; the Montréal Declaration for a Responsible Development of Artificial Intelligence (2017); the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (2017); the Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems (2018) by the European Commission’s European Group on Ethics in Science and New Technologies; the House of Lords Artificial Intelligence Committee in the U.K. (2017); and the Partnership on AI in San Francisco (2018), which represents academics, researchers, civil society organizations, and companies that develop AI technology.

  2.

    While I focus on overlaps between bioethics and digital ethics, further connections between AI and the medical context are explored in Datta Burton’s chapter.

  3.

    Garvey’s Manifesto in this anthology challenges the very idea that we should have AI at all, let alone a choice about our exposure to it.

  4.

    For further discussion of AI and the conundrum of responsibility, see Schwartz’s chapter in this collection.

References

  • An, Mimi. “Artificial Intelligence Is Here—People Just Don’t Realize It.” HubSpot Blog, published January 30, 2017, updated December 11, 2019, accessed March 14, 2021. https://blog.hubspot.com/news-trends/artificial-intelligence-is-here.

  • App Annie. “The Mobile Performance Standard.” App Annie, accessed November 29, 2019. https://www.appannie.com.

  • Asilomar AI Principles. “Principles Developed in Conjunction with the 2017 Asilomar Conference.” Future of Life Institute, 2017, accessed December 28, 2020. https://futureoflife.org/ai-principles.

  • BBC. “Facebook to Stop Recommending Civic and Political Groups.” BBC, January 28, 2021.

  • Boerman, Sophie C., Sanne Kruikemeier, and Frederik J. Zuiderveen Borgesius. “Online Behavioral Advertising: A Literature Review and Research Agenda.” Journal of Advertising 46, no. 3 (2017): 363–376.

  • Bosher, Hayleigh, and Sevil Yeşiloğlu. “An Analysis of the Fundamental Tensions Between Copyright and Social Media: The Legal Implications of Sharing Images on Instagram.” International Review of Law, Computers and Technology 33, no. 2 (2019): 164–186.

  • Brown, Dalvin. “AI Bias: How Tech Determines if You Land Job, Get a Loan or End up in Jail.” USA Today, October 2, 2019.

  • Cambridge Dictionary. “Meaning of AI in English.” Cambridge University Press, 2021.

  • Chang, Daphne, Erin L. Krupka, Eytan Adar, and Alessandro Acquisti. “Engineering Information Disclosure: Norm Shaping Designs.” In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 587–597, 2016.

  • Chaslot, Guillaume. Twitter Post. Replying to @gchaslot. February 9, 2019, 11:17 PM, accessed March 14, 2021. https://twitter.com/gchaslot/status/1094359568052817920.

  • Deloitte. “Smartphones are Useful, But They Can Be Distracting.” Deloitte, 2017, accessed July 10, 2020. https://www2.deloitte.com/content/dam/Deloitte/global/Images/infographics/technologymediatelecommunications/gx-deloitte-tmt-2018-smartphones-report.pdf.

  • Ekstrand, Michael D., Mucun Tian, Ion Madrazo Azpiazu, Jennifer D. Ekstrand, Oghenemaro Anuyah, David McNeill, and Maria Soledad Pera. “All the Cool Kids, How Do They Fit in?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness.” In Conference on Fairness, Accountability and Transparency, pp. 172–186. PMLR, 2018.

  • Ekstrand, Michael D., Robin Burke, and Fernando Diaz. “Fairness and Discrimination in Retrieval and Recommendation.” In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1403–1404, 2019. Supplementary slides, accessed March 13, 2021. https://fair-ia.ekstrandom.net/sigir2019-slides.pdf.

  • Engström, Emma, and Pontus Strimling. “Deep Learning Diffusion by Infusion into Preexisting Technologies–Implications for Users and Society at Large.” Technology in Society 63 (2020): 101396.

  • European Group on Ethics in Science and New Technologies. “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems.” Published March 9, 2018, accessed December 28, 2020. https://op.europa.eu/en/publication-detail/-/publication/dfebe62e-4ce9-11e8-be1d-01aa75ed71a1/language-en/format-PDF/source-78120382.

  • Fairfield, Joshua A. T., and Christoph Engel. “Privacy as a Public Good.” Duke Law Journal 65 (2015): 385.

  • Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, et al. “AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations.” Minds and Machines 28, no. 4 (2018): 689–707.

  • Floridi, Luciano. The Ethics of Information. Oxford University Press, 2013.

  • Gerber, Nina, Paul Gerber, and Melanie Volkamer. “Explaining the Privacy Paradox: A Systematic Review of Literature Investigating Privacy Attitude and Behavior.” Computers and Security 77 (2018): 226–261.

  • Gomez-Uribe, Carlos A., and Neil Hunt. “The Netflix Recommender System: Algorithms, Business Value, and Innovation.” ACM Transactions on Management Information Systems (TMIS) 6, no. 4 (2015): 1–19.

  • Google. “Update the YouTube App.” 2021, accessed May 2, 2021. https://support.google.com/youtube/answer/7341336?co=GENIE.Platform%3Dandroid&hl=en.

  • Hannak, Aniko, Gary Soeller, David Lazer, Alan Mislove, and Christo Wilson. “Measuring Price Discrimination and Steering on e-Commerce Web Sites.” In Proceedings of the 2014 Conference on Internet Measurement Conference, pp. 305–318, 2014.

  • Horwitz, Jeff, and Deepa Seetharaman. “Facebook Executives Shut Down Efforts to Make the Site Less Divisive.” The Wall Street Journal, May 26, 2020.

  • House of Lords Artificial Intelligence Committee. “AI in the UK: Ready, Willing and Able?” Select Committee on Artificial Intelligence, published April 16, 2017, accessed December 28, 2020. https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/10002.htm.

  • IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. “Ethically Aligned Design, v2.” 2017, accessed December 28, 2020. https://exploreaiethics.com/guidelines/ethically-aligned-design-v2/.

  • Kosinski, Michal, David Stillwell, and Thore Graepel. “Private Traits and Attributes Are Predictable from Digital Records of Human Behavior.” Proceedings of the National Academy of Sciences 110, no. 15 (2013): 5802–5805.

  • Krasnova, Hanna, Natasha F. Veltri, and Oliver Günther. “Self-Disclosure and Privacy Calculus on Social Networking Sites: The Role of Culture.” Business & Information Systems Engineering 4, no. 3 (2012): 127–135.

  • Larsson, Linus. “Sociala medier döms ut som meningslösa” [Social Media Dismissed as Meaningless]. (In Swedish.) Dagens Nyheter, October 10, 2019.

  • LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. “Deep Learning.” Nature 521, no. 7553 (2015): 436–444.

  • Lee, MinHwa, JinHyo Joseph Yun, Andreas Pyka, DongKyu Won, Fumio Kodama, Giovanni Schiuma, HangSik Park, et al. “How to Respond to the Fourth Industrial Revolution, or the Second Information Technology Revolution? Dynamic New Combinations Between Technology, Market, and Society Through Open Innovation.” Journal of Open Innovation: Technology, Market, and Complexity 4, no. 3 (2018): 21.

  • Lombardo, Salvator, Jun Han, Christopher Schroers, and Stephan Mandt. “Deep Generative Video Compression.” In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), edited by H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, and R. Garnett. Curran Associates, Inc., 2019.

  • Metz, Cade. “AI Is Transforming Google Search. The Rest of the Web Is Next.” Wired, February 4, 2016.

  • Mittelstadt, Brent Daniel, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. “The Ethics of Algorithms: Mapping the Debate.” Big Data & Society 3, no. 2 (2016): 2053951716679679.

  • Montréal Declaration for a Responsible Development of Artificial Intelligence. “The Declaration.” Université de Montréal, 2017, accessed December 28, 2020. https://www.montrealdeclaration-responsibleai.com/the-declaration.

  • O’Callaghan, Derek, Derek Greene, Maura Conway, Joe Carthy, and Pádraig Cunningham. “Down the (White) Rabbit Hole: The Extreme Right and Online Recommender Systems.” Social Science Computer Review 33, no. 4 (2015): 459–478.

  • O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Broadway Books, 2016.

  • Parloff, Roger. “Why Deep Learning Is Suddenly Changing Your Life.” Fortune, September 28, 2016.

  • Partnership on AI. “Tenets.” 2018, accessed December 28, 2020. https://www.partnershiponai.org/tenets.

  • Pasquale, Frank. The Black Box Society. Harvard University Press, 2015.

  • Pennycook, Gordon, Ziv Epstein, Mohsen Mosleh, Antonio A. Arechar, Dean Eckles, and David G. Rand. “Shifting Attention to Accuracy Can Reduce Misinformation Online.” Nature (2021): 1–6.

  • Plantin, Jean-Christophe, Carl Lagoze, Paul N. Edwards, and Christian Sandvig. “Infrastructure Studies Meet Platform Studies in the Age of Google and Facebook.” New Media and Society 20, no. 1 (2018): 293–310.

  • Ribeiro, Manoel Horta, Raphael Ottoni, Robert West, Virgílio A. F. Almeida, and Wagner Meira Jr. “Auditing Radicalization Pathways on YouTube.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 131–141, 2020.

  • Roose, Kevin. “The Making of a YouTube Radical.” The New York Times, June 8, 2019.

  • Ryan, Richard M., and Edward L. Deci. “Self‐Regulation and the Problem of Human Autonomy: Does Psychology Need Choice, Self‐Determination, and Will?” Journal of Personality 74, no. 6 (2006): 1557–1586.

  • Sætra, Henrik Skaug. “When Nudge Comes to Shove: Liberty and Nudging in the Era of Big Data.” Technology in Society 59 (2019): 101130.

  • Schwab, Klaus. “The Fourth Industrial Revolution: What It Means and How to Respond.” Foreign Affairs, December 12, 2015.

  • Smith, Aaron, Skye Toor, and Patrick van Kessel. “Many Turn to YouTube for Children’s Content, News, How-To Lessons.” Pew Research, published November 7, 2018, accessed February 28, 2021. https://www.pewresearch.org/internet/2018/11/07/many-turn-to-youtube-for-childrens-content-news-how-to-lessons.

  • Solsman, Joan. “YouTube’s AI Is the Puppet Master Over Most of What You Watch.” CNET, January 10, 2018.

  • Steiner, Dirk D., Wanda A. Trahan, Dawn E. Haptonstahl, and Valérie Fointiat. “The Justice of Equity, Equality, and Need in Reward Distributions: A Comparison of French and American Respondents.” Revue internationale de psychologie sociale 19, no. 1 (2006): 49–74.

  • Susser, Daniel, Beate Roessler, and Helen F. Nissenbaum. “Online Manipulation: Hidden Influences in a Digital World.” Georgetown Law Technology Review 4 (2019): 1.

  • Sweeney, Latanya. “Discrimination in Online Ad Delivery: Google Ads, Black Names and White Names, Racial Discrimination, and Click Advertising.” Queue 11, no. 3 (2013): 10–29.

  • Vidal, Carol, Tenzin Lhaksampa, Leslie Miller, and Rheanna Platt. “Social Media Use and Depression in Adolescents: A Scoping Review.” International Review of Psychiatry 32, no. 3 (2020): 235–253.

  • Waldman, Ari Ezra. “Cognitive Biases, Dark Patterns, and the ‘Privacy Paradox’.” Current Opinion in Psychology 31 (2020): 105–109.

  • Weiss, Bari. “Meet the Renegades of the Intellectual Dark Web.” The New York Times, May 8, 2018.

  • World Intellectual Property Organization (WIPO). “Technology Trends Artificial Intelligence.” Geneva: World Intellectual Property Organization, 2019, accessed December 10, 2019. https://www.wipo.int/edocs/pubdocs/en/wipo_pub_1055.pdf.

  • Wang, Yilun, and Michal Kosinski. “Deep Neural Networks Are More Accurate Than Humans at Detecting Sexual Orientation from Facial Images.” Journal of Personality and Social Psychology 114, no. 2 (2018): 246.

  • Xenidis, Raphaële, and Linda Senden. “EU Non-Discrimination Law in the Era of Artificial Intelligence: Mapping the Challenges of Algorithmic Discrimination.” In General Principles of EU Law and the EU Digital Order, edited by Ulf Bernitz, Xavier Groussot, Jaan Paju and Sybe A. De Vries, 151–182. Alphen aan den Rijn: Kluwer Law International, 2020.

  • Xiao, Bo, and Izak Benbasat. “An Empirical Examination of the Influence of Biased Personalized Product Recommendations on Consumers’ Decision Making Outcomes.” Decision Support Systems 110 (2018): 46–57.

  • Yeung, Karen. “‘Hypernudge’: Big Data as a Mode of Regulation by Design.” Information, Communication & Society 20, no. 1 (2017): 118–136.

  • Zuiderveen Borgesius, Frederik. “Discrimination, Artificial Intelligence, and Algorithmic Decision-Making.” Council of Europe, Directorate General of Democracy, 2018, accessed March 13, 2021. https://rm.coe.int/discrimination-artificial-intelligence-andalgorithmic-decision-making/1680925d73.

This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program—Humanities and Society (WASP-HS) funded by the Marianne and Marcus Wallenberg Foundation.

Author information

Correspondence to Emma Engström.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Engström, E. (2022). AI’s Fast and Furtive Spread by Infusion into Technologies That Are Already in Use—A Critical Assessment. In: Hanemaayer, A. (ed.) Artificial Intelligence and Its Discontents. Social and Cultural Studies of Robots and AI. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-88615-8_4

  • DOI: https://doi.org/10.1007/978-3-030-88615-8_4

  • Publisher Name: Palgrave Macmillan, Cham

  • Print ISBN: 978-3-030-88614-1

  • Online ISBN: 978-3-030-88615-8

  • eBook Packages: Social Sciences, Social Sciences (R0)
