
15 challenges for AI: or what AI (currently) can’t do


Abstract

The current “AI Summer” is marked by scientific breakthroughs and economic successes in the research, development, and application of systems with artificial intelligence. Yet alongside the great hopes and promises associated with artificial intelligence, the technology faces a number of challenges, shortcomings, and even limitations. First, these challenges arise from methodological and epistemological misconceptions about the capabilities of artificial intelligence. Second, they result from restrictions of the social context in which the development of machine learning applications is embedded. Third, they are a consequence of current technical limitations in the development and use of artificial intelligence. This paper provides an overview of the current challenges facing research and development of applications in the field of artificial intelligence and machine learning, exploring each of these three areas in turn.



Acknowledgements

This research was supported by the Cluster of Excellence “Machine Learning—New Perspectives for Science” funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy—Reference number EXC 2064/1—Project ID 390727645.

Author information

Corresponding author

Correspondence to Thilo Hagendorff.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Hagendorff, T., Wezel, K. 15 challenges for AI: or what AI (currently) can’t do. AI & Soc 35, 355–365 (2020). https://doi.org/10.1007/s00146-019-00886-y

