The trouble with seasonal metaphors is that they are cyclical. If you say that artificial intelligence (AI) got through a bad winter, you must also remember that winter will return, and you had better be ready. An AI winter is the stage when technology, business, and the media get out of their warm and comfortable bubble, cool down, temper their sci-fi speculations and unreasonable hype, and come to terms with what AI can or cannot really do as a technology (Floridi 2019), without exaggeration. Investments become more discerning, and journalists stop writing about AI, chasing other fashionable topics and fuelling the next fad.
AI has had several winters. Among the most significant were one in the late 1970s and another at the turn of the 1980s and 1990s. Today, we are talking about another, predictable winter (Nield 2019; Walch 2019; Schuchmann 2019). AI is subject to these hype cycles because it is a hope or fear that we have entertained since we were thrown out of paradise: something that does everything for us, instead of us, better than us, with all the dreamy advantages (we shall be on holiday forever) and the nightmarish risks (we are going to be enslaved) that this entails. For some people, speculating about all this is irresistible. It is the wild west of "what if" scenarios. But I hope the reader will forgive me for an "I told you so" moment. For some time, I have been warning against commentators and "experts" who were competing to see who could tell the tallest tale (Floridi 2016). A web of myths ensued. They spoke of AI as if it were the ultimate panacea, which would solve everything and overcome everything; or as the final catastrophe, a superintelligence that would destroy millions of jobs, replacing lawyers and doctors, journalists and researchers, truckers and taxi drivers, and end up dominating human beings as if they were, at best, pets. Many followed Elon Musk in declaring the development of AI the greatest existential risk faced by humanity. As if most of humanity did not live in misery and suffering. As if wars, famine, pollution, global warming, social injustice, and fundamentalism were science fiction, or just negligible nuisances, unworthy of their consideration. They insisted that law and regulations would always be too late and never catch up with AI, when in fact norms are not about the speed but about the direction of innovation, for they should steer the proper development of a society (if we like where we are heading, we cannot go there quickly enough). Today, we know that legislation is coming, at least in the EU.
They claimed AI was a magic black box we could never explain, when in fact it is a matter of choosing the correct level of abstraction at which to interpret the complex interactions engineered—even car traffic downtown becomes a black box if you wish to know why every single individual is there at that moment. Today, adequate tools to monitor and understand how machine learning systems reach their outcomes are increasingly being developed (Watson and Floridi forthcoming). They spread scepticism about the possibility of an ethical framework that would synthesize what we mean by socially good AI, when in fact the EU, the OECD, and China have converged on very similar principles, which offer a common platform for further agreements (Floridi and Cowls 2019). Sophists in search of headlines. They should be ashamed and apologize. Not only for their untenable comments, but also for their great irresponsibility and alarmism, which have misled public opinion both about a potentially useful technology—one that could provide helpful solutions, from medicine to security and monitoring systems (Taddeo and Floridi 2018)—and about the real risks—which we know are concrete but so much less fancy, from the everyday manipulation of choices (Milano et al. 2019) to increased pressure on individual and group privacy (Floridi 2014), from cyberconflicts to the use of AI by organized crime for money laundering and identity theft (King et al. 2020).
The risk of every AI summer is that over-inflated expectations turn into a mass distraction. The risk of every AI winter is that the backlash is excessive, the disappointment too negative, and potentially valuable solutions are thrown out with the bathwater of illusions. Managing the world is an increasingly complex task: megacities and their "smartification" offer a good example. And we have planetary problems—such as global warming, social injustice, and migration—that require ever higher degrees of coordination to be solved. It seems obvious that we need all the good technology we can design, develop, and deploy to cope with these challenges, and all the human intelligence we can exercise to put this technology in the service of a better future. AI can play an important role in all this, because we need increasingly smarter ways of processing immense quantities of data, sustainably and efficiently. But AI must be treated as a normal technology, neither as a miracle nor as a plague, and as one of the many solutions that human ingenuity has managed to devise. This is also why the ethical debate about AI remains forever an entirely human question.
Now that a new AI winter is coming, we may try to learn some lessons and avoid this yo-yo of unreasonable illusions and exaggerated disillusions. Let us not forget that the winter of AI should not be the winter of its opportunities. It certainly will not be the winter of its risks or challenges. We need to ask ourselves whether AI solutions are really going to replace previous solutions—as the automobile did with the carriage—diversify them—as the motorcycle did with the bicycle—or complement and expand them—as the digital smart watch has done with the analog one. What will be the level of social acceptability or preferability of whatever AI survives the new winter? Are we really going to wear some kind of strange glasses to live in a virtual or augmented world created by AI? Consider that today many people are reluctant to wear glasses even when they seriously need them, just for aesthetic reasons. And then, are there feasible AI solutions in everyday life? Are the necessary skills, datasets, infrastructure, and business models in place to make an AI application successful? Futurologists find these questions boring. They like a single, simple idea that interprets and changes everything, which can be spread thinly across an easy book that makes the reader feel intelligent, a book to be read by everyone today and ignored by all tomorrow. It is the bad diet of junk fast food for thought and the curse of the airport bestseller. We need to resist oversimplification. This time, let us think more deeply and extensively about what we are doing and planning with AI. The exercise is called philosophy, not futurology.
Floridi, L. (2014). Open data, data protection, and group privacy. Philosophy & Technology, 27(1), 1–3.
Floridi, L. (2016). Should we be afraid of AI? Aeon Essays. https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible.
Floridi, L. (2019). What the near future of artificial intelligence could be. Philosophy & Technology, 32(1), 1–15. https://doi.org/10.1007/s13347-019-00345-y.
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1).
King, T. C., Aggarwal, N., Taddeo, M., & Floridi, L. (2020). Artificial intelligence crime: an interdisciplinary analysis of foreseeable threats and solutions. Science and Engineering Ethics, 26(1), 89–120.
Milano, S., Taddeo, M., & Floridi, L. (2019). Recommender systems and their ethical challenges. Available at SSRN 3378581.
Nield, T. (2019). Is deep learning already hitting its limitations? And is another AI winter coming? Towards Data Science. https://towardsdatascience.com/is-deep-learning-already-hitting-its-limitations-c81826082ac3.
Schuchmann, S. (2019). Probability of an approaching AI winter. Towards Data Science. https://towardsdatascience.com/probability-of-an-approaching-ai-winter-c2d818fb338a.
Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752.
Walch, K. (2019). Are we heading for another AI winter soon? Forbes. https://www.forbes.com/sites/cognitiveworld/2019/10/20/are-we-heading-for-another-ai-winter-soon/#783bf81256d6.
Watson, D. S., & Floridi, L. (forthcoming). The explanation game: a formal framework for interpretable machine learning. Synthese.
Floridi, L. AI and Its New Winter: from Myths to Realities. Philos. Technol. 33, 1–3 (2020). https://doi.org/10.1007/s13347-020-00396-6