If I were to pick a year to mark the beginning of the commercial Web, I might suggest 2004, when Facebook was launched, and Google held its IPO (“initial public offering”). Before that, the debate on ethical issues—from privacy to bias, from moderating illegal or unethical content to the protection of IPR (intellectual property rights), from fake news and disinformation to the digital divide—had been largely academic. Not in the negative, metaphorical sense of practically irrelevant, but literally: most of us discussing these problems worked in higher education. They were predictable problems and, since the end of the 1980s, at conferences, in specialised publications, and in university lectures and seminars, we discussed them as fundamental and pressing, both ethically and socially. At the first conference of the International Association for Computing and Philosophy (of which I was president), in 1986, among the topics on the program, we had the following: online teaching; how to teach mathematical logic with software that ran on DOS (Footnote 1); and something that was called at the time “computer ethics”, which later became “information ethics”, and which today we often call “digital ethics”. But it was too early. Prevention is not applied; it is regretted during the cure.

More or less after 2004, concerns began to spread to public opinion as well. The commercialisation of the Web brought into everyday life ethical problems already present in specialised contexts, such as spyware, software that collects data without the user’s consent (the term was coined in 1995). Soon the pressure began to build up to improve companies’ strategies and policies, and to update—or rather, upgrade—the regulatory framework. It was in that period that self-regulation started to appear as a strategy for dealing with the ethical crisis. I remember meetings in Brussels where it was common for managers, policymakers, legislators, politicians, civil servants, and technical experts to support the value of self-regulation, for example, in contexts such as free speech online. It seemed like a good idea. Already in those years, Facebook insisted that it was preferable not to legislate but to operate in a “soft” way (the expression “soft law” is used to refer to rules without direct binding effect), through codes of conduct which, for example, would have guaranteed that only people over the age of 13 were present on the platform (I objected, even at the time, arguing that the empowerment of parents should not be equivalent to a shift in legal responsibility; if a child buys alcohol from a shop in England, the parents may be reprimanded, but the shop is in legal trouble). The notion was circulating that the digital industry could formulate its own ethical codes and standards as well as request and monitor adherence to them, without the need for external controls or impositions. It was not a bad idea. And I use the double negative on purpose, to endorse a limited and contextualised, yet still positive assessment of it. In the past, I have often argued in favour of self-regulation. Not as a definitive, complete, or unique solution, but as a good step in the right direction, to be followed by many others, including steps of a legal nature.

Many international relationships are based on soft law, for example. In particular, the Council of Europe promotes respect for human rights, democracy, and the rule of law through recommendations that indicate desirable behaviours and outcomes, but without sanctions for non-compliance. Recently, I introduced and defended the need for soft ethics (not just hard ethics, which we learn in life, and which we study in the classics), which respects but goes beyond mere compliance with the law in force (Floridi, 2018). Soft ethics is not only post-feasibility (both in the “ought implies can” sense and in the non-supererogatory sense) but also post-compliance. It assumes that, if the law is morally acceptable (if it is not, then the hard ethics case applies), once one has abided by the law, one may wish to do more than the law requires. For example, paying employees more than is required by law is also a matter of soft ethics. In theory, through self-regulation, soft ethics, and soft law, companies could adopt better behaviour models and operate in ways more ethically aligned with commercial, social, and environmental needs and values. And they could do so in faster, more agile, and more efficient ways, an essential consideration for an industry that is evolving as rapidly as the digital one. All this could happen by anticipating new legislation or international agreements, without having to wait for them. If developed and applied correctly, self-regulation could prevent disasters, enable companies to seize more opportunities, and prepare them to adapt to future legal frameworks. I had in mind the philosophy that had inspired one of the greatest Italian innovators of the last century, Adriano Olivetti (Peroni & Cecchetti, 2013). He had applied (what I call) a soft ethics strategy to run his company, with extraordinary success. So much so that, today, Olivetti’s factory, buildings, and residential units in the industrial city of Ivrea (Piedmont)—built according to the ideal of the Community Movement (Movimento Comunità)—are recognised as a model social project and are listed as a UNESCO World Heritage Site.

Soft ethics could also contribute to the legislation itself, anticipating and experimenting with solutions that are more easily updatable and improvable. Soft ethics and soft law could work as sandpits, an approach that the recent AI Act also recognises through its provisions for regulatory sandboxes. I remain convinced that, in those years, it was realistic and reasonable to believe that self-regulation could help foster an ethically constructive and fruitful dialogue between the digital industry and society. As I have often argued, it was worth trying the path of self-regulation, not exclusively, but as a complement to the evolving legislation. Unfortunately, things went very differently.

If I had to choose another year, this time to indicate the coming of age of the era of self-regulation, I might suggest 2014, when Google set up an Advisory Council (of which I was a member) to address the consequences of the ruling on the “right to be forgotten” by the Court of Justice of the European Union. It was the first of many other similar initiatives. That project had considerable visibility and exposure, and I believe it managed to achieve some success (Footnote 2), but, overall, the following era of self-regulation was disappointing. In subsequent years, the Facebook-Cambridge Analytica scandal in 2018—predictable and preventable—and the blatantly ill-conceived and very short-lived Advanced Technology External Advisory Council, set up by Google on AI ethics in 2019 (of which I was a member), showed how difficult and, in the end, ineffective self-regulation was. Ultimately, companies appeared to be reluctant or unable to solve their ethical problems, not necessarily in terms of resources, lobbying, and public relations, but in terms of top-level strategy, at the C-suite level, to correct a mentality and behaviours that were just too deeply rooted. When the industry recently reacted to the ethical challenges posed by AI by creating hundreds of codes, guidelines, manifestos, and statements (Floridi & Cowls, 2019), self-regulation appeared in all its embarrassing vacuity. The impression of “blue washing” (Footnote 3) was strong and widespread. Today, Facebook’s Oversight Board, established in 2020, is an anachronism, a belated reaction to the end of an era during which self-regulation failed to make a significant difference. It is too late, not least because legislation has caught up (or soon will) with the digital industry. In particular, in the EU, the General Data Protection Regulation (GDPR, in force since 2016) has been followed by legislative initiatives such as the Digital Markets Act, the Digital Services Act, and the AI Act (Floridi, 2021), to name the most significant. It is a regulatory movement likely to generate a vast Brussels effect, replacing soft regulation, which never really took off, with legal compliance and penalties.

Companies have a crucial role to play beyond legal requirements, both socially and environmentally. For this reason, soft ethics remains an essential element of competitive acceleration and “good citizenship” in contexts where the legislation is absent, ambiguous and in need of interpretation, or clear and ethically sound. But the era of self-regulation, as a strategy for dealing with the ethical challenges posed by the digital revolution, is over. It leaves behind, as a legacy, some good work. It cleared things up, by identifying and analysing some problems and some solutions. It improved cultural and social awareness. It helped to develop new ethical sensitivities. And it did make some positive contributions to legislation, at least indirectly. For example, the High-Level Expert Group on Artificial Intelligence (of which I was a member), set up by the European Commission, saw the participation of industrial partners and provided the ethical framework for the AI Act. It was not a collaboration to regret. However, the call for self-regulation, addressed by society to the digital industry, was largely ignored. It was a great but missed historical opportunity, very costly socially, environmentally, and economically. One only needs to think of the vast and ramified consequences of online disinformation. The time has come to acknowledge that, much as it might have been worth trying, self-regulation did not work. So, to use the words of the Gospel, now that the invitation has not been accepted, the alternative is “to force them [companies] to enter” (Luke 14:23). Self-regulation needs to be replaced by the law; the sooner, the better. Dura lex, sed lex digitalis: this is why the EU is at the forefront of the debate on digital governance.