1 Introduction

The information revolution is affecting our understanding of the world and of ourselves: we are interconnected informational organisms that share with biological organisms and engineered artefacts “a global environment ultimately made of information,” i.e., what Luciano Floridi calls “the infosphere” (Floridi 2013). A crucial feature of this new environment has to do with the complex ways in which multi-agent (human/artificial) systems interact. This informational complexity challenges the concepts and ways of reasoning through which, so far, we have grasped basic tenets of law and politics. The starting point of the analysis concerns the use of information and communication technologies (ICTs): whereas, over the past centuries, human societies have been ICTs-related but mainly dependent on technologies that revolve around energy and basic resources, today’s societies are increasingly dependent on ICTs and, moreover, on information as a vital resource. In a nutshell, we are dealing with ICTs-driven societies (Floridi Forthcoming).

What this huge transformation means, from a legal and political viewpoint, can be illustrated with the ubiquitous nature of the information on the internet. The flow of this information transcends conventional boundaries of national legal systems, as shown by cases that scholars address as a part of their everyday work in the fields of information technology (IT) law, i.e., data protection, computer crimes, digital copyright, e-commerce, and so forth. This flow of information jeopardizes traditional assumptions of legal and political thought, by increasing the complexity of human societies. ICTs-driven societies are in fact characterized by a collective behaviour, which emerges from large networks of individual components, without central control, or simple rules of operation. In addition, these systems present sophisticated signalling and information processing, through which they adapt to the environment and, what is more, spontaneous orders evolve through such informational complexity. Although, in his seminal book The Sciences of the Artificial (new ed. 1996), Herbert Simon warned that complexity is “too general a subject to have much content,” he pinpointed cases where this approach to the complexity of the subject matter can be particularly fruitful: “particular classes of complex systems possessing strong properties that provide a fulcrum for theorizing and generalizing can serve as the foci of attention” (Simon 1996, p. 181).

Here, we can start appreciating how the complexity of ICTs-driven societies affects canonical tenets of legal and political thought, in four different ways. Figure 1 helps me illustrate this informational approach to the complexity of current legal systems.

Fig. 1 The legal complexity of ICTs-driven societies

First, the idea of the law as a set of rules enforced through the menace of physical sanctions (e.g., Kelsen 1949) often falls short in coping with the new legal and political challenges of the information revolution: identity thefts, spamming, phishing, viruses, and cyber attacks have increased over the past decade, regardless of harsh national laws such as the US CAN-SPAM Act of 2003. Furthermore, a number of issues, such as national security, cyber-terrorism, availability of resources and connectivity, are systemic, that is, they concern the whole infrastructure and environment of today’s ICTs-driven societies and, thus, these issues have to be tackled at international and transnational levels. Unsurprisingly, national law-making activism is short of breath, and this is why constitutional powers of national governments have been joined—and even replaced—by the network of competences and institutions summarized by the idea of governance. Leaving aside how this profound transformation affects the sovereignty of national states, much as democratic processes and models of political legitimacy, attention should be drawn to how often, in this context, the modern state’s monopoly of power and legitimate violence no longer holds. National sovereign states, although still relevant, should be conceived as one of the agents in “the formation and stewardship of the formal and informal rules that regulate the public realm,” that is, how Hyden, Court and Mease define the notion of governance (in Grindle 2005, p. 14).

Second, the scenario of ICTs-driven societies appears increasingly complex since the quantity of information grows and its theoretical compression decreases (Chaitin 2005). To be fair, this trend is not new: some have summed it up with the very process through which pre-modern communities developed into industrial and ICTs-related societies, up to current post-industrial, or ICTs-driven, societies (di Robilant 1973). Others have traced this complexity back to the emergence of spontaneous orders with multiple political and legal sources: for instance, in Chapter 2 of the first volume of Law, Legislation, and Liberty (1973), Hayek affirms that “one of our main contentions will be that very complex orders, comprising more particular facts than any brain could ascertain or manipulate, can be brought about only through forces inducing the formation of spontaneous orders” (Hayek 1982, p. 38). Whilst this latter analysis dwelt on the forces of local customs, international uses, and transnational markets, what is original today concerns the evolutionary processes of spontaneous orders that are ICTs-dependent, ubiquitous and, well, “complex.” Contemplate the political, legal and economic relevance of what scholars present as the network effect (Pagallo 2006; Pagallo and Ruffo 2007; Ormerod 2012). On this basis, legislators, policy makers and, generally speaking, governance actors should understand in advance the nature of the field in which they aim to intervene or, maybe, to interfere: in a word, today’s kosmos and the evolution of spontaneous orders “onlife” as opposed to the taxis of governance and the constructivism of political planning.

Third, the information politics of ICTs-driven societies is far more complex than that of ICTs-related societies, because governance actors should not only be grasped as determining the rules of the game through laws, statutes, agreements, and so forth. In addition to the traditional hard and soft law-tools of governance, such as national rules, international treaties, codes of conduct, guidelines, or the standardization of best practices, the new scenarios of the information revolution have increasingly suggested governing current ICTs-driven societies through the mechanisms of design, codes and architectures. Admittedly, some of these technological measures are not necessarily digital and yet, current advancements of technology have obliged legislators and policy makers to forge more sophisticated ways to think about legal enforcement. All in all, most of today’s legal and political challenges of the information revolution have to do with the twofold features of “generative technologies” (Zittrain 2008), such as, say, personal computers and the ways PCs ubiquitously transmit information on the internet. Although this technology allows innovation, experimentation and the wide-open Web of creative anarchy, PCs permit the spread of spam, viruses and copyright infringements that call into question the aforementioned notion of the law as (i) made of commands; (ii) enforced through physical sanctions; (iii) within the territory of a sovereign state. Some countries, like China, have built up systems of filters and re-routers, detours and dead-ends, to keep internet users on the state-approved online path. Other states, such as France or South Korea, have endorsed the so-called “three strikes”-doctrine, as a part of the graduated system which ends with the user’s disconnection from the internet after three warnings of alleged copyright infringement. At the end of the day, we should evaluate governance actors as game designers that deal with the twofold features of generative ICTs, in accordance with the different aims design may have, namely to change people’s behaviour, to decrease the impact of harm-generating conduct or, even, to prevent such harm-generating conduct from occurring.

Finally, the increasing complexity of today’s ICTs-driven societies affects the meaning of traditional legal concepts, such as reasonable foreseeability, liability, responsibility, and “legal causation.” Consider the use of unmanned aerial systems (UAS), and the current debate on whether and how we should change the EU Regulation 216/2008 and even the 1944 Chicago Convention on International Civil Aviation, so as to allow the (semi-)autonomous flight of drones. Here, we have to pay attention to the responsibility of UAS operators, manufacturers, maintenance and safety contractors, air traffic controllers or contracting parties, who interact with autonomous or semi-autonomous machines, to avoid ground damage, air-to-air collisions, communication interferences, piracy, environmental concerns, illegal searches in constitutional law, down to violations of landowners’ rights and claims of nuisance and trespass in tort law. The increasing capability of machines to be “independent of real time UAS-pilot control input,” according to the UK Defence Standards definition of autonomous flight (2011), impacts on the traditional ability of philosophers (and lawyers) to sever the chain of responsibility via notions of causation and “fault.” In his 1996 paper Liability for Distributed Artificial Intelligence, Curtis Karnow (Karnow 1996) proposed the example of “a hypothetical intelligent programming environment which handles air traffic control” such as “Alef.” The advancement of AI technology and, generally speaking, of autonomous artificial agents would ultimately break down “classic cause and effect analysis.” Additionally, it seems problematic to determine the types of harm that may supervene with the functioning of an entire processing system such as Alef’s. In the phrasing of Karnow:

No judge can isolate the ‘legal’ causes of injury from the pervasive electronic hum in which they operate, nor separate causes from the digital universe which gives them their mutable shape and shifting sense. The result is a snarled tangle of cause and effect as impossible to sequester as the winds of the air, or the currents of the ocean (op. cit.).

The different ways in which this flow of information jeopardizes basic assumptions of the law and politics are stressed throughout this volume. Luciano Floridi calls for “a new philosophy of politics among us”; Yiannis Laouris draws attention to how “future societies will have to design and implement technologies and policies to safeguard the true individual human rights and freedom”; Sarah Oates dwells on the nature of the public agora that “should be conceptualized and protected in a way that tips the balance away from the elites and toward the citizens”; May Thorseth insists on the possibility of public use of reason in the realm of digital transition, since “a virtual reality may very well be communicative in a Habermasian sense”; Charles Ess and Mireille Hildebrandt cast light on modern Western conceptions of liberal democracies and power relations in non-state societies, so as to “illuminate questions of trust and virtual experiences as critical components of ‘onlife’ in new ways.” Whilst these issues are intertwined with the impact of digitalization “on our processes of knowing,” Judith Simon presents such issues as “the epistemic responsibilities in entangled digital environments.”

In this chapter the aim is to reassess these ideas in connection with the concept of “governance” and, in particular, of “good enough governance” as developed by the United Nations over the past decades, that is, from Kofi Annan’s inauguration speech as UN Secretary-General in July 1997, to work by Merilee Grindle (2002, 2005, and 2010; however, I will refer only to Grindle 2005). Consequently, this chapter is presented in four sections: as in Plato’s early dialogues, it seems fruitful to start with some definitions in Sect. 2, namely the different ways in which scholars refer to the idea of “governance.” Then, attention is drawn to three different levels of analysis that concern the notion of “good onlife governance,” that is, the ethical, legal and technological challenges of the information revolution, as examined in Sect. 3. Next, the focus is on the kosmos-side of the “onlife experience” via the network approach illustrated in Sect. 4: the aim is to emphasize how the topological properties of today’s ICTs-driven societies and their kosmos affect the political planning of lawmakers and, hence, any good onlife governance. Finally, these ideas are deepened with the distinction between game players and game designers in Sect. 5. In addition to the traditional hard and soft law-tools of governance, the governance of complex multi-agent systems that interact “onlife” increasingly hinges on the technicalities of design mechanisms.

2 Defining Governance

We have already seen how the information revolution jeopardizes key traditional assumptions of legal and political philosophy, such as the state’s monopoly of the legitimate use of force and the law conceived as a set of rules enforced through the menace of physical sanctions. Whilst an increasing number of issues have to be addressed at international and transnational levels, national sovereign states should be considered as one, albeit relevant, agent in the network of competences and institutions summarized by the idea of governance.

In Good Enough Governance (2005), Merilee Grindle provides eight meanings of governance: in this section, it suffices to quote two of them. On the one hand, according to the World Bank, the idea of governance concerns “the process and institutions through which decisions are made and authority in a country is exercised” (in Grindle 2005, p. 14). On the other hand, Hyden, Court and Mease refer to “the formation and stewardship of the formal and informal rules that regulate the public realm, the arena in which state as well as economic and societal actors interact to make decisions” (ibid.). On this basis, the notion of governance can be further specified as a matter of “good” governance. In the case of the World Bank, focus should be on inclusiveness and accountability established in three key areas, namely, (i) “selection, accountability and replacement of authorities”; (ii) “efficiency of institutions, regulations, resource management”; and, (iii) “respect for institutions, laws and interactions among players in civil society, business, and politics.” In the case of Hyden, Court and Mease, the concept of good governance can be measured along six dimensions, i.e., “participation, fairness, decency, efficiency, accountability, and transparency,” in each of the following arenas: “civil society, political society, government, bureaucracy, economic society, judiciary.”

Drawing on such definitions, Merilee Grindle has objected to the length of the good governance agenda, because “interventions thought to contribute to the ends of economic and political development need to be questioned, prioritized, and made relevant to the conditions of individual countries. They need to be assessed in light of historical evidence, sequence, and timing, and they should be selected carefully in terms of their contributions to particular ends” (Grindle 2005, p. 1). By following this methodological approach to what should be deemed “good enough,” what, then, are the issues that ought to be questioned, prioritized and made relevant, so as to pinpoint what is new in the legal and political dimension of our concept reengineering exercise?

In his brilliant In Search of Jefferson’s Moose (2009), David Post proposes an analogy between the American West of 1787 and today’s cyberspace:

Cyberspace is not the American West of 1787, of course. But, like the American West of 1787, it is (or at least it has been) a Jeffersonian kind of place… And like the West of 1787, cyberspace poses some hard questions, and could use some new ideas, about governance, and law, and order, and scale. The engineers have bequeathed to us a remarkable instrument, one that has managed to solve prodigious technical problems associated with communication on a global scale. The problem is the one that Jefferson and his contemporaries faced: How do you build “republican” institutions—institutions that respect equal worth of all individuals and their right to participate in the formation of the rules under which they live—that scale? (Post 2009, pp. 116–117)

The question begets three different levels of analysis. The first viewpoint is ethical and has to do with the foundation of any good onlife governance; the second level is both legal and political, since it concerns the distinction between the emergence of spontaneous orders in the legal field, and human (political) planning; the third perspective is related to the aim to embed legal safeguards into ICTs and other types of technology. From a methodological stance, each level of abstraction can be grasped as an interface made up of a set of features, that is, the observables of the analysis (Floridi 2008). By changing the interface, the analysis of the observables and variables of the three levels of abstraction should strengthen our comprehension of the onlife experience and, more particularly, of today’s governance. In accordance with some principles of information ethics (Floridi 2013), the emergence of spontaneous orders, and matters of design and scale, what is new in the legal and political dimension of our concept reengineering exercise is thus pinpointed through such observables as the right balance between representation and resolution at the first level of abstraction; notions of nodes, diameters of the network, and links, to grasp the second level of abstraction; and so forth. These different levels of analysis, discussed separately in the next section, are illustrated with Fig. 2. The aim is to shed light on what ought to be prioritized, and made relevant, in our concept reengineering exercise as that which is “good enough” in the governance of the onlife experience.

Fig. 2 “Good Enough” in the governance of the onlife experience

3 Three Levels of Analysis

The first level of analysis concerning any good onlife governance regards the foundations of what Floridi conceives as an “efficient” and “intelligent” multi-agent system, the model of which may represent a goal that could successfully orient our political strategy in terms of transparency and tolerance: “Finding the right balance between representation and resolution, while implementing the agreement to agree on the basis of ethical principles that are informed by universal human rights, is a current major challenge for liberal democracies in which ICTs will increasingly strengthen the representational side.” On the basis of this right balance between representation and resolution, we have thus to assess how the information revolution reshapes models of political legitimacy and democratic processes, much as republican institutions that shall “respect equal worth of all individuals” (Post 2009). Since this is the subject matter of Floridi’s contribution in this volume (see above, pp. xx–xx), let me skip this part of the analysis.

The second level concerns Friedrich Hayek’s classical distinction between kosmos and taxis, i.e., evolution vs. constructivism, spontaneous orders vs. human (political) planning. Recent empirical evidence confirms that the informational complexity of human interaction is not reducible to taxis alone and, moreover, orders spontaneously emerge from the complexity of the environment through specific laws of evolution (Pagallo 2010). Most of the time, today’s research on governance, good governance, and good enough governance focuses on the taxis-side of political dynamics, namely, the decisions of institutional, societal, and economic actors, as a set of rules or instructions for the determination of other informational objects and agents in the system. Still, we should reflect on the properties of the onlife multi-agent systems as a complex network that adapts to the environment through learning and evolutionary processes, such as sophisticated signalling and information mechanisms. Complex systems are characterized by a collective behaviour that emerges from large networks of individual components, although no central control or simple rules of operation direct them. Accordingly, legislators, policy makers and, generally speaking, governance actors should understand in advance the nature of the field in which they aim to intervene or, maybe, interfere (Pagallo 2012a). The point can be illustrated with a metaphor of Lon Fuller: “The law can act as a gardener who prunes an imperfectly growing tree in order to help the tree realize its own capacity for perfection. This can occur only when all concerned genuinely want the tree to grow, and to grow properly. Our task is to make them want this.” Of course, as with all metaphors, we should take Fuller’s parallel with a pinch of salt: in the case of the good onlife governance, the “tree” can indeed strike back, as shown by how many attempts to govern the dynamics of complex multi-agent systems on the internet have been unsuccessful because of the response of the kosmos. Recall the US Stop Online Piracy Act (SOPA) and the Protect IP Act (PIPA), and how these bills miserably failed in winter 2011–2012.

The third level of the analysis can be summed up with the distinction between game players and game designers (Floridi 2013; Pagallo 2012b). Although political planning does not exhaust the complexity of human interaction, it does not follow, pace Hayek, that taxis cannot shape the evolution of kosmos. On the contrary, political decisions can determine the rules of the game as well as the very architecture of the system. Consider the ways some Western democracies and authoritarian regimes alike have specified the functions of state action on the internet. As mentioned above in the introduction, the “three strikes”-doctrine has been endorsed by some countries, such as France or South Korea, to enforce copyright laws, whereas systems of filters and re-routers, detours and dead-ends, have been adopted by such countries as China, to keep individuals on the state-approved online path. Although some of these architectural measures are not necessarily digital, e.g., the installation of speed bumps in roads as a means to reduce the velocity of cars, current advancements of technology have obliged legislators, policy makers, and governance actors to forge more sophisticated ways to think about legal enforcement and, moreover, the information revolution has made such decisions a critical part of the governance of the entire system. This is why, on 19 April 2012, Neelie Kroes properly insisted on the open structure of the internet and its neutrality as key principles of this very governance: “With a truly open, universal platform, we can deliver choice and competition; innovation and opportunity; freedom and democratic accountability” (Kroes 2012, p. 2).

These different levels of analysis, to be sure, affect each other: game designers should take into account the development of spontaneous orders, much as, say, the transparent governance of a complex multi-agent system can ultimately hinge on the technicalities of design mechanisms. By paying attention to the specificity of the political dimension in our concept reengineering exercise, however, let me forestall a twofold misunderstanding. At times, scholars address the challenges of the information revolution to the traditional models of political legitimacy and democratic processes as if the aim were to find the magic bullet. Vice versa, others have devoted themselves to debunking these myths, such as a new direct online democracy, a digital communism, and so forth, by simply reversing the paradise of such techno-enthusiasts (Morozov 2011). All in all, we should conceive today’s information revolution in a sober way, that is, as a set of constraints and possibilities that transform or reshape the environment of people’s interaction. On one hand, this profound transformation affects norms, competences, and institutions of today’s governance, much as people’s autonomy and the right of the individuals to have a say in the decisions affecting them. What is at stake here revolves around a new “right balance” between representation and resolution: suffice it to mention the debate on the role that national sovereign states should have in today’s internet governance, vis-à-vis such technical organizations as, for example, ICANN. On the other hand, what makes the governance of ICTs-driven societies unique concerns how the properties of today’s kosmos may affect political planning and, hence, the design of any good onlife governance, i.e., the second and third levels of abstraction illustrated with Fig. 2 above. The next section deepens this latter viewpoint with some tenets of network theory and, more particularly, in accordance with the topological properties of today’s online kosmos and the emergence of spontaneous orders. Then, Sect. 5 brings us back to the taxis side of the onlife governance, by examining the ways in which the decisions of game designers can impinge on collective and individual autonomy.

4 The Topology of Onlife Networks

Several spontaneous orders on the internet present the topological features of scale-free networks and “small worlds.” To grasp how the complexity of such topological properties affects any political planning, have a look at Fig. 3 with the key parameters of every network, namely (i) its nodes, (ii) the average distance between nodes or diameter of the network, and (iii) its clustering coefficients. This allows us to single out three models.

Fig. 3 Three topological models

The first one is represented by a regular network in which all of the nodes have the same number of links: this network has high clustering coefficients but a long diameter since the degree of separation between nodes is high.

The second model is a random network with opposite features: it presents low clustering coefficients but a very short diameter. The explanation is that random links exponentially reduce the degree of separation between nodes in the network.

The third model is a small world-network: its peculiarity depends on the apparent deviation from the properties of both regular and random networks. Like regular networks, small world-networks present high clustering coefficients, but they also share with random networks a short characteristic path length, i.e., the nodes of the network need only a few steps to reach each other.

As you can see, in light of Fig. 3, in the regular network there are 20 nodes, each of which has 4 links, so that the blue node (the brighter one on the left) would need at least 5 steps to reach the red one (the brighter one on the right). What is striking with a small-world network is how random links exponentially reduce the degree of separation between nodes: for instance, if 3 nodes are randomly rewired, the degrees of separation decrease from 5 to 3. This means that, in a circle of 6 billion (people) nodes, as our world could be represented today, if random links in the network were about 2 out of 10,000, the degree of separation would turn out to be 8; but if they were 3 out of 10,000, then 5!
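A minimal sketch in Python, using the networkx library, may help to make these topological claims tangible: rewiring only a small fraction of the edges of a regular lattice collapses the average degree of separation while clustering remains high. The network size, initial degree and rewiring probabilities below are illustrative assumptions, not the parameters of Fig. 3.

```python
# Toy comparison of the three topological models (regular, small world, random)
# with networkx; all parameters are chosen only for illustration.
import networkx as nx

n, k = 1000, 10  # 1000 nodes, each initially linked to its 10 nearest neighbours

models = {
    "regular lattice (p = 0.00)": 0.00,  # no rewiring: high clustering, long diameter
    "small world     (p = 0.01)": 0.01,  # a few random shortcuts
    "random network  (p = 1.00)": 1.00,  # every edge rewired at random
}

for name, p in models.items():
    # the 'connected' variant retries until the graph is connected,
    # so that the average path length is well defined
    g = nx.connected_watts_strogatz_graph(n, k, p, tries=100, seed=1)
    separation = nx.average_shortest_path_length(g)
    clustering = nx.average_clustering(g)
    print(f"{name}: avg. separation = {separation:5.2f}, clustering = {clustering:.3f}")
```

Run on these assumptions, the first model combines high clustering with a long average separation, the third a very short separation with negligible clustering, and the small-world case keeps the clustering of the lattice while its separation drops towards that of the random network.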

Since the pioneering work of Stanley Milgram (1967) and, later, of Mark Granovetter (1973), the idea of small world-networks became in a few years one of the keywords of contemporary scientific research, fostering a large set of empirical studies on the topology of complex systems. Significant effort has been made to develop analytical models capable of capturing the nature of small world-networks. Here, it suffices to mention only two of these. The first small world-model was proposed by Duncan Watts and Steven Strogatz (1998): they suggested randomly rewiring a small fraction of the edges belonging to a low-dimensional regular lattice, so as to prove that the degrees of separation in the network would exponentially decrease. Yet, contrary to random networks, the shortening of the diameter proceeded along with high clustering coefficients as in regular networks. These small world-features explain the results of Milgram’s and Granovetter’s research, because short diameters of the network and high clustering coefficients quantify both the low degrees of separation between two citizens picked at random in such a complex network as the American society studied by Milgram in the mid-1960s, and the “strength of weak ties” stressed by Granovetter in the early 1970s.

The second analytical model we need to examine was defined by Albert-László Barabási (2002): he noted that most real-world networks, such as the internet, grow by continuous addition of new nodes, whereas the likelihood of connecting to a node depends upon its degree of connectivity. This sort of preferential attachment in a growing system explains what Watts and Strogatz apparently missed, namely, the power-law distribution of the network in a topological scale-free perspective: small world-networks in the real world are indeed characterized by a few nodes with very high connectivity and by most nodes with low connectivity. The presence of hubs, or of a small fraction of nodes with a much higher degree than the average, offers the key to comprehend why small world-networks can be both highly clustered and scale-free. This occurs when small, tightly interlinked clusters of nodes are connected into larger, less cohesive groups.
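The emergence of such hubs can again be sketched in a few lines of Python with networkx; the network size and the number of links added per new node are illustrative assumptions only.

```python
# Minimal sketch of a growing network with preferential attachment
# (Barabási–Albert model); parameters are illustrative.
import networkx as nx

g = nx.barabasi_albert_graph(n=10_000, m=3, seed=1)  # each new node attaches 3 links

degrees = sorted((d for _, d in g.degree()), reverse=True)
print("degrees of the five largest hubs:", degrees[:5])
print("median degree:", degrees[len(degrees) // 2])
# The few hubs reach degrees far above the median: the heavy-tailed,
# power-law signature of a scale-free network.
```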

Drawing on this research, we can deepen the notion of complexity mentioned in the introduction. Today’s onlife kosmos can indeed be comprehended in accordance with the nature of the hubs and the degree of their connectivity in a small world-network, because the emergence of spontaneous orders, e.g. peer-to-peer (P2P) file-sharing systems on the internet, often goes hand in hand with the hierarchical structure of these networks (Pagallo and Durante 2009; Glorioso et al. 2010). Significantly, in The Sciences of the Artificial (new ed. 1996), Herbert Simon insisted on this point, i.e., the notion of “hierarchy” as the clue for grasping the architecture of complexity and, moreover, the idea of “nearly decomposable systems” that reconciles rigid top-down and bottom-up approaches. In the wording of Simon, “the clusters of dense interaction in the chart” of social interaction “will identify a rather well-defined hierarchic structure” (op. cit., p. 186). Furthermore, according to the “empty world hypothesis,” the term near decomposability denotes that “most things are only weakly connected with most other things; for a tolerable description of reality only a tiny fraction of all possible interactions needs to be taken into account” (Simon 1996, p. 209). Recall the difference between regular networks, random networks, and small worlds, mentioned above: Simon’s “empty world hypothesis” corresponds to the notion of hubs, since such hubs not only offer the common connections mediating the short path lengths between the nodes of the network, but also elucidate the clusters of dense interaction and complexity in the chart of social relationships.

These topological properties of the network introduce a crucial point on how the structure of the kosmos may affect the political planning of the taxis and, hence, any “good onlife governance.” Whilst I assume that there is no kosmos without taxis in the “onlife experience,” governance actors should really know the subject matter which they intend to govern. The point can be illustrated with the words of Paul Ormerod:

In a scale-free network, we know that we need to identify the well-connected individuals and to try by some means to induce them to change their behaviours. In a random network, we know that there is a critical value of the proportion of agents we need to influence in order to encourage or mitigate the spread of a particular mode of behaviour or opinion across the network. This at least gives us an idea of the scale of the effort required, and tells us that money and time which is unlikely to generate the critical mass is money and time wasted. In a small-world context, targeting our efforts is more difficult, but at least we know that it is the long-range connectors, the agents with links across different parts of the network, or who have connections into several relevant networks, who are the most fruitful to target. (Ormerod 2012, p. 275)

Yet, a crucial aspect of the analysis concerns the evaluation, rather than the description, of the kosmos that taxis aims to discipline. Lawmakers, policy makers and governance actors should not only know whether they are dealing with a random network, a small-world network, a scale-free network, and so forth, since they have to evaluate the kind of information that is distributed according to the topological properties of a regular network, a random network, etc. Consider the following spectrum in the field of social interaction, which empirical evidence has proved to be a small world-network: at one end, the “small worlds” of the internet in the early 2000s and their positive effects (Barabási 2002); at the other end, what the COPLINK program illustrated in the mid 2000s, namely that “narcotics networks are small-world with short average path lengths ranging from 4.5–8.5 and have scale-free degree distributions with power law exponents of 0.85–1.3” (Kaza et al. 2005). In between, we find more controversial cases, such as the “small worlds” of some P2P networks like Gnutella (Pagallo and Ruffo 2007). In light of this spectrum, let me reassess the different levels of analysis illustrated above with Fig. 2. From an ethical viewpoint, what should be avoided or minimized is the “impoverishment of the infosphere,” or entropy, whilst “the flourishing of informational entities as well as the whole infosphere ought to be promoted by preserving, cultivating and enriching their properties” (Floridi 2006). From a legal and political stance, what is at stake here concerns the ways in which the new scenarios of the information revolution have suggested to national and international lawmakers more sophisticated forms of legal enforcement, complementing the traditional hard tools of the law, much as softer forms of legalized governance, such as the standardization of best practices and guidelines, through the mechanisms of design, codes, and IT architectures. Many impasses of today’s legal and political systems can indeed be tackled by embedding normative constraints and constitutional safeguards into ICTs. After the topological properties and ethical challenges of the current kosmos, let me examine this taxis-side of the onlife governance separately: the next section explores how game designers may shape the onlife experience.

5 The Design of the Onlife Experience

The concept of design can be understood as the act of working out the shape of objects: we actually mould the form of products and processes, together with the structure of spaces and places, so as to comply with regulatory frameworks. Such a shaping is not necessarily digital: as mentioned above in Sect. 3, consider the installation of speed bumps in roads as a means to reduce the velocity of cars (lest drivers opt to destroy their own vehicles). Still, the information revolution has obliged policy makers to forge more sophisticated ways of legal enforcement through the design of ICT interfaces, default settings, self-enforcing technologies, and so forth. According to the phrasing of Norman Potter in his 1968 book What is a Designer (new ed. 2002), a crucial distinction should be stressed between designing spaces (environmental design), objects (product design), and messages (communication design). Moreover, in their work on The Design with Intent Method (2010), Lockton, Harrison and Stanton describe 101 ways in which products can influence the behaviour of their users. In light of Fig. 4, it suffices to focus on three different ways in which governance actors may design the onlife experience.

Fig. 4 How game designers may shape the onlife experience

First, design may aim to encourage the change of social behaviour. Think about the free-riding phenomenon on P2P networks, where most peers tend to use these systems to find information and download their favourite files without contributing to the performance of the system. Whilst this selfish behaviour is triggered by many properties of P2P applications, like anonymity and hard traceability of the nodes, designers have proposed ways to tackle the issue through incentives based on trust (e.g., reputation mechanisms), trade (e.g., services in return), or alternatively slowing down the connectivity of the user who does not help the process of file-sharing (Glorioso et al. 2010). For example, two very popular P2P systems, namely µTorrent and Azureus/Vuze, have inbuilt anti-leech features that cap the download speed of users if their upload speed is too low (note that a low upload speed may in turn hinge on the policy of some ISPs that count both uploads and downloads towards the monthly data quota). In addition, design mechanisms can induce the change of people’s behaviour via friendly interfaces, location-based services, and so forth. These examples are particularly relevant because design prevents risks of paternalism when its purpose is to encourage such a change of behaviour by widening the range of choices and options. At its best, this latter design policy is illustrated by the open architecture of a web “out of control” (Berners-Lee 1999).
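To make the anti-leech idea concrete, the following Python sketch throttles a peer whose share ratio falls below a threshold. It is not the actual mechanism implemented by µTorrent or Azureus/Vuze; the function, thresholds and speed values are hypothetical and serve only to illustrate the design policy of slowing down non-contributing users.

```python
# Hypothetical sketch of an anti-leech throttle: cap a peer's download speed
# when its share ratio (uploaded/downloaded) is too low. Not the real
# µTorrent/Vuze implementation; all values are illustrative.

def allowed_download_speed(uploaded_bytes: int,
                           downloaded_bytes: int,
                           requested_speed_kbps: float,
                           min_ratio: float = 0.5,
                           capped_speed_kbps: float = 50.0) -> float:
    """Return the download speed granted to a peer, given its share ratio."""
    if downloaded_bytes == 0:
        return requested_speed_kbps                           # new peers are not penalized
    ratio = uploaded_bytes / downloaded_bytes
    if ratio < min_ratio:
        return min(requested_speed_kbps, capped_speed_kbps)   # free-riders are slowed down
    return requested_speed_kbps                               # contributors keep full speed

# Example: a peer that uploaded 100 MB against 1 GB downloaded gets throttled
print(allowed_download_speed(100 * 2**20, 1024 * 2**20, requested_speed_kbps=800.0))
```

The point of the sketch is that the incentive changes behaviour without forbidding anything: the peer remains free to free-ride, only more slowly.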

Second, design mechanisms may aim to decrease the impact of harm-generating behaviour rather than changing people’s conduct, that is, the goal is to prevent the impoverishment of the agents and of the whole infosphere, rather than directly promoting their flourishing. This further aim of design is well represented by efforts in security measures that can be conceived of as a sort of digital airbag: as it occurs with friendly interfaces, this kind of design mechanism prevents claims of paternalism, because it does not impinge on individual autonomy, any more than traditional airbags affect how people drive. Contrary to design mechanisms that intend to broaden individual choices, however, the design of digital airbags may raise issues of strong moral and legal responsibility, much as conflicts of interests. A typical instance is given by the processing of patient names in hospitals via information systems, where patient names should be kept separate from data on medical treatments or health status. How about users, including doctors, who may find such a mechanism too onerous? Furthermore, responsibility for this type of mechanism is intertwined with the technical meticulousness of the project and its reliability, e.g., security measures for the information systems of hospitals or, say, an atomic plant. Rather than establishing the overall probability of a serious accident, the focus here should be on the weaknesses in the safety system, ranking the accident sequences in connection with the probability of their occurrence, so as to compare different event sequences and to identify critical elements in these sequences. All in all, in Eugene Spafford’s phrasing, it would be important that governance actors, sub specie game designers, fully understand that “the only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards—and even then I have my doubts” (in Garfinkel and Spafford 1997).
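A toy sketch may illustrate the ranking approach just described: instead of a single overall probability of failure, hypothetical accident sequences are ordered by their probability of occurrence, so that the critical elements they share stand out. The sequences and the probabilities below are invented for illustration only.

```python
# Toy illustration of ranking accident sequences by probability of occurrence;
# the event chains and probabilities are purely hypothetical.
accident_sequences = {
    ("power loss", "backup failure", "data exposure"):     1e-4 * 1e-2 * 0.9,
    ("misconfigured firewall", "external intrusion"):      1e-3 * 5e-2,
    ("phishing", "credential theft", "record tampering"):  2e-2 * 0.3 * 0.1,
}

ranked = sorted(accident_sequences.items(), key=lambda kv: kv[1], reverse=True)
for sequence, prob in ranked:
    print(f"{prob:.2e}  " + " -> ".join(sequence))
# The most probable sequences come first; the events they have in common
# point to the weaknesses of the safety system that deserve attention first.
```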

Third, there is the most critical aim of design, namely to prevent harm-generating behaviour from occurring through the use of self-enforcing technologies, such as DRMs in the field of intellectual property protection, or some versions of automatic privacy by design (e.g., Cavoukian 2010). Of course, serious issues of national security, connectivity and availability of resources, much as child pornography or cyber-terrorism, may suggest endorsing such a type of design mechanism, though the latter should be conceived as the exception, or last resort option, for the governance of the onlife experience. Contemplate some of the ethical, legal, and technical reasons that make the aim of automatically preventing harmful conduct from occurring problematic. As to the ethical reasons, specific design choices may result in conflicts between values and, vice versa, conflicts between values may impact on the features of design: we have evidence that “some technical artefacts bear directly and systematically on the realization, or suppression, of particular configurations of social, ethical, and political values” (Flanagan et al. 2008). As to the legal reasons against this type of design policy, the development and use of self-enforcing technologies risk severely curtailing both collective and individual autonomy. Basic tenets of the rule of law would be at risk, since people’s behaviour would unilaterally be determined on the basis of technology, rather than by choices of the relevant political institutions: what is imperilled is “the public understanding of law with its application eliminating a useful interface between the law’s terms and its application” (Zittrain 2007).

Finally, attention should be drawn to the technical difficulties of achieving such total control through design: doubts are cast by “a rich body of scholarship concerning the theory and practice of ‘traditional’ rule-based regulation [that] bears witness to the impossibility of designing regulatory standards in the form of legal rules that will hit their target with perfect accuracy” (Yeung 2007). Indeed, there is the technical difficulty of applying to a machine concepts traditionally employed by lawyers, through the formalization of norms, rights, or duties: after all, legal safeguards often involve highly context-dependent notions such as, say, security measures, personal data, or data controllers, which raise a number of relevant problems when reducing the informational complexity of a legal system where concepts and relations are subject to evolution (Pagallo 2010). To the best of my knowledge, it is impossible to program software so as to prevent forms of harm-generating behaviour even in such simple cases as defamation: these constraints emphasize critical facets of design that suggest reversing the burden of proof when the use of allegedly perfect self-enforcing technologies is at stake. In the wording of the US Supreme Court’s decision on the Communications Decency Act (“CDA”) of 26 June 1997, “as a matter of constitutional tradition, in the absence of evidence to the contrary, we presume that governmental regulation… is more likely to interfere with the free exchange of ideas than to encourage it.”

6 Conclusions

The purpose of this chapter was to cast light on some of the issues that ought to be questioned, prioritized, and made relevant, so as to stress what is specific to the legal and political dimensions of the onlife governance. Starting with current definitions of governance, good governance, and good enough governance in Sect. 2, the analysis dwelt on the complex ways in which multi-agent systems interact in light of the difference between kosmos and taxis, on one side, and between game players and game designers, on the other. By taking into account the examples of local customs, international uses, and transnational markets, that is, the traditional forms of spontaneous orders examined by a Nobel laureate (Hayek 1982), what is critical today concerns, on the one hand, the evolutionary processes of multi-agent systems that are ICTs-dependent, ubiquitous, and, moreover, cannot be reduced to the taxis-side of governance. Going back to the debate on the ethical foundations of today’s cyberspace, e.g., David Post’s republican institutions that shall respect the equal worth of all individuals, it is admittedly an open question how such institutions should be built, and even conceived of (Post 2009; Solum 2009; Reed 2012; etc.): yet, this chapter has shown how often the efficiency and legitimacy of traditional hard and soft-law tools of governance depend on what scholars present as the “network effect.” Legislators, policy makers and, generally speaking, governance actors should understand in advance the political, legal and economic relevance of what spontaneously emerges and evolves onlife, namely that which we discussed above in Sect. 4.

On the other hand, what is specific to today’s onlife governance revolves around the role of game designers. In addition to the debate on the institutional issues of current governance, and how its traditional hard and soft law-tools should be distributed among political authorities, societal actors, and economic players, such as lobbies and stakeholders, the challenges of the information revolution have induced governance actors to complement such tools, e.g., guidelines and best practices, with the mechanisms of design, codes and architectures. This new scenario affects basic pillars of the law and democratic processes, by reshaping the balance between resolution and representation, much as the right of the individuals to have a say in the decisions affecting them. Here, the three aims of design discussed above in Sect. 5 are critical. When the aim is to broaden the range of people’s choices, so as to encourage the change of their behaviour, such a design policy is legally and politically sound: this approach to design prevents threats of paternalism that hinge on the regulatory tools of technology, since it fosters collective and individual autonomy. Likewise, the aim of design to decrease the impact of harm-generating behaviour through the use of digital airbags, such as security measures or user-friendly interfaces, respects collective and individual autonomy, because this approach to design does not impinge on people’s choices, any more than traditional airbags affect how individuals behave on the highways. Yet, complementing the hard and soft law-tools of governance with design entails its own risks, when the aim is to prevent harm-generating behaviour from occurring.

Although many impasses of today’s legal and political systems can properly be addressed by embedding legal safeguards into ICTs and other kinds of technology, there are several legal, ethical and technical reasons why the use of allegedly perfect self-enforcing technologies raises serious threats of paternalism and, even, of authoritarianism. Whether through DRMs, automatic versions of the principle of privacy by design, three-strikes approaches, China’s “Great Firewall,” or Western systems of filters to control the flow of information on the internet, the result is the modelling of individual conduct. This chapter has suggested why governance actors, as game designers dealing with the challenges of the information revolution, ought to consider the use of self-enforcing technologies as the exception, or a last resort option, to minimize the informational entropy of the system or, vice versa, to promote its flourishing and that of its informational objects. What is at stake here is “complex,” because the legal and political challenges of the information revolution often concern the whole infrastructure and environment of people’s interaction. Recent statutes, such as HADOPI in France or the DEA in the UK, show how new ways of protecting citizens even against themselves do materialize.