We describe problems of social order and discuss whether these problems will also emerge when digital selves cooperate.

It is a fatal mistake to think that irrational outcomes of social decision processes can be traced back to irrational behavior. In a prisoner’s dilemma situation, rationally acting persons produce irrational results. In a social dilemma situation, the Nash equilibrium is not Pareto optimal, or a Pareto-optimal situation is only achievable if coordination is allowed. We illustrate this with the following examples.

In the chicken game, two persons drive their cars toward each other on a collision course. The one who swerves first is the loser. If both try to avoid the collision, neither loses face. The worst outcome occurs if both stay on course: both are killed or severely injured. The corresponding normal form of the game is presented in Fig. 6.1.

Fig. 6.1 Chicken game payoff matrix

Two Nash equilibria can be identified, namely, the ones with the payoffs (2,4) and (4,2). The equilibrium in which the row player avoids the collision and the column player does not favors the column player, and the reverse equilibrium favors the row player. But if both players select the favorable strategy “not avoid collision,” then a catastrophe is inevitable.
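To make the equilibrium structure concrete, the following minimal Python sketch enumerates the pure-strategy Nash equilibria of a chicken matrix. The payoffs (2,4) and (4,2) are taken from the text; since Fig. 6.1 is not reproduced here, the values 3 for mutual avoidance and 1 for mutual collision are illustrative assumptions.

```python
# Minimal sketch: enumerating pure-strategy Nash equilibria of the chicken
# game. The payoffs (2,4) and (4,2) are from the text; the values for
# "both avoid" (3,3) and "both stay on course" (1,1) are assumed.

AVOID, STAY = 0, 1  # strategies: avoid collision / not avoid collision

# payoff[row_strategy][column_strategy] = (row_payoff, column_payoff)
payoff = [
    [(3, 3), (2, 4)],  # row avoids: both avoid / only row avoids
    [(4, 2), (1, 1)],  # row stays:  only column avoids / crash
]

def pure_nash_equilibria(payoff):
    """Return all strategy pairs where no player gains by deviating alone."""
    equilibria = []
    for r in (AVOID, STAY):
        for c in (AVOID, STAY):
            row_ok = all(payoff[r][c][0] >= payoff[r2][c][0] for r2 in (AVOID, STAY))
            col_ok = all(payoff[r][c][1] >= payoff[r][c2][1] for c2 in (AVOID, STAY))
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

names = {AVOID: "avoid", STAY: "stay"}
for r, c in pure_nash_equilibria(payoff):
    print(f"Nash equilibrium: row {names[r]}, column {names[c]} -> {payoff[r][c]}")
# prints the two asymmetric equilibria with payoffs (2, 4) and (4, 2)
```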

The famous chicken game can be mapped onto international conflicts, as shown in Fig. 6.2 for the Iran conflict. Such a situation can only be resolved through negotiations aimed at finding a compromise, which is not a Nash equilibrium.

Fig. 6.2 International conflict matrix

The next example is an N-person prisoner’s dilemma. Consider the following decision situation: N persons can each choose between the strategies S1 and S2, with the following payoffs for players selecting S1 and S2, respectively:

$$ S1 = 2x $$
$$ S2 = 3x + 3 $$

S1 and S2 are the strategies available to a player; x is the number of players who select strategy S1. In total, there are N players.

Let us assume that the group consists of 100 players and 40 of them select strategy S1. Then each of them gets 80 points, and the 60 other players each get 123 points. But had all players selected S1, each of them would have received 200 points. In this case, S1 is the cooperative strategy, and S2 is the defecting strategy. Where is the break-even point? The break-even point is reached when fewer than 66 people select S1, that is, when at least 35 people select S2. In this case, all get fewer than 200 points.
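The arithmetic of this 100-player game can be verified with a short sketch based on the two payoff formulas above; the function names payoff_s1 and payoff_s2 are introduced here only for illustration.

```python
# Sketch: payoffs in the 100-player game with payoff(S1) = 2x and
# payoff(S2) = 3x + 3, where x is the number of S1 (cooperating) players.

N = 100

def payoff_s1(x):   # each cooperator's payoff
    return 2 * x

def payoff_s2(x):   # each defector's payoff
    return 3 * x + 3

full_cooperation = payoff_s1(N)        # 200 points for everyone
print(payoff_s1(40), payoff_s2(40))    # 80 and 123, as in the text

# Break-even: the largest x for which even the defectors stay below the
# 200 points that full cooperation would have given them.
break_even = max(x for x in range(N) if payoff_s2(x) < full_cooperation)
print(break_even)                      # 65 -> at least 35 defectors
```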

Thus, a few people can destroy the fruits of the whole group’s work, and if these few people number more than 34 out of 100, they themselves get less than they would have achieved through total cooperation. In fact, the best individual result is achieved if 99 persons select strategy S1: the single defector then achieves 300 points, while the rest of the group gets only 198 each. Which of the two strategies is Nash and which one Pareto? Let us look at the payoff matrix when only two people play the game (see Fig. 6.3).

Fig. 6.3 Two players of the N-player game

We see clearly that (S2, S2) is the Nash equilibrium, which yields the payoff (3,3) for both players. Had they both selected the S1 strategy, which is a Pareto optimum, they would be better off. However, by selecting S1, a player runs the risk that the other (the defector) selects S2. In that case, the defector gets the maximum payoff (6), and the cooperator gets the minimum (2). This is again a typical prisoner’s dilemma.
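Since Fig. 6.3 is not reproduced here, the two-player matrix can be reconstructed directly from the payoff formulas with N = 2; the following sketch derives it and marks the Nash equilibrium and the Pareto optimum.

```python
# Sketch: the two-player matrix of Fig. 6.3, derived from payoff(S1) = 2x
# and payoff(S2) = 3x + 3 with N = 2 (x = number of S1 players).

def payoff(own, other):
    x = (own == "S1") + (other == "S1")  # number of cooperators
    return 2 * x if own == "S1" else 3 * x + 3

for a in ("S1", "S2"):
    for b in ("S1", "S2"):
        print(a, b, (payoff(a, b), payoff(b, a)))
# S1 S1 (4, 4)   <- Pareto optimum
# S1 S2 (2, 6)   <- cooperator gets 2, defector gets 6
# S2 S1 (6, 2)
# S2 S2 (3, 3)   <- Nash equilibrium: deviating alone only lowers the payoff
```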

The game described above can be generalized to public goods games, which are typical social dilemmas. The difference between a public goods game and an N-player game is that in a public goods game, a player can decide how much he or she is willing to contribute.
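A minimal sketch of such a public goods game follows; the endowment of 10 and the multiplier of 1.6 are illustrative assumptions, not values from the text.

```python
# Sketch of a standard public goods game: each player has an endowment of
# 10 and freely chooses a contribution; the pot is multiplied (factor 1.6
# here, an illustrative assumption) and split equally among all players.

def public_goods_payoffs(contributions, endowment=10.0, factor=1.6):
    n = len(contributions)
    pot = factor * sum(contributions)
    share = pot / n
    return [endowment - c + share for c in contributions]

print(public_goods_payoffs([10, 10, 10, 10]))  # full cooperation: 16.0 each
print(public_goods_payoffs([10, 10, 10, 0]))   # free rider gets 22.0,
                                               # the contributors only 12.0
```

Because each player's personal return on a contribution (factor/n = 0.4 in this sketch) is below 1, contributing nothing is individually best while full contribution is collectively best, reproducing the dilemma in continuous form.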

The problem of organ donation is a typical public goods game. Everybody wishes that the collective good, donor organs, be available, but not many people hold an organ donor card. If all people followed the S1 strategy, i.e., cooperation, the benefit would be maximal for all. Nonetheless, only a minority contributes by holding an organ donor card. The solution for such problems is to find incentives so that people switch from the S2 strategy to the S1 strategy. One solution would be to give holders of an organ donor card preference when they themselves need an organ transplantation. Another example is the vaccine passes used during the COVID crisis, where people holding such a pass faced fewer hurdles in daily activities such as traveling or shopping.

There are many more examples of public goods games, such as voter turnout in democracies, joining non-profit associations, joining protest demonstrations, contributing to any kind of common group result, or keeping the environment clean, where incentives take the form of tax reductions for e-cars or carbon trading certificates for big industries. Incentives try to dissuade people and companies from defection so that Pareto optima can be achieved, and social dilemmas are thereby mitigated, as the sketch below illustrates.
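The effect of an incentive can be sketched in the two-player game above by adding a bonus b to every cooperator’s payoff; the bonus and its size are hypothetical stand-ins for, e.g., a tax reduction.

```python
# Sketch: how an incentive changes the two-player dilemma. A bonus b
# (hypothetical, e.g. a tax reduction) is added to every cooperator's payoff.

def payoff(own, other, bonus=0.0):
    x = (own == "S1") + (other == "S1")  # number of cooperators
    return 2 * x + bonus if own == "S1" else 3 * x + 3

def cooperation_dominant(bonus):
    """True if S1 beats S2 regardless of what the other player does."""
    return all(payoff("S1", other, bonus) > payoff("S2", other, bonus)
               for other in ("S1", "S2"))

print(cooperation_dominant(0))  # False: defection pays without incentives
print(cooperation_dominant(3))  # True: with b > 2, S1 becomes dominant
```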

Since the depletion of resources will always result in some kind of social dilemma, we have to ask ourselves whether digital selves might be confronted with the same sort of social order problems. There are two answers to that question.

The first answer is that digital selves should act according to game theory. This would be an emotion-free, cold world in which only utility functions and decisions based on statistics rule the behavior of the selves.

The second answer is that digital selves and humans will coexist for a long time. Therefore, for the public benefit, digital selves will have to adopt the human way of thinking and behaving. For instance, in mixed traffic it makes no sense for autonomous cars to follow their own learned algorithms while ignoring human driving habits. Thus, digital selves need to balance the collective perspectives of humans and trans-humans. Until recently, we designed human-centric AI systems. In the future, we will probably need humanity-inspired AI systems that incorporate cultural norms, values, beliefs, and all emotional aspects of human life, so that humans and digital selves can coexist. From this follows the expectation that humans and digital selves represent each other’s context, which may require mutual representation and intervention in the course of adaptation and collaborative action.