In Part I, we mainly discussed engineering issues and solutions for socio-technical systems dealing with digital twins. We defined properties which are mandatory for digital selves represented by digital twin elements or agents. We argued that the properties of agents need to be enhanced with behavior related to emotions. In this section, we present game-theoretical approaches to emotion modeling. Since we investigate the collective behavior of crowds, we are interested in inter-agent relationships. Emulations of consciousness are out of the scope of our investigations. Since only one very specific property is simulated, the agents and their learning functions may remain comparatively simple.

When writing these sentences in the year 2021, we learned that a research team in Graz, Austria, had managed to implement a full-fledged digital heart (see http://healthcare-in-Europe.com). This digital heart is constructed from numerous MRI and ECG data and other cardiac examinations. Imaging algorithms build the heart model from all these highly specialized examinations conducted on a patient. To simulate the heartbeat, many millions of variables need to be calculated in real time on a supercomputer. AI algorithms support the calculation of the electrical wave propagation for the simulated heartbeat.

The development team asserts that the simulated heartbeat already comes close to the real heartbeat of a patient. They plan to apply further AI methods so that the heartbeat runs completely in synchrony with the patient's. However, the authors have not yet published any details about the AI methods applied. We suspect that neural networks play a major role in learning the heartbeats. The authors claim that within two years, it will be possible to treat the digital heart with virtual drugs. As a consequence, it will be possible to decide in advance which of several drug treatments will turn out to be adequate after the implantation of a cardiac pacemaker.

This is indeed a revolution in medicine and in computer science. We ask ourselves: if this is possible for the heart, why not for the human brain? And if we manage to model the human brain on a digital machine, is this copy identical to the real brain? Is the digital twin capable of thinking? Does it have its own personality? Will consciousness emerge as a consequence of thinking, or is it rather an illusion? Is there even a free will?

We can only speculate about the answers to these questions since, to the best of our knowledge, there is no commonly accepted theory of mind yet. However, neuroscientists are making progress in brain research. We know today that the human brain is organized hierarchically. Dehaene and Changeux (2011) investigated the transition from unconscious to conscious perception. They showed that, in line with Baars' global workspace theory (Baars, 1988), what we experience as perception is caused by the amplification of a single information entity which is then simultaneously broadcast to different parts of the cerebral cortex. By applying functional MRI and EEG, Dehaene discovered that this transformation process triggers certain synchronized rhythms of electrical activity. Presumably, these rhythms stimulate consciousness by activating a network of pyramidal neurons located in the parietal and frontal brain regions, which in turn cause top-down amplification. Dehaene discovered this phenomenon while experimenting with the picture-recognition process of the human brain. Similar results were obtained in experiments with the tactile sense. It can be concluded that there is a threshold between unconscious and conscious data processing.

Libet (2005) also discovered that many brain activities are not conscious at all. Based on his experiments, Libet concluded that our awareness of decision-making appears to be an illusion. According to Libet, there is also no free will.

Neurobiologists suspect that changes in hormone levels influence not only feelings and mood but also the functioning of the brain. Apparently, parts of the hippocampus change their structure and activity during specific hormone cycles in women. The implications of these observations are not yet well understood.

Eric Kandel (2018) presents a good overview of how art is interpreted by the human brain. As computer scientists, we also find Kurzweil's elaborations (Kurzweil, 2012) on the creation of a mind promising.

In order to mimic the hierarchies of the human brain, Kurzweil proposes the implementation of hidden Markov models in combination with self-learning neural networks. He is convinced that by 2050, or even earlier, we will bring the full power of human brains onto silicon substrates. In contrast to our brain, artificial neural networks will learn much more quickly. This is due to the fact that during the learning process, the human brain takes time to grow new dendrites. Technologically, we are already able to construct artificial neural networks consisting of more neurons and synapses than the human brain. Humans cannot expand the size of their brains at the biological level within a short time, but thanks to exponential technological growth, we might do so with artificial neural networks in the foreseeable future. Kurzweil certainly has a point there.
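To make Kurzweil's building block concrete, the following minimal sketch implements the forward algorithm of a discrete hidden Markov model in Python. The transition, emission, and initial probabilities are toy values of our own choosing, not parameters taken from Kurzweil's work:

```python
# Minimal sketch of a discrete hidden Markov model of the kind Kurzweil
# proposes as a building block of hierarchical pattern recognizers.
# Transition, emission, and initial probabilities are toy values only.
import numpy as np

A = np.array([[0.7, 0.3],          # state-transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],          # per-state emission probabilities
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])          # initial state distribution

def likelihood(observations):
    """Forward algorithm: probability of observing the given sequence."""
    alpha = pi * B[:, observations[0]]
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())

print(likelihood([0, 1, 1, 0]))    # likelihood of a toy observation sequence
```

In a hierarchical arrangement, the recognized state sequence of one such model would serve as the observation sequence of the model above it; the neural-network component would then learn the probability matrices instead of fixing them by hand.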

The question above, whether consciousness is an emergent property, is still unresolved. It might be emergent if we follow Dehaene's observations. If we consider, e.g., hormones as important influencing factors, then we also need to understand their behavior. As soon as we understand their influence, we can design an algorithm which can be executed as a Turing machine on a real computer. If consciousness follows an algorithm somewhere in the neocortex—which has not yet been proven—then we can also simulate consciousness. If consciousness emerges automatically as a result of low-level perception processes, then we will see the results in virtual humans (humanoids) or virtual agents within a few decades.

We believe that Kurzweil's ideas, described in more detail in Part I, will bring substantial progress in the sense that a good percentage of a brain will be simulated in silicon substrate within a few decades. It could well be that this brain will outperform average brains in certain general tasks due to quicker expansion options: many more artificial neurons and synaptic connections yield more brain capacity, and the presentation of several million examples in a short time period yields faster learning cycles. However, as long as the influence of electro-chemical processes on synaptic circuits is not fully understood, the silicon brain will probably lack certain high-level functionalities. At the time of writing, robots can simulate the behavior of approximately 4-year-old children.

Since a digital self is a simulation, digital selves differ from one's self, similar to how human twins differ from each other. There is a difference between a computer simulating a property and the machine understanding what the simulation is about. It is therefore an illusion to think that we can replicate ourselves and thus exist for eternity.

Artificial neural networks produce different and sometimes better problem-solving strategies, as explained when addressing the AlphaGo Zero program. Consequently, there is an artificial intelligence which deviates from human intelligence. Thus, a digital self might, at best, develop its own artificial personality.

In what follows, we talk about agents when addressing the modeling and simulation of digital selves or artificial humans (humanoids). The term agent is more pragmatic and widely used in the simulation community. The simulation community mostly uses either a deductive or an inductive approach (Axelrod, 1997). Deduction is used to derive the consequences of assumptions; if we want to find patterns in data, we use an inductive approach.

In the deductive approach, social phenomena are represented either by rules or by differential equations. However, as Troitzsch and Gilbert explain (Troitzsch & Gilbert, 2005), differential equations are often difficult to study. Sometimes there is no set of equations that can be solved to predict the system characteristics; moreover, group behavior and especially individual cognition (Conte & Castelfranchi, 1995) are hard to model. There is a parallel to artificial neural networks, where engineering experience decides the success of an implementation. We predict that simulations with differential equations and linear equations will play a major role in the future once the quantum computer is in place. A quantum computer has three main application areas: cryptography, searching and learning, and simulation. In the latter case, systems of linear equations in particular are well suited for quantum computers because basic gate operators, such as the Hadamard operator, are matrix operations.
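To make the latter point concrete: the single-qubit Hadamard operator is a small matrix, and applying the gate to a quantum state is an ordinary matrix-vector multiplication, which is why linear-algebraic problems map naturally onto quantum hardware:

\[
H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},
\qquad
H\lvert 0\rangle = \frac{1}{\sqrt{2}}\bigl(\lvert 0\rangle + \lvert 1\rangle\bigr).
\]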

The inductive approach is used more often by social scientists. It derives generalizations from observations. The application of agent-based simulations to observe emergent patterns belongs to this approach. However, Sun (2006) criticizes that most of these agents are too simple: they comprise only a few entities and lack cognitive modeling. As a consequence, such simplistic agent models have problems emulating human cognition. In his book on cognition and multi-agent interactions, Sun argues that cognitive architectures, such as ACT-R, SOAR, or CLARION, might be better vehicles for simulating human behavior because they consider not only inter-agent processes but also complex intra-agent processes.

The CLARION architecture (Sun, 2003) seems especially powerful since it combines built-in meta-cognitive constructs, action-centered and non-action-centered representations, top-down rule-based learning, and bottom-up reinforcement learning based on the Q-learning algorithm (Watkins, 1989). Sun argues that four distinct levels need to be orchestrated in a cognitive architecture: the sociological, the psychological, the componential, and the physiological level. The sociological level conventionally simulates the relations between agents. The psychological level covers individual experience, beliefs, concepts, and skills. The componential level looks at the different components of an agent and their different implementation methods. The physiological level focuses on elementary biological and physiological components of a system, which need to be reverse engineered.
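To illustrate the bottom-up component, here is a minimal tabular Q-learning sketch in the spirit of Watkins (1989). The five-state chain environment, its reward, and all hyperparameters are toy assumptions of ours, not part of CLARION:

```python
# Minimal tabular Q-learning sketch (Watkins, 1989), the algorithm behind
# CLARION's bottom-up learning. The 5-state chain environment, its reward,
# and the hyperparameters are toy assumptions made for illustration.
import random

N_STATES, ACTIONS = 5, (-1, +1)        # chain of 5 states; step left/right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1      # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Move along the chain; reward 1 for reaching the rightmost state."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

for _ in range(500):                   # training episodes
    s = 0
    while s != N_STATES - 1:
        if random.random() < EPS:      # epsilon-greedy action selection
            a = random.choice(ACTIONS)
        else:                          # greedy choice, random tie-breaking
            a = max(ACTIONS, key=lambda b: (Q[(s, b)], random.random()))
        s2, r = step(s, a)
        target = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])   # Watkins' update rule
        s = s2
```

After training, the greedy policy walks the chain to the rewarded state; in CLARION, such learned action values form the implicit, bottom level from which explicit rules are extracted.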

Another inductive approach is the application of game theory to social science. Game theory turns out to be a powerful framework for simulating the social processes of large crowds. The theory of games was first formalized by von Neumann and Morgenstern (1953). Since then, human economic behavior has been studied extensively using this theory.
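As a canonical illustration of the games used in this tradition (a standard textbook example, not one taken from von Neumann and Morgenstern), consider the Prisoner's Dilemma, where each cell lists the payoffs of the row and the column player for cooperating (C) or defecting (D):

\[
\begin{array}{c|cc}
 & C & D \\ \hline
C & (3,3) & (0,5) \\
D & (5,0) & (1,1)
\end{array}
\]

Mutual defection is the unique Nash equilibrium although mutual cooperation would leave both players better off; it is precisely this tension that makes such games instructive for modeling social behavior.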

As with humans, communication and collaboration between artificial agents are important steps toward the development of an artificial social intelligence. Scientists have long argued that a machine that can think must include emotional aspects, because a vast number of social behaviors in humans are emotionally driven (Castelfranchi, 2000). If we want to construct human-like machines, then we need to integrate emotional aspects into our models. In the following, we show how emotions can be modeled and how emotions can substantially influence the collective behavior of artificial agents.

We model the impact of rewards on the emotion of jealousy from two very different perspectives, and we consider the population dynamics and the equilibria which can arise. First, we apply a deterministic model and then a stochastic one. Our modeling technique starts with a simple game-theoretical approach. We use this approach because the social behavior expressed in inter-agent relations can be elegantly simulated.
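As a minimal sketch of what a deterministic population model of this kind looks like, the following Python fragment iterates the replicator dynamics of a two-strategy game. The payoff matrix is a placeholder of our own choosing, not the jealousy model developed in the following chapters:

```python
# Hedged sketch of a deterministic population model: replicator dynamics
# for a two-strategy game. The payoff matrix is a placeholder of our own
# choosing, not the jealousy model developed later in this book.
import numpy as np

payoff = np.array([[3.0, 0.0],   # payoff of strategy 0 against (0, 1)
                   [5.0, 1.0]])  # payoff of strategy 1 against (0, 1)

x = np.array([0.5, 0.5])         # initial population shares
dt = 0.01
for _ in range(10_000):          # Euler steps of dx/dt = x * (f - mean(f))
    fitness = payoff @ x         # expected payoff of each strategy
    mean_fitness = x @ fitness   # population-average payoff
    x += dt * x * (fitness - mean_fitness)
print(x)                         # with these payoffs, strategy 1 takes over
```

A stochastic variant would replace the deterministic update by sampled pairwise encounters and noisy strategy revisions, which is exactly why, as we show later, the two modeling techniques can yield deviating results.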

In Chap. 5, we apply a more complex approach. In the sense of Sun, we model the sociological level and the componential level. We show how different kinds of emotions can be combined. However, cognitive details of intra-agent processes, as explained by Carley and Newell (1994), are not considered in our models.

As explained, cognitive architectures have their merits. Nevertheless, the problem of how emotions can bypass decisions by activating impulses (Loewenstein, 1996) is not well understood, nor has it been solved by these architectures. Furthermore, it is not known how emotions influence cognitive decision processes (Loewenstein, 1996). As long as these problems are not solved by neuroscientists, we have to be careful when applying them in cognitive architectures. But serendipity effects can always occur. We are convinced that in the very long run, such multi-paradigm architectures might become obsolete because symbolic computing such as rule-based learning, planning, storing of data, and even remembering are implicitly performed by a single morphological architecture—the human brain.

For instance, remembering can be modeled with stochastic dynamic artificial neural networks known as bidirectional associative memories, using the Hopfield model (Hopfield, 1982, 1984; Hopfield & Tank, 1985, 1986). Consider, as an example, the visual cortex as a brain region where many neurons jointly decode visual sensory perceptions. There, three-dimensional information is mapped onto a two-dimensional field of neurons in the human cortex (Glickstein, 1988). Such structures can be modeled with a self-organizing Kohonen network (Kohonen, 1982, 1984; Kohonen et al., 1991). Similarly, other brain regions demand other neural network topologies and corresponding learning algorithms. But all of them are based on a morphological structure consisting of neurons, dendrites, and synapses. Therefore, intra-agent processes are likely to be modeled exclusively with artificial neural networks in the long run.
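As a minimal illustration of such an associative memory (our toy example, not Hopfield's original experiments), the following sketch stores two patterns with a Hebbian outer-product rule and recalls one of them from a corrupted cue:

```python
# Minimal Hopfield-style associative memory (Hopfield, 1982): patterns are
# stored via a Hebbian outer-product rule and recalled from a noisy cue.
# The two 8-unit patterns are toy assumptions chosen for illustration.
import numpy as np

patterns = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                     [ 1,  1,  1,  1, -1, -1, -1, -1]])
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns) / n   # Hebbian weight matrix
np.fill_diagonal(W, 0)                          # no self-connections

state = patterns[0].astype(float).copy()
state[0] = -state[0]                            # corrupt one unit of the cue
for _ in range(5):                              # synchronous updates
    state = np.sign(W @ state)
print(np.array_equal(state, patterns[0]))       # True: the memory is recalled
```

The network relaxes from the noisy cue into the nearest stored pattern, which is the sense in which "remembering" becomes a dynamical process rather than a symbolic lookup.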

But we also have to keep in mind that the general learning problem of neural networks is intractable. The size of the learning problem in an artificial neural network depends on the number of unknown variables, which are represented by the weights of the edges between the neurons and the synapses. These unknowns need to be calculated: a network with 100 unknown weights represents a bigger learning problem than a network with 10 weights. A runtime of the learning algorithms that is polynomial in the number of unknown variables would be desirable. However, to date it is not known whether such an algorithm exists. Therefore, we still face NP-complete problems.
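A back-of-the-envelope count makes clear what "size of the learning problem" means here; the layer sizes below are arbitrary assumptions:

```python
# Back-of-the-envelope count of the unknowns a learning algorithm must
# determine in a small fully connected network; layer sizes are arbitrary.
layers = [64, 32, 10]                            # neurons per layer (assumed)
weights = sum(a * b for a, b in zip(layers, layers[1:]))   # edge weights
biases = sum(layers[1:])                         # one bias per non-input neuron
print(weights + biases)                          # 2048 + 320 + 42 = 2410
```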

The human brain contains approximately 10¹⁵ synapses. Thankfully, not all neurons are mutually connected. As explained, there is a hierarchical structure in the brain's morphology: certain regions are responsible for certain tasks. But there are still vast numbers of local connections left, so that we can run into these NP-complete problems during the learning phase, which is continuously performed in real time. However, we are convinced that in the near future, quantum computers will overcome some of these problems.

The time has not yet come to model all cognitive processes with neural networks. We still lack a common theory of mind—we don't know how hormones influence synaptic thresholds—and we still need more powerful computers so that the parallel thinking process can be modeled in real time. It is not the size of the ever-shrinking memory chips which represents the problem, but the runtime required to perform the respective operations. Nevertheless, for the simulation of cognitive intra-agent processes, such as values or beliefs, one could start with a simple neural network and extend this network, similar to a bootstrapping process, until a powerful artificial brain is formed.

Social cooperation does not always require the agent's understanding as defined by Amir et al. (2007) and explained in Part I of this book. There are forms of cooperation that are evolutionarily self-organizing and unaware. Such forms of cooperation do not require joint intentions, mutual awareness, or shared plans among the cooperating agents (Macy, 1998).

We are interested in how emotions change the collective behavior of a group and vice versa. We do not plan to emulate human cognition as long as no widely recognized unified theory is available. Therefore, our agents are comparatively simple. When we equip agents with emotion-like behaviors, they show behaviors similar to those of humans. Jealousy might be influenced by rewards; therefore, we examine the impact of rewards on jealousy. We show that different modeling techniques and different algorithms can yield deviating results for one and the same property. The phrase "what you see, you will be" is valid for humans as well as for agents.

It is therefore in our hands whether the future of artificial humans (humanoids) will be bright or rather dark in Leviathan's sense. This reminds us of the old Indian allegory in which a father tells his child that two animals are fighting in his breast. One animal is full of negative emotions like hate and jealousy; the other animal is obliging and friendly. To the child's question which one is going to win the fight, the father answers: "the animal which you feed is going to win."

Consequently, the data and the world which we present to our programs, and the algorithms we implement, have an impact on the behavior of artificial agents. The same relation holds for humans: experience, education, and epigenetics on the one hand and inherited properties on the other shape our behavior. But in contrast to computer programs revealing rational behavior, humans often act in an irrational way, since they are also driven by emotions. We discuss this phenomenon in Chap. 6 of Part II.