In this section, we provide some background on the origin of digital twins and explain why we consider them crucial for today's digital transformation. We also project future developments in the context of humanoid socio-technical systems.

Grieves, who coined the term in 2003, states in his white paper that it “introduces the concept of a ‘Digital Twin’ as a virtual representation of what has been produced. Compare a Digital Twin to its engineering design to better understand what was produced versus what was designed, tightening the loop between design and execution” (Grieves, 2014). Given the background in industrial production, twin developers aim to increase transparency and trigger efficiency gains, mostly in terms of faster process execution times, increased flexibility in organizing processes, and higher levels of security and safety.

As such, digital twins do not merely designate a new technology but rather stand for the methodological integration of heterogeneous components, requiring protocols for exchanging even large amounts of data between various technologies. Originally designed as virtual doubles of physical products, digital twins are increasingly becoming digital replicas of Cyber-Physical Systems (CPS). As CPS evolve, not only the simulation of processes over the entire life cycle of a product but also the design of envisioned architectures becomes a core activity (cf. Zhuang et al., 2017; Grieves, 2019).

A digital twin is consequently a virtual representation of a process, service, or product whose parts exist in the real world. There are no predefined limits to what developers transform by digital means and represent in the virtual world: objects like planes including all kinds of operational characteristics, regions or urban infrastructures, or biological processes mimicking the human organism. Developers simulate functional behavior with the digital twin in order to determine how capabilities change depending on different parameters. This information helps to modify any product or service in a purely virtual manner before instantiating it for operation.

Following the original concept, essential building blocks of a digital twin are physical products or elements equipped with Internet-based sensors. These sensors record development data and, once the product is in use, operational data in real time. A collector component records all behavior data and stores it in a centralized repository, mainly using cloud services and a digital platform for processing these data. Intelligent actors contain AI algorithms to evaluate the data. Actors can be equipped with interactive access, e.g., in product design with 3D technology or virtual reality, to visualize data and to facilitate simulating certain product properties.
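This building-block view can be hinted at with a minimal Python sketch. All class and attribute names (Sensor, Collector, IntelligentActor, and the in-memory repository) are hypothetical placeholders for whatever sensor middleware, cloud data store, and analytics platform a concrete digital twin would rely on; the sketch only illustrates the flow sensor → collector → repository → evaluation.

```python
import random
import statistics
import time

class Sensor:
    """Hypothetical Internet-based sensor attached to a physical element."""
    def __init__(self, name):
        self.name = name

    def read(self):
        # In a real twin this value would come from the physical product.
        return {"sensor": self.name, "value": random.gauss(50.0, 5.0),
                "timestamp": time.time()}

class Collector:
    """Records behavior data and stores it in a centralized repository."""
    def __init__(self):
        self.repository = []          # stand-in for a cloud data store

    def collect(self, sensors):
        for s in sensors:
            self.repository.append(s.read())

class IntelligentActor:
    """Evaluates collected data; a placeholder for AI algorithms."""
    def evaluate(self, repository):
        values = [r["value"] for r in repository]
        return {"mean": statistics.mean(values), "max": max(values)}

# Minimal usage: two sensors, one collection cycle, one evaluation.
collector = Collector()
collector.collect([Sensor("temperature"), Sensor("vibration")])
print(IntelligentActor().evaluate(collector.repository))
```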

In addition to the continuous optimization of the product during operation, the feedback of usage data to the producer—often also called the digital thread—provides important findings for further development and the developers concerned. This knowledge is helpful, e.g., when it comes to adapting product features better to the needs of users and customers. Most applications of digital twins focus on virtual models created on the basis of existing operational and design data to study the behavior of certain components under different conditions. Due to virtual prototyping, engineers and designers work closely together across departmental boundaries.

1 Humanoid Socio-technical Systems: The Next Level

Before focusing on digital twin representations and digital identity development, we need to take a closer look at the type of systems digital twins could represent and may become part of. Before the Internet of Things (IoT) and Industry 4.0 (I4.0) were introduced, socio-technical systems were composed of technical and social subsystems that are mutually related to accomplish (work) tasks within social systems, such as organizations. “Technical” thereby refers to structure and, in a broader sense, to technicalities. “Socio-technical” refers to the interrelatedness of the social and technical aspects of the system that both are part of, e.g., an organization, an enterprise, an institution, or a sector. This context determines the relations between the technical and social subsystems, e.g., the user interface between interactive technical systems and humans. The latter has become a central part of technology development, establishing dedicated areas such as usability engineering and user experience (UX).

The next level in socio-technical system development targets the merging of man and machine (see Footnote 1), as digital systems will cross over with material ones in many different facets. In that context, trans-humanism and singularity are crucial concepts, as they envision passing control over further developments to artificial systems. Hence, in this section, we will start detailing some underlying assumptions of trans-human developments.

We ground our system understanding on the layers shown in Fig. 2.1, which depicts the architecture of digital selves in cyber-physical ecosystems. CPS are grounded on IoT- and data-related infrastructure components. They comprise micro-electronic and sensor systems for intelligent products, services, and production, including manufacturing. They also include the provision of ICT resources, in particular for Internet-related communication and service interaction, as well as data management services.

Fig. 2.1 Digital selves—architecture

Digital twins as model representations of CPS can have various flavors, ranging from early-phase product development to real-time models of CPS for simulation and operation (cf. Jones et al., 2020). They require modeling and execution capabilities according to their purpose. They need to include adaptation capabilities at runtime in order to mirror real-world behavior and influence actuators in real time. Since CPS are composed of heterogeneous components, several layers of abstraction can be required for digital self/selves development and operation. Digital selves, in turn, enable communities to organize their interaction on a collective or societal basis.
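The runtime adaptation capability mentioned above can be read as a simple synchronization loop: the twin mirrors observed state and derives actuator commands from it. The sketch below is a hypothetical, much-simplified illustration under assumed names and a made-up control rule, not a reference implementation of any particular twin platform.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Mirrors observed CPS state and derives actuator commands at runtime."""
    setpoint: float = 50.0
    mirrored_state: dict = field(default_factory=dict)

    def synchronize(self, observation: dict) -> None:
        # Mirror the latest real-world measurements.
        self.mirrored_state.update(observation)

    def derive_command(self) -> dict:
        # Hypothetical rule: steer the observed value back towards the setpoint.
        deviation = self.mirrored_state.get("value", self.setpoint) - self.setpoint
        return {"actuator": "valve", "adjustment": -0.1 * deviation}

# Minimal usage: three observation cycles, each followed by an actuator command.
twin = DigitalTwin()
for observation in [{"value": 52.0}, {"value": 55.5}, {"value": 49.0}]:
    twin.synchronize(observation)
    print(twin.derive_command())
```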

Digital selves enrich digital twins with socio-cognitive algorithms. When adapting to situations, they become digital socio-cognitive counterparts of actors in the real world. As part of a socio-technical system, they operate as decision-making and support agents depending on the state of interactions. Like digital twins, they can operate at any stage of development, ranging from ideation to recycling.

This enrichment of digital twins relates to digital humanism, as it concerns all kinds of objects, including functional roles, the handling of work objects and products, and the judgment of events and situations. Whenever a situation requires social judgment and/or social interventions, digital selves collect and analyze relevant data to create socio-cognitive behavior using digital twin capabilities.

Like digital twins, digital selves can be used in a variety of sectors relevant for socio-technical interaction, including service, logistics, and healthcare. Digital selves can be highly adaptive, and thus accompany design and release processes, and influence the creation, operation, and function of an entire socio-technical system. As socio-cognitive models are executed to implement actor- or system-specific behavior, both the effectiveness and efficiency of these systems can be influenced by digital selves, similar to social systems.

For instance, in smart healthcare systems, care-taking activities can be influenced significantly. Consider, for example, robotic assistance. Some medical homecare users may welcome robotic assistance since they want to keep their individual privacy at home and prefer being supported by robot systems. Others may prefer social interaction in the course of medical caretaking at home and opt for human support. Each of them will perceive effectiveness with respect to their support of choice, even when efficiency might decrease due to the technology adaptation and/or social interventions required when home health services are provided.
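How a digital self might act as a decision-support agent in this homecare scenario can be hinted at with the following sketch. The preference profile, the two support modes, and the simple comparison rule are hypothetical placeholders for the socio-cognitive models discussed above.

```python
from dataclasses import dataclass

@dataclass
class CarePreferences:
    """Hypothetical socio-cognitive profile of a homecare user."""
    values_privacy: float         # 0..1, preference for keeping privacy at home
    values_social_contact: float  # 0..1, preference for human interaction

class DigitalSelf:
    """Recommends a support mode based on the mirrored preference profile."""
    def __init__(self, preferences: CarePreferences):
        self.preferences = preferences

    def recommend_support(self) -> str:
        if self.preferences.values_privacy >= self.preferences.values_social_contact:
            return "robotic assistance"
        return "human support"

print(DigitalSelf(CarePreferences(0.8, 0.3)).recommend_support())  # robotic assistance
print(DigitalSelf(CarePreferences(0.2, 0.9)).recommend_support())  # human support
```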

Trans-humanism is about evolving intelligent life beyond its current human form and overcoming human limitations by means of science and technology. Despite claims to guide trans-human development by life-promoting principles and values (More, 1990), advocating the improvement of human capacities through advanced technologies has triggered intense discussions about future IT and artifact developments. In particular, Bostrom (2009, 2014) has argued that self-emergent artificial systems could eventually control the development of intelligence and, thus, human life.

An essential driver of this development is artificial intelligence. Digital artifacts, such as robots, have increasingly become autonomous, allowing them to reproduce and evolve under their own control (cf. Gonzales-Jimenez, 2018). A key capability is self-awareness (cf. Amir et al., 2007). According to McCarthy (2004), it comprises:

  • Knowledge about one’s own permanent aspects or of one’s relationships to others

  • Awareness of one’s sensory experiences and their implications

  • Awareness of one’s beliefs, desires, intentions, and goals

  • Knowledge about one’s own knowledge or lack thereof

  • Awareness of one’s attitudes such as hopes, fears, regrets, and expectations

  • The ability to perform mental actions such as forming or dropping an intention

Some forms of self-awareness have been considered useful for digital artifacts, in particular (cf. Amir et al., 2007); a minimal code sketch follows this list:

  • Reasoning about what they are able to do and what not

  • Reasoning about ways to achieve new knowledge and abilities

  • Representing how they arrived at their current beliefs

  • Maintaining a reflective view on current beliefs and using this knowledge to revise their beliefs in light of new information

  • Regarding their entire “mental” state up to the present as an object and having the ability to transcend it and think about it
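To make a fraction of these forms more tangible—reasoning about one's own abilities, representing the provenance of beliefs, and revising beliefs in light of new information—the following sketch encodes them as a minimal data structure. It is an illustrative assumption, not a rendering of the architectures discussed by McCarthy (2004) or Amir et al. (2007); all names are hypothetical.

```python
class SelfAwareAgent:
    """Minimal illustration: abilities, beliefs with provenance, and revision."""

    def __init__(self, abilities):
        self.abilities = set(abilities)
        self.beliefs = {}            # belief -> source ("how it was arrived at")

    def can_do(self, action) -> bool:
        # Reasoning about what the agent is able to do and what not.
        return action in self.abilities

    def adopt_belief(self, belief, source):
        # Representing how the agent arrived at its current beliefs.
        self.beliefs[belief] = source

    def revise(self, belief, new_source):
        # Maintaining a reflective view and revising in light of new information.
        old = self.beliefs.get(belief)
        self.beliefs[belief] = new_source
        return f"revised '{belief}': {old} -> {new_source}"

agent = SelfAwareAgent(abilities={"navigate", "report"})
agent.adopt_belief("corridor is free", source="camera at t0")
print(agent.can_do("navigate"))
print(agent.revise("corridor is free", new_source="lidar at t1"))
```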

The recognition of the social aspect of self-awareness (cf. Amir et al., 2007) seems to be crucial, as a digital artifact may have a system theory of itself that it can use to interact with others. Originally thought of as a property to be used in multi-agent systems for dealing with errors in communication, argumentation, negotiation, etc., it could be useful for reflecting on one’s own development state and articulating meaningful inputs to co-creative processes. Consequently, Amir et al. (2007) have distinguished several types of self-awareness:

“Explicit Self-Awareness

The computer system has a full-fledged self-model that represents knowledge about itself (e.g., its autobiography, current situation, activities, abilities, goals, knowledge, intentions, knowledge about others’ knowledge of its knowledge) in a form that lends itself to use by its general reasoning system and can be communicated (possibly in some language) by a general reasoning system.

Self-Monitoring

The computer system monitors, evaluates, and intervenes in its internal processes, in a purposive way. This does not presuppose that the monitored information lends itself to general reasoning; in fact, there may be no general reasoning (e.g., operating systems, insects).

Self-Explanation

The agent can recount and justify its actions and inferences. In itself, this does not presuppose a full-fledged self-model, or integration of knowledge needed for self-explanation into a general reasoning capability.” (ibid., p. 1)

Cognitive architectures such as ACT-R, SOAR, or CLARION, mentioned in the second part of this book, take these properties into consideration. However, social cooperation does not always require the agent’s understanding. There are forms of cooperation that are evolutionary, self-organizing, and unaware. Such cooperation forms do not require joint intentions, mutual awareness, or shared plans among cooperating agents. In order to investigate intra-agent behavior of crowds regarding emotions, we therefore use more simplistic agents in our own simulations.
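The kind of simplistic agent referred to here can be hinted at with the sketch below: agents carry a single scalar emotion and let it drift towards the emotions of randomly sampled neighbors. It is a generic illustration of emotion contagion under assumed parameters and does not reproduce the simulation models used later in this book.

```python
import random

class CrowdAgent:
    """Simplistic agent with a single scalar emotion (e.g., arousal in [0, 1])."""
    def __init__(self):
        self.emotion = random.random()

    def update(self, neighbors, contagion=0.2):
        # Move the agent's emotion slightly towards the neighborhood average.
        if neighbors:
            neighborhood = sum(a.emotion for a in neighbors) / len(neighbors)
            self.emotion += contagion * (neighborhood - self.emotion)

agents = [CrowdAgent() for _ in range(50)]
for step in range(20):
    for agent in agents:
        neighbors = random.sample(agents, 5)   # crude stand-in for spatial proximity
        agent.update(neighbors)
print(round(sum(a.emotion for a in agents) / len(agents), 3))  # converging mean emotion
```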

Nevertheless, awareness of the self and other actors can be considered one of the important factors of trans-human (system) developments. Whether self-awareness (consciousness) can be artificially created or will automatically emerge is discussed in more depth at the beginning of the second part of this book. Apart from that, from the engineering point of view, we have to ask who is in charge. We capture this issue in part III, reflecting on constructing mechanisms featuring consciousness through awareness and embodied cognition.

2 Singularity: Control of the Next System Generation?

Disruptive digital technologies have always been hard to convey due to unpredictable future developments based on those technologies crossing boundaries. A high impact on society or on human existence as currently perceived requires reflecting on the limits of human boundaries that could disrupt the system we live in. Singularity is such a cross-border issue. Trans-humanism, as the integration of technology with our human biological computing systems, considers human beings as physical bodies powered by electrical impulses controlled by the brain as computing machinery. It allows fine-tuning behavior within societal limits. These limits might not be valid for trans-humanist systems. By integrating with the human body, artificial intelligence algorithms unlock the potential for novel community structures, redefining the meaning of being and creating.

As detailed by Spindler and Stary (2019, p.1307), “singularity (more precisely, technological singularity) is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible by humans—the control and capability of further development is assigned to the digital system, resulting in unfathomable changes to human civilization. It will touch all humans, the entire human system as an entity.”

In their 2013 video, Steve Aoki and Ray Kurzweil introduced singularity with the following sequence:

  • “My Name is Kay—Singularity”

  • “I wanna tell you about our future, I estimate around 2025, that we will have expanded our intelligence a billionfold by merging with the Artificial Intelligence we are creating.”

  • “But that’s such a profound expansion that we borrow this metaphor from physics and call it a singularity.”

  • “We’ll be a hybrid of biological and non-biological intelligence,” he says. “But the non-biological part of ourselves will also be part of our consciousness” (see also https://www.techbubble.info/blog/singularity-transhumanism).

Companies, such as Dangerous Things (https://dangerousthings.com/), anticipate bio-hacking as the next phase in human (sic!) development, likely to be supported by kits such as the xNTi kit (https://dangerousthings.com/shop/xnti/). Evolution is characterized by self-organized “co-”evolution: “Our bodies are our own, to do what we want with. The ‘socially acceptable’ of tomorrow is formed by boundaries pushed today, and we’re excited to be a part of it. We hope you will be too” (https://dangerousthings.com/evolution/). Awareness tests, such as artificial consciousness tests, could help to determine whether the resulting systems or trans-humans should be deemed morally unfit for further collaboration with others (cf. Bishop, 2018).

Accelerating the evolution of intelligent life beyond its current human form by means of science and technology is grounded in a variety of disciplines focusing on the dynamic interplay between humanity and technology developments. On the technology side, biotechnology, robotics, information technology, molecular nanotechnology, and artificial general intelligence are the focus of interest (cf. Goldblatt, 2002). According to the trans-humanist declaration (http://humanityplus.org/philosophy/transhumanist-declaration/), trans-humanists strive for the ethical use of technologies when developing post-human actors.

They feature uploading, i.e., the process of transferring an intellect from a biological brain to a computer system, and anticipate a point in time “when the rate of technological development becomes so rapid that the progress curve becomes nearly vertical. Within a very brief time (months, days, or even just hours), the world might be transformed almost beyond recognition. This hypothetical point is referred to as the singularity. The most likely cause of a singularity would be the creation of some form of rapidly self-enhancing ‘greater-than-human intelligence’” (http://humanityplus.org/philosophy/) (cf. Kurzweil, 2005).

Trans-humanist protagonists envision overcoming aging and widening cognitive capabilities, mainly by whole brain emulation, while creating substrate-independent minds. According to the declaration, system development should be guided by risk management and social processes “where people can constructively discuss what should be done, and a social order where responsible decisions can be implemented” (see declaration item 4 in the Trans-humanist Declaration—http://humanityplus.org/philosophy/transhumanist-declaration/).

Post-humans could become self-informed and self-developing socio-technical system (elements)

Increasing personal choices over how individuals design their lives based on assistive and complementary technologies (termed “human modification and enhancement technologies” in item 9 of the Trans-humanist Declaration—http://humanityplus.org/philosophy/transhumanist-declaration/) is a vision that seems to attract many people (cf. Singularity Network—https://www.facebook.com/groups/techsingularity/). Trans-humanists envision “post-humans” as the design entity, emerging through a continuous growth of intelligence that can be uploaded to computer systems. Laying the ground for levels of consciousness that human brains cannot yet access, post-humans could either be completely synthetic artificial intelligences or be composed of many smaller systems augmenting biological human capabilities, which finally culminates in profound enrichments of human capabilities (cf. More & Vita-More, 2013).

Besides applying advanced nano-, neuronal, and genetic engineering, artificial intelligence and advanced information management play a crucial role when developing intermediary forms between the human and the post-human (termed trans-humans). Intelligent machines, applied to an ever-increasing range of tasks with ever higher levels of autonomy, are therefore subjects of design and, due to their self-replicating capability, later on designers themselves. Increasingly replacing human intelligence with machine intelligence is expected at some point to create machine intelligence superior to individual human cognitive intelligence, characterized by effective and efficient planning and self-emergence (cf. Bostrom, 2014).

Human-centeredness depends on the design of trans-humans

Based on the current understanding of humanoid systems, socio-technical system development becomes a multi-faceted task. Humanoid systems are human-like systems, primarily known from robotics—they are also known as androids. They have artificially intelligent system components that are matched to one another based on human models. Their spectrum of applications makes it possible to investigate which human-like behaviors of an artificial system people accept, to what extent and in which situations, and whether they ultimately use them meaningfully. The Pepper robotic system (see http://www.SoftBankRobotics.com) is not only able to respond to questions and give instructions on how to cope with tasks but also to take action, such as showing the way. With the ability to move even small loads, thanks to lightweight arms and “four-finger hands” (cf. Rollin’ Justin, http://www.dlr.de), nothing stands in the way of a pervasive networking of humanoid systems with areas of human activity.

Humanoid systems therefore have a mobile base that allows autonomous operation over a long range, in contrast, e.g., to mobile devices that require a physical carrier system. Their core components are motion detection sensors and stereo cameras, which enable 3D reconstructions of the environment to be created even in unstructured, dynamically changing settings. In the collaborative areas of activity and technologies that are now increasingly emerging, the humanoid system, which is fundamentally independent and operates without human support, is not used in isolation. It is to be connected to other applications, especially IoT applications (Li et al., 2018). Humanoid systems thus become part of federated systems, i.e., they can work both autonomously and in combination with other systems.

The design of integrated humanoid systems faces the challenge of taking different development approaches into account (cf. Stary & Fuchs-Kittowski, 2020). The increasing maturity of humanoid systems through activated degrees of freedom leads to the characteristic of pursuing several goals at the same time while adhering to a task hierarchy. A robot system can, e.g., serve drinks while observing the surroundings in order to avoid collisions, if this functionality is to be used at the service level in socio-technical environments. However, coordination with human activities on the one hand and with technologies already in use on the other hand is necessary. It depends on the area of application whether and how people configure humanoid systems and adapt them to cope with tasks. These organizational requirements may lead to additional technical functionality based on technological integration (Rosly et al., 2020).
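The drink-serving example can be read as a prioritized task hierarchy in which safety-related behavior overrides service behavior. The following sketch is one hypothetical way to encode such a hierarchy; the task names and the sensor flags are illustrative only and do not reflect any particular robot platform.

```python
def choose_action(obstacle_detected: bool, drink_requested: bool) -> str:
    """Resolve a simple task hierarchy: collision avoidance overrides service tasks."""
    # Ordered from highest to lowest priority; activation conditions are illustrative.
    task_hierarchy = [
        ("avoid collision", obstacle_detected),
        ("serve drink", drink_requested),
        ("observe surroundings", True),        # default background task
    ]
    for task, is_active in task_hierarchy:
        if is_active:
            return task
    return "idle"

print(choose_action(obstacle_detected=False, drink_requested=True))   # serve drink
print(choose_action(obstacle_detected=True, drink_requested=True))    # avoid collision
```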

A typical example is the home healthcare sector, which is equipped with M-IoT (Medical Internet of Things) applications (Sadoughi et al., 2020). Here, nursing staff, for example, must develop the willingness to adjust humanoid systems to patients and configure them in such a way that patient data is transferred for emergency care in order to enable effective relief. In terms of acceptance, humanoid systems become part of integrated systems.

In addition, the construction of a functional humanoid robot in the sense of a human-like artificial intelligence can lead to novel design and development tasks (cf. Andersen et al., 2019). Configuration and adaptation become a learning process (cf. Stary, 2020a) that is implemented in a humanoid system with the help of self-learning algorithms. Participation in social processes can be influenced on the basis of observations and learning psychological data (cf. Can & Seibt, 2016). This means that, in addition to the functional activities, social tasks could be addressed by humanoid systems.

3 Complex Adaptive Systems

When social tasks can be taken over by digital systems, the entire socio-technical system is continuously changing. Since each actor can be of a different kind, either technical, social, or in between, we need to conceptualize such systems according to the resulting complexity and dynamic change. Consequently, we adopt the concept of Complex Adaptive Systems when considering socio-technical systems that incorporate trans-humanist and cyber-physical developments.

Complex Adaptive Systems (CAS) research started in the USA in opposition to the European “natural science” tradition in the area of cybernetics and systems (Chan, 2001). Although CAS theory shares the subject of general properties of complex systems across traditional disciplinary boundaries (as in cybernetics and systems), it relies on computer simulations as a research tool (as initially pointed out by Holland, 1992) and considers less integrated or “organized” systems, such as ecologies, in contrast to organisms, machines, or enterprises. Many artificial systems are characterized by apparently complex behaviors due to often nonlinear spatio-temporal interactions among a large number of component systems at different levels of organization; such systems have been termed Complex Adaptive Systems.

CAS are dynamic systems able to adapt in and evolve with a changing environment. It is important to realize that there is no separation between a system and its environment in the idea that a system always adapts to a changing environment. Rather, the concept to be examined is that of a system closely linked with all other related systems making up an ecosystem. Within such a context, change needs to be seen in terms of co-evolution with all other related systems, rather than as adaptation to a separate and distinct environment (Chan, 2001, p. 2). CAS have several constituent properties (ibid., p. 3ff.; citations marked in italics):

  • Distributed Control: There is no single centralized control mechanism that governs system behavior. Although the interrelationships between elements of the system produce coherence, the overall behavior usually cannot be explained merely as the sum of individual parts.

  • Connectivity: A system does not only consist of relations between its elements but also of relations with its environment. Consequently, a decision or action by one part within a system influences all other related parts.

  • Co-evolution: Elements in a system can change based on their interactions with one another and with the environment. Additionally, patterns of behavior can change over time.

  • Sensitive Dependence on Initial Conditions: CAS are sensitive due to their dependence on initial conditions. Changes in the input characteristics or rules are not correlated in a linear fashion with outcomes. Small changes can have a surprisingly profound impact on overall behavior, or vice versa, a huge upset to the system may not affect it. … This means the end of scientific certainty, which is a property of “simple” systems (e.g., the ones used for electric lights, motors, and electronic devices). Consequently, socio-technical systems are fundamentally unpredictable in their behavior. Long-term prediction and control are therefore believed not to be possible in complex systems (a minimal numerical illustration follows this list).

  • Emergent Order: Complexity in complex adaptive systems refers to the potential for emergent behavior in complex and unpredictable phenomena. Once systems are not in equilibrium, they tend to create different structures and new patterns of relationships. … Complex adaptive systems function best when they combine order and chaos in an appropriate measure—this phenomenon has been termed Far from Equilibrium. In their dynamics, CAS combine both order and chaos and, thus, stability and instability, competition and cooperation, and order and disorder—this has been termed the State of Paradox.
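The sensitive dependence on initial conditions noted above can be illustrated numerically with the logistic map, a standard toy model from the complexity literature. The short sketch below is added purely as an illustration and is not drawn from Chan (2001); the parameter r = 4.0 and the initial values are arbitrary choices.

```python
def logistic_trajectory(x0, r=4.0, steps=20):
    """Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two initial conditions differing by one part in a million...
a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)
# ...diverge to entirely different values within a few dozen iterations.
print(round(a[-1], 4), round(b[-1], 4), "difference:", round(abs(a[-1] - b[-1]), 4))
```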

At the bottom of Fig. 2.2, a schema of a complex socio-technical system is shown as a group of different types of elements. They are far from equilibrium when forming interdependent, dynamic evolutionary networks that are sensitively dependent and fractally organized (cf. Fichter et al., 2010). Taking a CAS perspective requires system thinking in terms of networked, yet modular, elements acting in parallel (cf. Holland, 2006). In socio-technical settings, these elements can be individuals, technical systems, or their features. Understood as CAS, they form and use internal models to anticipate the future, basing current actions on expected outcomes. It is this attribute that distinguishes CAS from other kinds of complex systems; it is also this attribute that makes the emergent behavior of CAS intricate and difficult to understand (Holland, 1992, p. 24).

Fig. 2.2 Schema of a Complex Adaptive System and its dynamic behavior

According to CAS theory, each element in a CAS setting sends and receives signals in parallel, as the setting is constituted by each element’s interactions with other elements. Actions are triggered by other elements’ signals. In this way, each element also adapts and, thus, evolves through changes over time. Self-regulation and self-management have become crucial assets in dynamically changing socio-technical settings, in particular organizations (Allee, 2009; Firestone & McElroy, 2003). Self-organization of the concerned stakeholders as system elements is considered key to handling requirements for change. However, for self-organization to happen, system elements need to have access to the relevant information of a situation. Since the behavior of autonomous elements cannot be predicted, a structured process is required to guide behavior management according to the understanding of system elements and their capabilities to change the situation (cf. Allee, 2009; Stary, 2014).
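A minimal sketch of this signal-based view, under assumed update rules, is given below: elements exchange signals in parallel and adapt both their state and a simple internal model, so that the aggregate pattern is not programmed into any single element. The class, the threshold rule, and the sampling of signals are illustrative assumptions only.

```python
import random

class Element:
    """CAS element: sends/receives signals and adapts a simple internal model."""
    def __init__(self):
        self.state = random.choice([0, 1])
        self.threshold = random.random()      # internal model, adapted over time

    def signal(self) -> int:
        return self.state

    def adapt(self, received_signals):
        # Adopt state 1 if the locally observed share exceeds the own threshold.
        share = sum(received_signals) / len(received_signals)
        self.state = 1 if share > self.threshold else 0
        # Slowly adjust the internal model towards what was observed.
        self.threshold = 0.9 * self.threshold + 0.1 * share

elements = [Element() for _ in range(100)]
for step in range(30):
    signals = [e.signal() for e in elements]          # sent in parallel
    for e in elements:
        e.adapt(random.sample(signals, 10))           # each receives a local sample
print("share of state 1:", sum(e.signal() for e in elements) / len(elements))
```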

From the interaction of the individual system elements arises some kind of global property or pattern, something that could not have been predicted from understanding each particular element (Chan, 2001). A typical emergent phenomenon is a social media momentum stemming from the interaction of users when deciding upon a certain behavior, such as spontaneous meetings. Global properties result from the aggregate behavior of individual elements. Although it is still an open question how to apply CAS to engineering systems with emergent behavior (cf. Holland, 2006), preprogramming behavior in socio-technical system design is a challenging task, as humans or other elements may change behavioral structures in response to external or internal stimuli. When these development processes are (self-)organized, they usually generate even more complexity.