The Topic and Its Rationale

I intend to analyse how the processes of human enhancement brought about by the digital revolution (ICTs, AI, robotics) modify social identities, relationships, and social organisations, and under what conditions this revolution can shape organisational forms that are able to promote, rather than alienate, humanity.

A recent debate in France (Bienvault 2019) about the use of robots in care for the elderly ended with the question: ‘can the human relationship be the product of our technological feats?’. We must look for a plausible answer.

Personal and social identities, relations, and organisations are forced to take shape in the environment of a Digital Technological Matrix (henceforth DTM), a symbolic code that tends to replace all the ontological, ideal, and moral symbolic matrices that have structured societies in the past. The concept of ‘matrix’ used here refers to the meaning it has in my book on ‘the theological matrix of society’ (Donati 2010). It indicates a vision of the world that has a religious character, or one functionally equivalent to religion, in that it expresses the ‘ultimate values’ of that society. In the case of the DTM, these values are those of an indefinite evolution of humanity and the cosmos, which replaces representations of the world as a reality created and supported by a supernatural reality.Footnote 1

Obviously, I am talking about a trend, not only because in each society different cultural matrices coexist, particularly in the multicultural societies emerging today, but also because it is always possible that conflicts between cultural matrices will intensify and diversify. What I argue is that the DTM, like all cultural matrices that have a transcendental character (that is, that express a vision of humanity and the cosmos in their ultimate ends), has an intrinsic propensity to become the dominant matrix, similarly to what happened with the cultural matrix of modernity, which made traditional cultures marginal. As a form of Tech-Gnosis, the peculiarity of the DTM is that of making the boundaries between human and non-human labile and crossable in every way in order to foster hybrids. Hybridisation is defined here as the entanglements and interchanges between digital machines, their ways of operating, and human elements in social practices.

Hybrids, however, are not random or purely contingent entities. They stem from complex interactional networks, in which social relations are mediated by the DTM. The processes of hybridisation of social identities and relations are selective and stratified according to the ways in which the human/non-human distinction is relationally thought of and practised. We are witnessing a co-evolution between AI/robotisation and human relations.

Three scenarios of hybridisation are outlined along with three kinds of societal morphogenesis: adaptive, turbulent, and relationally steered. The chapter examines these scenarios of co-evolution, their consequences, and the issues of social regulation.

I am not so interested in discussing whether AI or robots can be more or less human in themselves (or not human at all), but rather how they can interact with humans and affect their social relationships so as to generate a different kind of society characterised by the hybridisation between human and non-human.

I will tackle two major themes: (a) the first concerns the problem of how we can distinguish interhuman relations from the relationships between human beings and machines, which implies the need to clarify what the processes of hybridisation of identities and social relations consist of and how they happen; (b) the second concerns the consequences of digitalised technological innovations on the hybridisation of social institutions and organisations and, ultimately, the possible scenarios of a ‘hybridised society’.

Let Me Explain the Broader Cultural Framework to Which This Contribution Refers

The encyclical Laudato si’ aims at an integral ecology that includes the environmental, technological, economic, and social dimensions in the awareness that ‘everything is intimately related’ (§ 137). To achieve the goal of sustainable and integral development, these various aspects must be treated ‘in a balanced and interconnected manner’ so that ‘no one be left behind’ (§§ 2 and 4).

The idea that we can heal our relationship with nature without healing ‘all fundamental human relationships’, and therefore without fully including in this therapeutic action ‘the social dimension of the human being’ (to which Pope Francis adds the transcendent), is illusory. The vectors of ethics, culture, and spirituality converge on the social dimension: the ecological crisis is the expression of a deeper crisis that affects these essential elements of human existence in modernity, and there are not two separate crises, ‘but a single and complex socio-environmental crisis’ (§§ 119 and 139).

For this reason, we must first try to straighten out the pillars of human relations and anthropology, which are the real ‘root’ of a crisis that distorts the development of science and technology, the mechanisms of the economy, and the responsibilities of politics.

Science, technology, and economics are only a part and not the whole of the social as it is understood here, as are the forms of behaviour and the ways of understanding reality associated with them (§ 139). It must be said that the social includes the economic and the technological, and not vice versa. One cannot respond to the loss of orientation towards the common good, which reduces the economy and technology to a sole concern for profit and politics to an obsession with power (§ 198), exclusively through routes internal to the economy and technology; to do so is to remain a prisoner, perhaps in good faith and with good intentions, of the ideology of their hegemony.

This chapter seeks to clarify the meaning of this perspective and make it operational.

The Pervasiveness of the Digital Matrix

In the famous science fiction movie of the same name (1999), the Matrix is depicted as a dystopian world in which reality, as perceived and lived by most humans, is actually a simulated reality created by sentient machines to subdue the human population while their bodies’ heat and electrical activity are used as an energy source. Sentient machines rule the world with lasers, explosions, and killer robots. This Matrix is a ‘dream world’, where cyborgs are supposed to simulate a superman who is a mixture of a super-animal and super-machine. In a famous dialogue between two protagonists, Morpheus says, ‘The Matrix is everywhere. It is all around us. Even now, in this very room. You can see it when you look out your window or when you turn on your television. You can feel it when you go to work... when you go to church... when you pay your taxes. It is the world that has been pulled over your eyes to blind you from the truth’. Neo asks, ‘What truth?’, to which Morpheus replies, ‘That you are a slave, Neo. Like everyone else, you were born into bondage. Into a prison that you cannot taste or see or touch. A prison for your mind’. This is what I would call the Matrix Land. In the end, the Matrix appears as it actually is: nothing but the green lines of a programming code that pervades all the environment of the human condition.

Leaving aside the aspects of science fiction, one can take the Matrix to mean the Digital Technological Mind that is made pervasive and omnipresent by the global ICT network constituted by all the tools and symbolic codes that operate on the basis of algorithms.

From the cultural point of view, the Digital Matrix (DTM) is the globalised symbolic code Footnote 2 from which digital artefacts are created in order to assist or substitute for human agency by mediating interhuman relations or by making them superfluous. From the structural and practical point of view, the DTM is the complex of all digital technologies, based on scientific knowledge and engineering, that consists of computerised devices, methods, systems, and electronic machines (digital electronics or digital electronic circuits). Of course, the artefacts produced by the DTM can have different forms of intelligence and more or less autonomy (Lévy 1997).

In short, the DTM software is part of the cultural system, and its hardware fits into the social structures by occupying the positions that are nodes in the networks. It is important to understand, first, that the DTM symbolic code plays a major or dominant role within the whole cultural system of society and, second, that it is the starting point of innovation processes (through the discoveries and inventions of scientific research that are subsequently applied in new technologies).

I will explain the second point in the next section (see Fig. 1). As to the first, I contend that the DTM symbolic code plays a role with respect to all other cultural symbols analogous to the way in which the generalised symbolic medium of money has functionalised all the other generalised symbolic media to itself within modern society. Money has been (and still is) the G.O.D. (generator of diversity) of modern society. It has functionalised power, influence, and value commitment to itself.

Fig. 1 The morphogenetic cycle (run by the DTM) that generates the hybridisation of society. Source: developed by the author

The DTM symbolic code is now the Generator of Diversity of transmodern society (or ‘onlife society’, i.e. the society of online life). As an instrumental symbol, it functionalises to itself all the other symbols, e.g. the finalistic symbols such as life and death, creation or evolution; the moral or normative symbols such as justice or injustice, honesty or dishonesty, normative or anormative, disassembling or reassembling, gluing or ungluing; and the value symbols such as worthy or unworthy, good or bad, positive or negative, pleasant or unpleasant.Footnote 3

The dualities inherent in all these symbols are treated in an apparently relational way, since the binary code allows gradations and combinations of each type in the 0/1 sequences. These operations produce eigensymbols (eigenvalues), i.e. symbols autogenerated by the code itself. As I have argued elsewhere, the relational modes with which the Digital Matrix operates can be read in different ways, that is, as purely interactive flows producing only random and transactional outcomes (as relationist sociology claims) or as processes that generate relatively stable cultural and social structures that are emergent effects endowed with sui generis qualities and causal properties (Donati 2020).

According to the relationist viewpoint, digital technologies (machines) represent the material infrastructure of an anonymous DTM of communication that feeds the autonomisation of a multiplicity of communicative worlds separated from a humanistic conception of intersubjective and social relations. This DTM becomes detached from any traditional culture based on religious or moral premises and builds its own political power system, economy, and religion. Harari (2017) has taken this scenario to its extreme consequences, claiming that it prefigures how the quest for immortality, bliss, and divinity could shape humanity’s future. In this scenario, human beings will become economically useless because of human enhancement, robotisation, and artificial intelligence, and a new religion will take humanism’s place.

The rationale for this view is that the DTM promotes the digital transformation of the whole society, which is the change associated with the application of digital technology in all aspects of social life, and is expressed in the Infosphere, defined as the environment constituted by the global network of all devices in which AIs are at work receiving and transmitting information. The DTM is therefore the ‘environment’ of all social interactions, organisations, and systems. As such, it promotes a culture deprived of any teleology or teleonomy since it operates as a substitute for any rival moral or theological matrix.Footnote 4

The Hybridisation Issue

The hybridisation of society emerges from a process of social morphogenesis in which social structure, culture, and agency intertwine so as to produce social and cultural hybrids together with a hybridised agency. Figure 1 schematises these processes, which consist of a series of successive morphogenetic cycles. Footnote 5

Each morphogenetic cycle can start from the side of the social structural domain or from the side of the cultural domain. Let us see what happens in either case.

  1. If the morphogenetic cycle begins on the side of the structural domain, for example when a robot is introduced in a certain social context (a factory, a school class, etc.), what happens is a redefinition of the social network and the social positions of the actors involved. As a consequence of the presence of the newcomer, structural interactions change and produce structural hybrids. These hybrids affect human agency in the direction of influencing her impact on the cultural domain.

  2. If the morphogenetic cycle starts from the side of the cultural domain, for example when new software (languages, programs, algorithms) is invented and adopted by people, then new sociocultural interactions produce cultural hybrids. These cultural elaborations modify human agency, which introduces changes into the structural domain.

  3. Human agency changes through active (reflexive) responses to the above changes. The possibilities for developing this morphogenetic process are certainly dependent on how the DTM operates in the cultural and structural domains, but we must also consider the autonomous role of human agency (a minimal illustrative sketch of such a cycle follows this list).
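Purely as an illustration of the logic of these cycles, the following sketch (in Python, with entirely hypothetical names and an assumed numerical scale for reflexivity) schematises a single cycle in which an innovation entering either domain conditions agency, whose reflexive response then elaborates the other domain. It is a toy rendering of the scheme in Fig. 1, not a model belonging to the theory itself.

    from dataclasses import dataclass, field

    # Toy rendering of a morphogenetic cycle (Fig. 1): an innovation enters one
    # domain (structural or cultural), conditions human agency, and agency's
    # reflexive response elaborates the other domain, producing hybrids.

    @dataclass
    class Domain:
        name: str
        hybrids: list = field(default_factory=list)

    @dataclass
    class Agency:
        reflexivity: float  # 0 = fully passive, 1 = fully autonomous (assumed scale)

        def respond(self, conditioning: str) -> str:
            # The higher the reflexivity, the more the response reworks the
            # conditioning instead of merely reproducing it.
            mode = "reworked" if self.reflexivity > 0.5 else "reproduced"
            return f"{conditioning} ({mode} by agency)"

    def morphogenetic_cycle(start: Domain, other: Domain, agency: Agency, innovation: str) -> None:
        """One cycle: innovation -> starting domain -> agency -> other domain."""
        start.hybrids.append(f"{innovation} -> {start.name} hybrid")
        elaboration = agency.respond(start.hybrids[-1])
        other.hybrids.append(f"{elaboration} -> {other.name} hybrid")

    structure = Domain("structural")
    culture = Domain("cultural")
    agent = Agency(reflexivity=0.7)

    # Cycle starting from the structural domain (e.g. a robot enters a factory).
    morphogenetic_cycle(structure, culture, agent, "robot introduced")
    # Cycle starting from the cultural domain (e.g. a new algorithm is adopted).
    morphogenetic_cycle(culture, structure, agent, "new algorithm adopted")

    print(structure.hybrids)
    print(culture.hybrids)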

One may wonder: which morphogenesis occurs in the human person (agency) and in her relational qualities and properties? What happens to social relations between humans and between humans and robots?

Figure 1 illustrates the relational structure and dynamics of co-evolution between digital machines, their ways of operating (according to their symbolic code), and human agency. Co-evolution can obviously have positive or negative results in terms of compatibility among the different components, and therefore it can generate both relational goods and relational evils.

Notice that the morphogenesis of human agency is both passive and active in both directions, towards the cultural as well as the structural domain. (a) It is passive because it is influenced alternatively by one of the two domains and (b) at the same time it is active towards the other domain. It is here that the body-mind relational unit has to confront the coherences or dissonances between the two domains. The symbol [ρ] means that there is a connection of some sort between the cultural and structural processes of hybridisation of the human person in her social identity and in her social relationships. Such a connection can be of different kinds, from the maximum of synergy (complementarity), as when cultural identity is well adapted to the position occupied in the social structure, to a maximum of conflict (contradiction), as when cultural identity conflicts with the position occupied in the social structure.

What is most relevant in Fig. 1 is the dematerialisation of human agency due to AIs operating in and through the global digital network (the internet). The process of hybridisation in fact takes place by contaminating relationships that operate on the basis of the principles of classical physics applied to the natural world with virtual relations that operate on the basis of the postulates of quantum physics, where the latter dematerialise the natural world.Footnote 6

The conceptual scheme summarised in Fig. 1 is fundamental to understanding the hybridisation processes of social identities, relationships, and organisations in the light of the idea that, to speak the language of Adams (2006: 517), ‘hybridity becomes a complex potential’. These processes proceed according to a relation of contingent complementarity between changes in the cultural background that encourage the development of certain forms of virtual thinking and utopian discourse, on the one hand, and changes in the material production of new technologies (AI/robots) in the structural domain, on the other hand. Hybridisation occurs when culture (software) and structure (hardware) mutually influence and reinforce each other, as long as human agency adapts to these processes. Relations play the most crucial role because changes in identities and organisations depend on them. The hybridisation of society is based on the assumption that these processes operate in a functional way, making sure that the passivity of human agency translates into its active participation in such an evolution with minimal reflexivity or with a reflexivity substantially dependent on the DTM. The kind and extent of hybridisation depend on how subjects reflect on their relationships, which in turn depends on their internal conversation (personal reflexivity).

The hybridisation of people’s identities and social organisations (families included) consists in the fact that these entities change their relational constitution to the extent that the cultural and structural processes of change affect (in Fig. 1 they ‘cross’) the human person and modify her ways of relating to herself, to others, and to the world. Agency is obviously dependent on how the brain and the mind of the person work. Both the brain and the mind are seriously influenced by the way technologies operate (Greenfield 2014) because they absorb certain ways of communicating, of using and combining information, and of following performance logics that can ignore or alter analogical thinking. As Malabou (2009) claims, the human being finds its essential difference in the plasticity of the mind, which the robot will never have (synaptic plasticity is unique; there cannot be two identical brains).

Since hybridisation means some form of mutual adaptation between humans and AI bots, one can ask how far this mutual influence (up to mutual constitution) can go. Personally, I do not believe that humanoid robots can be or become ‘relational subjects’ in the sense of being able to relate to others and to the social context by forming a we-relation with adequate personal and relational reflexivity (Donati and Archer 2015). They cannot for various reasons (Donati 2020), in particular because their mind and their body–mind complex are ontologically different from those of humans, so that human–human interaction (HHI) and human–robot interaction (HRI) are ontologically different (assuming, of course, that human relations are constituted by reciprocal actions, which are something more than and different from pure behaviour). Basically, an AI bot’s mind (consciousness) has an inside-out and outside-in relationality that is structurally different from the human one (Lindhard 2019). There is much debate about whether robots can be friends of human beings. My position is that it is unlikely that humans will ever be able to be friends with robots. As Mathias Tistelgren (‘Can I Have a Robot Friend?’, quoted by Barlas 2019) claims, the capacity to enter into a friendship of one’s own volition is a core requirement for a relationship to be termed friendship. We also have a duty to act morally towards our friends, to treat them with due respect. To be able to do this, we need to have self-knowledge, a sense of ourselves as persons in our own right. We do not have robots who display these capacities today, nor is it a given that we ever will.

Bearing in mind the substantial differences between social relations and technological relations (Nørskov 2015), I believe however that it is possible to recognise that ‘social robots, unlike other technological artifacts, are capable of establishing with their human users quasi-social relationships as pseudo-persons’ (Cappuccio et al. 2019: 129).

Hybridisation means that, through sustained interactions with technologies (the ‘fully networked life’), the previous modes of considering (conceiving) oneself, relationships with others, and what form to give to a social organisation in keeping with the (analogic) principle of reality are mixed with the way digital technology works, i.e. the (fictitious) principle of digital virtuality. Hybridisation means blending the real and the fictitious, the analogue and the digital. This happens, for example, when one wants to modify one’s own image or undermine another person’s reputation on social networks.

To clarify this point, I must say that I do not use the term ‘real’ in opposition to digital, but in my view the polar opposites are ‘real’ vs ‘fictitious’ and ‘digital’ vs ‘analogical’.Footnote 7 If a hacker creates a profile on Facebook, or another social network, and transmits fake news, such news has a reality, even if it is the reality of a fiction. While the story’s referent is not real, the story itself is causally efficacious, and therefore real in that regard. So, instead of contrasting a digital message to a real message, it is better to contrast a digital message with an analogue, because the analogical code implies, in fact, an effective analogy relationship between what is written or transmitted and what is true. In short, hybridisation can take place by putting in interaction, and then mixing what is real and what is fictitious, what is digital and what is analogue. Table 1 shows the different types of relations that, in different combinations, produce different forms of hybridisation.

Table 1 Types of relations whose combination generates hybridisation

Qualitative and quantitative research provides some empirical evidence of these hybridisation processes:

  • About ICTs: in the chapter ‘Twenty-First Century Thinking’, Greenfield (2009) suggests that the decline of reading in favour of fragmentary interactions, such as computer games or short messages on the internet, threatens the substance of both our neurological makeup and our social structures;

  • Human persons alter their pace, rhythm, and sense of Self; they change their personal reflexivity;

  • About AI/robots: authentic human relationships are reduced to and sacrificed in favour of digital relationships (‘We expect more from technology and less from each other’); technology appeals to us most where we are most vulnerable (passions, feelings, interests, etc.);

  • Digital relations erase precisely those aspects of randomness that also make human life, people, and relationships interesting, spontaneous, and metamorphic;

  • By developing new technologies, we are inevitably changing the most fundamental of human principles: our conception of self, our relationships to others, and our understanding and practice of love and death; nevertheless, we should not stop developing new technologies, but do it differently by adopting an approach of relational observation, diagnosis, and guidance (as I will say later).

In short, the processes of hybridisation of identities, human relations, and social organisations are closely connected and change together (in parallel with each other), varying from case to case due to the presence of intervening variables in specific contexts.

The Process of Hybridisation of Identities and Relations

From the point of view of critical realism, hybridisation can and must be considered as a morphogenetic process that leads from entities structured in a certain way to entities structured in another way. The fact remains that we should understand how to define the proprium of the human in hybridisation processes.

Lynne Rudder Baker’s theory is often cited as a solution to this problem since it proposes a ‘constitutive’ rather than a substantive (so-called ‘identifying’) view of the human.

Baker (2000) argues that what distinguishes persons from all other beings is the mind’s intentionality detached from corporeity since we, as human beings, can be fully material beings without being identical to our bodies. In her view, personhood lies in the complex mental property of first-person perspective that enables one to conceive of one’s body and mental states as one’s own. The argument is that the human mind must have a bodily support since the body–mind relation is necessary, but the type of body—and, therefore, the type of body–mind relationship—can be quite contingent; according to her, we can change the body with artefacts, provided we do so from the perspective of the first person.Footnote 8 The relation between one’s mind and one’s body is open to any possibility. Consequently, since the personality is equal to the self-thinking mind, we must then acknowledge the existence of a personality in any animal or machine that can be judged sentient and thinking, on the condition that it is aware that it is itself that thinks. In this way, the possibility of anthropomorphising robots, as well as robotising human bodies, becomes thinkable and legitimised.

Following this line of thought, some thinkers today are more and more inclined to acknowledge the existence of moral behaviours in certain species of higher primates, like the chimpanzees. They are supposed to have a ‘moral intelligence’, i.e. they are compassionate, empathetic, altruistic, and fair (Bekoff and Pierce 2009), and, along the same lines, these thinkers recognise the possibility that special robots endowed with an artificial morality might exist (Pana 2006). It seems to me that this perspective is totally at odds with a realistic social ontology, for which human relationality—between body and mind, as well as in social life with others (the two are closely related)—has qualities and properties that cannot be assimilated to the relationships that certain species of animals or ultrasophisticated intelligent machines can have. On the whole, Baker’s theory is inadequate to the task of accounting for the possibilities and the limits of the processes of hybridisation because it does not account for the relational constitution of identities and social forms.

The hybridisation of human relations, paradoxically, is due to the logic inherent in such relationships, which, by definition, must be open to the world and cannot remain closed in self-referentiality, as artificial machines do. In interacting repeatedly with machines (AI/robots), agents can incorporate certain aspects of the communication logic of the machines into their way of relating to others and the world. However, we cannot equate the relations between humans with those between humans and robots. How can one fail to see that saying, for example, that the robot is ‘my best friend’ erases the qualitative difference between these relations and reduces the relationship to pure communication (i.e. to communication and only to communication, as Luhmann 1995 states)? This is an unjustified and unjustifiable reduction because communication always takes place within a concrete relationship and takes its meaning, qualities, and properties from the kind of relationship in which it occurs, whereas the kind of relationship depends on the kind of subjects that are in communication.

The problems caused by DTM (ICTs and robotics) arise when sociability is entrusted to algorithms. The introduction of the sentient robot changes the context and the relationships between human subjects as well as the form of a social organisation.

Let us take the example of a social structure of a sports type, such as a football team and a football match. How does the behaviour of a football team change if it is equipped with a goalkeeper-robot capable of saving every shot? And if two teams agree to play a match in which each of them has a robot as goalkeeper, what will be the consequences for human behaviour?

There cannot be a ‘we believe’ or a relational good between humans and robots for the simple fact that the supervenient human–robot relation is ontologically different from the supervenient relationship between human beings.Footnote 9

The partial or total hybridisation cycle of relationships is selective and stratified on the basis of: (1) how the personal and relational reflexivity of agents operates, given the fact that reflexivity on these processes is necessary to get to know each other through one’s own ‘relational self-construal’ (Cross and Morris 2003) and (2) how the reflectivityFootnote 10 of the network or organisational system in which the agents are inserted operates.

We can see this by using the morphogenetic scheme (Fig. 2), which describes how the hybridisation processes depend on the reflexivity that the subjects and their social network exert on the different forms of human enhancement.

Fig. 2 The morphogenetic cycle through which identities and social relations are hybridised. Source: Designed by the author

Given an organisational context (or network) in which human persons must relate to a DTM of sentient machines, social relationships can be hybridised in various ways, and with different intensity, depending on which of the following modes of reflexivity the human person adopts (a schematic sketch follows the list):

  (a) A reflexivity that is purely dependent on the machine; in practice, the agent person relates to the machine by identifying herself with the digital code, which means that she ‘connects’ to the machine without establishing a real relationship (connection is not a social relationship); it happens, for instance, when people identify themselves with their Facebook profile;

  (b) A reflexivity that is autonomous with respect to the machine; the machine is used as a simple tool, and the human–robot relationship basically follows an analogic code;

  (c) A critical reflexivity (meta-reflexivity) in the use of the machine, a use that continuously redefines the purpose of the interaction in order to establish a more satisfying relationship; this mode of reflexivity re-entersFootnote 11 the digital/analogic distinction into what has emerged from previous actions and their outcomes;

  (d) An impeded reflexivity that is caused by an absorbing routine identification of the agent with the machine;

  (e) A reflexivity that is fractured due to the fact that the agent combines the digital code and the analogical code in a random and fuzzy way.
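Purely for illustration, the five modes listed above can be encoded as a small typology; the mapping of each mode to an expected intensity of hybridisation is a hypothetical reading of the list, not a scale given in the text.

    from enum import Enum

    class Reflexivity(Enum):
        """Modes of personal reflexivity towards sentient machines, items (a)-(e) above."""
        DEPENDENT = "identifies with the digital code (mere connection)"
        AUTONOMOUS = "uses the machine as a simple tool (analogical code)"
        META = "critically re-enters the digital/analogical distinction"
        IMPEDED = "absorbed by a routine identification with the machine"
        FRACTURED = "mixes digital and analogical codes randomly"

    # Hypothetical ordering of how strongly each mode exposes the person to a
    # hybridisation dependent on the DTM (the high/medium/low labels are assumed).
    EXPECTED_HYBRIDISATION = {
        Reflexivity.DEPENDENT: "high",
        Reflexivity.IMPEDED: "high",
        Reflexivity.FRACTURED: "medium",
        Reflexivity.AUTONOMOUS: "low",
        Reflexivity.META: "low (reflexively governed)",
    }

    for mode in Reflexivity:
        print(f"{mode.name:10s} -> {EXPECTED_HYBRIDISATION[mode]}")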

Take the case of domestic ICTs. They constitute reference points around which identity, gender, and intersubjectivity are articulated, constructed, negotiated, and contested. They are points through which people construct and express definitions of themselves and others. As Lally explains: ‘The development of the human subject (individual and collective) takes place through a progressive series of processes of externalisation (or self-alienation) and sublation (reabsorption or reincorporation). Human subjects and the (material and immaterial) objects of their sociocultural environment form a subject-object relation which is mutually evolving, and through which they form a recursively defined, irreducible entity’ (Lally 2002: 32).

These subject-object sociotechnical associations constitute the material culture of home life; they are bundled together with the affective flows of human relations and the emergence of a lifeworld in which images and emotions play a crucial role in the hybridisation of relationships because it is through them that the subjects identify with the logic of virtual/digital relationships (LaGrandeur 2015).

The Hybridisations of Identities and Social Relationships Are Interdependent

The hybridisations of human identity and social relations through DTM are connected to each other because the human being is a relational subject. The dynamics of mutual influence between identity and relations in the hybridisation processes are not linear, but proceed along very different paths.

We see the different hybridisation paths in Fig. 3.

  1. Arrow 1 indicates the two-way interactions between the natural being and the use of technology (regardless of the social context). From a phenomenological point of view, the relational context is always there, but in many cases the agents do not take this into account, as often happens in medical settings and biological laboratories.

     Examples of type 1 hybridisations are those widely used in medicine, where the question is whether the technology helps repair damaged organs or cure diseases. Baker (2013) generally refers to this type, obviously considering the individual as a natural being. But even when she speaks of human enhancement in the proper sense, i.e. as making someone or something ‘more than normal’ by applying digital technological devices that go well beyond therapy, she does not consider the implications on the level of the social order. Since the relational order (context) is practically ignored in her theory, Baker’s vision remains restricted to a mentalised interpretation of the hybridisation of the individual.

  2. Arrow 2 indicates the two-way interactions between the use of technology and sociality. In this case, the actor/agent is a virtual person who disregards her body-mind constitution. An example of a type 2 hybridisation is the phrase: ‘I do not use the blog; I am the blog’. Hybridisation happens through the formation of a relationship in which Ego’s identity is in his blog. There is an identification of subject and object: no longer, ‘I make my blog in order to communicate’, but, ‘I am my communication’ (Stichweh 2000). The logic of the blog becomes part of the logic of the Subject, who reflects according to the logic of what was once an instrumental object (a means) and has now become his way of relating (a norm), which reflects a value placed in the communication itself, in the presence of an undetermined goal.

     The dialogue between the ‘I’ (i.e. the Self that dialogues with itself), its ‘Me’ (i.e. what the ‘I’ has done in the past), and its ‘You’ (i.e. what the ‘I’ is willing to do in its future)Footnote 12 takes place within an interactive time register, in which time has no duration (it is neither relational nor symbolic, but purely interactional) (Donati 2011). The temporal history of the body–mind complex, its stock of knowledge (Elster 2017), does not enter the process, except for the aspects in which it has been mentalised. The identification of the human person with the blog is the outcome of the process of mentalisation, which corresponds to the fact that an AI has become part of the Subject’s Mind, forging its conscience.

  3. Arrow 3 indicates the two-way interactions between the agent as natural being and sociality. In this case, the individual acts regardless of the technology, and, therefore, there is no hybridisation. This is the case, for example, of families of elderly people who do not use digital technologies or of voluntary organisations that take care of the poor and fragile without resorting to any technological tool.

  4. Arrow 4 indicates those processes in which the agents are involved in all three orders of reality. Their body–mind complex and their sociality are mediated by technology, which is what happens ‘normally’, in fact, in any institution and organisation. The kind and degree of hybridisation depend on the network structure in which the agency is embedded.

Fig. 3 The hybridisation of the human person (her identity and social relations) due to the influence of DTM on the natural, practical, and social realities. Source: Designed by the author

It is important to emphasise that hybridisation of social identities and organisations through the use of technology happens by involving the mind–body complex in the context of relationships, without the mind being separated from the body. The mind of agents/actors is modified through the perception and activation of those social relationships that are possible only through the body.

Victoria Pitts-Taylor offers an account of how the mind works on the basis of a brain that is ‘complex and multiple, rather than determined and determining’ (Pitts-Taylor 2016: 8). Drawing on work by feminists, queer theorists, and disability theorists, she offers an understanding of bodies and cognition that can incorporate the cognitive differences that result from differences in embodiment and that can recognise the social shaping of both bodies and cognition. She notes that scientists have discovered that some neurons seem to respond both to our performance of an action and to our observation of others performing that action. These mirror neurons have been theorised to underlie social relationships, especially ‘mind reading’ and empathy. We need to do a better job of recognising the complexity and variety of social relationships. In particular, the assumption that the existence of mirror neurons shows that we are naturally in tune with others and empathetic to them leaves social neuroscientists unable to account for why our empathy for others is so often selective and so often fails. This is an example of how social neuroscience can take the embodiment of brain and social relationship seriously,Footnote 13 differently from those attempts to create digital identities as avatars that simulate the human without being human (for example, the Soul Machines company), thus producing a morphic sociability devoid of real humanity (Pitts-Taylor 2016).

Let me now move from the micro to the meso and macro levels of the social fabric, to consider the institutional and organisational hybridisation processes.

The Emergence of Hybridised Institutions and Organisations

I call hybridised organisations and institutions those that pursue a social configuration in which advanced digital technologies are granted a certain degree (from minimal to maximal) of decisional and operational autonomy. Technology (AI/robotics) takes on an autonomous and decisive role in managing the organisation of roles and relations between the members of the organisation. The digital logic with which hybrid organisations operate is that of increasing opportunities, conceived, however, not within the framework of their relational implications, but according to the maximum useful variability in terms of system efficiency.

The relational approach to social organisations can show why and how AI and robots cannot replace humans because of the specific generative character of interhuman relations. In fact, the utilitarianism of efficiency refers to the relationships between an actor/agent and ‘things’, while the search for relational goods (Donati 2019)—or the avoidance of relational evils—implies relationships between human beings, which, unlike algorithms, are generative of meta-reflexive solutions for problems of human relationships.Footnote 14

Let me give an example of how an algorithm can generate relational evils. On 21 November 2017, the algorithm of the multinational company Ikea fired a worker at its megastore in a small town near Milan. Marica, a 39-year-old mother, separated from her husband, with two young children, one of whom is disabled, was dismissed because she did not observe the new work shift assigned to her by the algorithm. The algorithm had ordered her to show up at 7 a.m.; instead, she arrived at 9 o’clock, according to the old work shift, because she had to look after the children and, in particular, take the disabled child to therapy. The woman had previously explained to the manager that she could not work that shift, and the manager had said that he would consider her situation, but the algorithm worked on its own and she was dismissed. The company did not review its decision and instead continued to dismiss other workers on the grounds that they did not comply with the indications of the algorithm.

Undoubtedly, the algorithm showed a ‘personal’ decision-making capacity of its own (some call it an ‘electronic person’), but it was certainly not a relational subject. Clearly, the algorithm simply followed its procedural rules, neglecting the needs of the people and their relational context, which were unpredictable for it. If so, we can say that the algorithm is not a person, given the fact that a person is an ‘individual-in-relation’ (relationality being constitutive of the individual), although it is presented as such by management. The algorithm’s personality is a convenient managerial fiction.

Looking to the future, it will in principle be possible, in my opinion, to build more sophisticated AI/robots that can take into account people’s needs and their relationships. However, in addition to modifying the instructions provided to the algorithm accordingly, it will always be necessary for its operations to be supervised by a management that, by adopting a relational steering approach, is able to deal with the individual problematic cases and with the complexity and diversity of contingencies that the algorithm cannot handle. Without such relational steering, the hybridisation of organisational relationships due to increasing automation will only mean exploiting people and dismissing them by blaming the algorithm for making inhuman decisions and apologising for not being able to do otherwise.

From a more general view, as Teubner writes, ‘Today, with the massive emergence of virtual enterprises, strategic networks, organisational hybrids, outsourcing and other forms of vertical disaggregation, franchising and just-in-time arrangements, intranets and extranets, the distinction of hierarchies and markets is apparently breaking down. The boundaries of formal organisations are blurring. This holds true for the boundaries of administration (hierarchies), of finance (assets and self-financing), of integration (organisational goals and shared norms and values) and of social relations (members and outside partners). In formal organisations, membership becomes ambiguous, geographical boundaries do not matter much anymore, hierarchies are flattened, functional differentiation and product lines are dissolved’ (Teubner 2002: 311).

Hybrids raise problems of conflict between divergent normative arrangements. As a strategy to deal with these changes, Teubner recommends a ‘polycontexturality which combines heterarchy with an overarching unity’ (Teubner 2002: 331) assuming that this organisational solution would represent the new institutional logic capable of avoiding collisions between spheres ruled by incompatible norms.

I take Teubner’s proposal as an example of an attempt to preserve the human (with its regulatory requirements) alongside the acceptance of the entry of new hybrids, through general rules that allow different norms to coexist in different domains (Teubner 2006b). Unitas multiplex is his keyword for preserving the integration of society, supposing that human beings and actants, human relations and artificial relations, could coexist within a neo-functional differentiation architecture under the aegis of the DTM.

I have serious doubts about the tenability of this perspective (although it worked in the seventeenth century as the Hobbesian solution to the problem of social order). I think that it is an impracticable solution in a networked society governed by the DTM. In any case, it does not work for a ‘society of the human’. Like Hobbesian (Leibnizian) rationalism, Teubner’s idea of a constitutionalisation of the private hybridised spheres does not address the issues of the relations between human and non-human or the relations between divergent normative spheres. Teubner’s perspective simply avoids the relational issue. In short, it has all the limitations and defects of a multicultural doctrine that ignores the reality of what lies in between opposite spheres that have incompatible normative orders. It avoids the issue of how far the hybridisation processes can go and to what extent they can affect the human.

The same difficulty is present in the Luhmannian theory that seems to do a good job of interpreting the hybridisation processes insofar as it places all cultures under the aegis of a conception of society as an ‘operating system’ (Clam 2000). According to Luhmann (1995), all systems, organisations, and interactions are forced to use a binary functional code that is precisely the one through which the DTM proceeds. Hybridisation follows a functional code. For this reason, he believes that the humanism of the old Europe is unsustainable and, therefore, sees no other way than that of considering the human (especially the human of the Western tradition) as a residual entity fluctuating in the environment of the DTM. Are there no alternatives? I think that there are. We must examine the possible scenarios.

Three Scenarios Dealing with the Processes of Hybridisation

The digital transformation of society is destined to produce different types of hybrids through different types of social morphogenesis. I would like to summarise them in three scenarios: adaptive morphogenesis, turbulent morphogenesis, and relationally steered morphogenesis.

  1. Adaptive MG producing hybrids by trial and error: this is the scenario of a society that adapts itself to the hybridisation processes produced by DTM in an opportunistic way; it is configured as an afinalistic infosphere—without preestablished goals, as in Floridi (2015)—that tries to use the technologies knowing that they have an ambivalent character; they allow new opportunities but also involve new constraints and possible pathologies; therefore, it is essentially engaged in developing self-control tools to limit emerging harms (theories of harm reduction).

  2. Turbulent MG favouring mutations: this is the scenario of a society that operates for the development of any form of hybridisation; it becomes ‘normal’ to anthropomorphise robots, as well as to robotise the human body: in principle, it is run by anormativity and anomie (lack of presupposed moral norms) and openness to mutations, understood as positive factors of ‘progress’ (theory of singularity); it operates through ceaseless, unrepeatable, intermingling processes of relational flows with ‘confluence of relating’ (Shotter 2012).

  3. Relationally steered MG aiming to configure technologies in order to favour relational goods: this is the scenario of a society that tries to guide the interactions between human subjects and technologies by distinguishing between humanising and non-humanising forms of hybridisation. The aim is to produce social forms in which the technologies are used reflexively in order to serve the creation of relational goods. This requires that producers and consumers of technologies work together interactively, that is, that they are co-producers and partners in the design and use of technologies, careful to ensure that technologies do not completely absorb or replace human social relations, but enrich them. This alternative is certainly much harder to pursue than harm reduction, but it is not impossible, and it is the one that leads to a good life.Footnote 15

Hybridisation cancels every dualism. In particular, it erases the dualism between the system and the lifeworld (theorised by Habermas) and replaces it with a complex relational system in which each action/communication must choose between different causal mechanisms (Elder-Vass 2018).

If technological enhancement is to remain human, it must be able not only to distinguish between the different causal mechanisms, but also to choose the most productive social relationships, those that generate relational goods. New technologies do not only generate unemployment, as many claim. They also release energy for the development of many jobs in the field of virtual reality and make it possible to put human work into those activities that have a high content of care, such as education, social assistance, and health, or a high content of cultural creativity.

Laaser and Bolton (2017) have shown that the introduction of new technologies associated with the advance of performance management practices has eroded the ethics-of-care approach in banking organisations. Under electronic performance management monitoring in bank branches, in particular, co-worker relationships have become increasingly objectified, resulting in disconnected and conflict-ridden forms of engagement. This research reveals the multilayered and necessarily complex nature of co-worker relationships in a changing, technologically driven work environment and highlights the necessity for people to defend the capacity to care for others from the erosive tendencies of individualised processes. Within the relational approach, this entails assessing the way in which an organisation uses AI/robots to enhance human relations from the viewpoint of what I call ‘ODG systems’ aimed at the relational steering of digitised organisations.Footnote 16

ODG systems are based on the sequence: Relational observation (O) → Relational diagnosis (D) → Relational guidance (G). The principles according to which ODG systems operate are modalities that orient those who have the responsibility of guiding a network of relations among subjects to operate interactively on the basis of cooperative rules that allow the subjects, supported by AI/robots, to produce relational goods. Agency belongs to all the parties, as in an orchestra or a sports team, where everyone follows a cooperative standard that is used to continually regenerate a non-hierarchical and non-individualistic social structure, and consequently modifies the behaviour of the individual subjects, who are driven to generate relational goods. The cooperative norm is obviously inserted as a basic standard into the AI/robot that supports the agents of the network.

Let us now look at some details to explain the acronym ODG.

(O) Relational observation aims to define the problem to which the AI must respond in terms of a problem that depends on a certain relational context. Therefore, it favours the meso level (at which relational goods can be produced). (D) Relational diagnosis aims to define the satisfactory (or unsatisfactory) conditions with respect to the effects produced by the way the AI works on the relational context (i.e. whether the AI contributes to producing relational goods instead of relational evils). (G) Relational guidance aims to modify the AI and its way of working so as to support a relational context that can be mastered by people in order to generate relational goods.
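A minimal sketch of how the O → D → G sequence might be organised is given below. The function names, and the choice to represent the relational context as a simple list of ties with a quality score, are illustrative assumptions rather than an implementation prescribed by the ODG approach.

    from dataclasses import dataclass

    @dataclass
    class Tie:
        """One relation in the observed network (hypothetical representation)."""
        source: str
        target: str
        quality: float  # > 0 suggests a relational good, < 0 a relational evil (assumed)

    def observe(network: list) -> dict:
        """(O) Relational observation: describe the relational context in which the AI acts."""
        return {"ties": len(network),
                "avg_quality": sum(t.quality for t in network) / len(network)}

    def diagnose(observation: dict, threshold: float = 0.0) -> str:
        """(D) Relational diagnosis: is the AI's effect on the context satisfactory?"""
        return ("relational goods prevail" if observation["avg_quality"] > threshold
                else "relational evils prevail")

    def guide(diagnosis: str) -> str:
        """(G) Relational guidance: adjust the AI or refer problematic cases to humans."""
        if diagnosis == "relational evils prevail":
            return "reconfigure the AI and hand problematic cases to human supervisors"
        return "keep the current configuration under periodic relational review"

    # Toy usage, e.g. a scheduling algorithm observed within a workplace network.
    network = [Tie("algorithm", "worker_A", -0.8), Tie("manager", "worker_A", 0.4)]
    print(guide(diagnose(observe(network))))

In the self-driving car example discussed below, the same three steps would correspond to computing the relationships given in the context, anticipating possible clashes, and regulating them so as to make driving safer.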

An example of product innovation can be that of systems that try to regulate the AIs of self-driving cars. The AI must be constructed in such a way as to take into account the main parameters of the relational context in which it operates. The AI must see the objects and people around the car and assess their relationships with respect to the person who sits in the driver’s seat, so as to put her in a position to intervene in situations of high contingency. Relational observation implies the ability of the AI to calculate the relationships given in a context and those possible in its short- to medium-range evolution. Relational diagnosis concerns the ability to perceive possible clashes in case the relationships with objects and people become dangerous while the car is on the road. Relational guidance means the ability to regulate these relationships in order to make driving safer.

At the organisation level, we can consider any company that uses AI/robots in order to produce satisfactory goods for customers. The AI/robots that are used must have similar characteristics to those just mentioned for the self-driving car. It is up to those who organise the production, distribution, and sale of business products to configure the AI/robots so that they have the ability to contextually relate and evaluate the progress of relationships in various sectors within the company and in the context of sales and of consumption. It is not enough to improve the cognitive intelligence of the single AI/robot. It is necessary to build AI/robots that are able to regulate the progress of the network of relations between producers, distributors, and consumers of company products.

I do not know of practical examples already in place, but the idea of ‘distributed responsibility’ among all the actors in the network that produces, distributes, and uses the goods produced points in this direction. It requires that the AI/robots be constructed and monitored within a project of (1) observation of their relational work, (2) diagnosis of deviations from satisfactory procedures and results, and (3) orientation of the management to operate according to a relational guidance program.

The Issue of Social Regulations

The processes of hybridisation of human relations, and of the relationships between humans and the environment, due to digital technologies proceed rapidly with the succession of generations. Everyone feels the need to control the undesirable and perverse effects of technologies on human beings and their identities. However, it is not just a matter of control. It is, above all, a matter of redefining the relationality of the human environment, that is, of integral ecology.

The EU high-level expert group on AI has proposed seven essentials for achieving trustworthy AI (EU high-level expert group on AI 2018). Trustworthy AI should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements (a schematic encoding of such a list is sketched after the bullet points below):

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.

  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable, and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.

  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.

  • Transparency: The traceability of AI systems should be ensured.

  • Diversity, non-discrimination, and fairness: AI systems should consider the whole range of human abilities, skills, and requirements, and ensure accessibility.

  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.

  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
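Purely as an illustration of what such an assessment list might look like in operational terms, the sketch below encodes the seven requirements as a simple checklist; the assessment questions and the assess helper are hypothetical and do not reproduce the official EU assessment tool.

    # Hypothetical encoding of the seven requirements for trustworthy AI as a
    # checklist; the one-line questions paraphrase the bullet points above.
    REQUIREMENTS = {
        "Human agency and oversight": "Does the system support, rather than limit, human autonomy?",
        "Robustness and safety": "Are the algorithms secure, reliable, and robust across the life cycle?",
        "Privacy and data governance": "Do citizens keep full control over their own data?",
        "Transparency": "Is the system traceable?",
        "Diversity, non-discrimination, and fairness": "Is the whole range of human abilities considered?",
        "Societal and environmental well-being": "Does the system enhance positive social change and sustainability?",
        "Accountability": "Are mechanisms in place to ensure responsibility for outcomes?",
    }

    def assess(answers: dict) -> list:
        """Return the requirements whose assessment question was not answered positively."""
        return [name for name in REQUIREMENTS if not answers.get(name, False)]

    # Toy usage: an imaginary system that satisfies every requirement except transparency.
    example = {name: True for name in REQUIREMENTS}
    example["Transparency"] = False
    print("Unmet requirements:", assess(example))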

As for the use of ICTs, the ethical principles proposed by the ‘European Civil Law Rules in Robotics’ (Nevejans 2016), which should guarantee greater security for humanity, have been stated in the following terms:

  • Protect people from any possible damage caused by ICTs.

  • Ensure that the person is always able to use ICTs without being obliged to carry out what they request.

  • Protect humanity from violations of privacy committed by ICTs.

  • Maintain control over information captured and processed by ICTs.

  • Avoid creating a sense of alienation towards ICTs among certain categories of people.

  • Prevent the use of ICTs to promote the loss of social ties.

  • Guarantee equal opportunities to access ICTs.

  • Control the use of technologies that tend to modify the physical and mental characteristics of the human person.

The House of Lords (2018) has put forward five overarching principles for an AI (ethical) Code (of conduct):

  • Artificial intelligence should be developed for the common good and benefit of humanity.

  • Artificial intelligence should operate on principles of intelligibility and fairness.

  • Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families, or communities.

  • All citizens have the right to be educated to enable them to flourish mentally, emotionally, and economically alongside artificial intelligence.

  • The autonomous power to hurt, destroy, or deceive human beings should never be vested in artificial intelligence.

It is evident that all these indications, suggestions, and statements of principle are important, but they risk remaining mere wishful thinking. In my opinion, these general principles (1) are difficult to implement in the absence of a clear anthropology and (2) are not at all sufficient to prevent the negative and perverse effects of the DTM, because they do not sufficiently take into account the relational nature of the human person and of the social formations in which her personality develops. To avoid the new forms of alienation generated by the hybridisation of the human, we need a new relational thought that is able to re-enter the specific distinctions of the human, its qualities and properties, into people’s identities and social relations.

Conclusions

With the fourth technological revolution, social identities, relations, and organisations are forced to take shape in the environment of a Digital Matrix that works through a symbolic code that tends to replace the ontological, ideal, moral, and theological matrices that have structured societies in the past. As a form of Technological-Gnosis, its peculiarity is that of making the boundaries between human and non-human labile and crossable in every way in order to foster hybrids. Hybrids, however, are not random and purely contingent entities. They stem from complex interactional networks in which social relations are mediated by the DTM. The processes of hybridisation are selective and stratified according to the ways in which the human/non-human distinction is thought and practised. Three possible scenarios of hybridisation can be outlined: adaptive, turbulent, and relationally steered.

As a criterion for evaluating hybridisation processes, I have proposed examining how digital technologies mediate the transformations of people’s mind-body identities and of their sociality, so as to assess when such mediations produce those relational goods that lead to a virtuous human fulfilment and when, instead, they produce relational evils.

The justification of this perspective is based on the fact that human beings are at the same time the creators of society and its product. They are the parents and the children of society. As a consequence, it is not the physical world (Anthropocene) and/or the artificial world (AI/robots) that can produce the human being as a human being, but society, which, from the point of view of my relational sociology, consists of relationships, i.e. ‘is’ (and not ‘has’) relationships. Therefore, the quality and causal property of what is properly human comes into existence, emerges and develops only in social life, that is, only in and through people’s sociability, which is however embedded in the practical order and has roots in the natural order (see Fig. 3). In fact, only in sociality does nature exist for the human being as a bond with the other human being. The vital element of human reality lies in the relationality, good or bad, that connects Ego to Other and Other to Ego. The proper humanisation of the person is achieved only through the sociality that can be enjoyed by generating those relational goods in which the naturalism of the human being and his technological enhancement complement each other.