The Unbearable Disembodiedness of Cognitive Machines

Digital systems nowadays make up the communication and social infrastructure and fill every parcel of space and time, affecting our lives both professionally and personally. However, these "cognitive machines" are completely detached from human nature, whose comprehension is beyond their capabilities. It is therefore our duty to ensure that their actions respect human rights and the values of a democratic society. Education is one of the main tools to attain this goal, and a generalized preparation in the scientific basis of digital technologies is a required element. Moreover, it is fundamental to understand why digital automation has a nature completely different from traditional industrial automation, and to develop an appreciation for human and social viewpoints in the development and deployment of digital systems. These are the key issues considered in this chapter, which discusses the theme of informatics education from the perspective of Digital Humanism. Given the breadth of the digital transformation's impact on our society, it is important to assume a historical viewpoint, so as to better understand how to develop the most appropriate educational approach to accompany this epochal challenge.

These are automatic systems that, by manipulating signs whose meaning they ignore, according to instructions whose meaning they equally ignore, transform data that have a sense for human beings, in a way that is significant to them.
The comprehension of this aspect is crucial to educate for Digital Humanism appropriately. Informatics can be aptly defined as the science of the automated processing of representations, since it does not deal with "concrete" objects but with their representations, which are built by means of a set of characters taken from a finite alphabet. When human beings look at a representation, they usually tend to associate it with a meaning, which very often depends on the subject and is shaped by shared social and cultural conventions. For example, the sequence of characters "camera" will evoke one meaning to English speakers and a different one to Italians. Devices produced by the technological developments of informatics, instead, deal with representations without any comprehension of their meaning. Moreover, they process these representations, that is, they receive input representations and produce output representations (possibly through the production of a very long series of intermediate or ancillary representations) by acting in a purely automatic (or mechanical) way (Denning 2017; Nardelli 2019). Again, they do not have any comprehension of the meaning of the transformation processes they execute. However, in the end, these machines carry out operations that, considered from the viewpoint of human beings, are of a cognitive nature and meaningful to them.
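The point can be made concrete with a minimal sketch (an illustrative example, not taken from the chapter): a program that transforms the representation "camera" purely by rules over its characters, never consulting the meaning that English or Italian observers attach to it.

```python
# Illustrative sketch: the machine sees only a sequence of characters
# from a finite alphabet; all meaning belongs to the human observer.

def code_points(s: str) -> list[int]:
    """The machine's view of a word: just numbers encoding characters."""
    return [ord(c) for c in s]

def reverse_representation(s: str) -> str:
    """A purely mechanical transformation rule; it never consults meaning."""
    return s[::-1]

word = "camera"  # a photographic device to English speakers, a room to Italians
print(code_points(word))            # [99, 97, 109, 101, 114, 97]
print(reverse_representation(word)) # "aremac"
```

Whether "camera" names a device or a room makes no difference to either function: the transformation is identical, and only the human reading the output supplies a sense.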
These machines are now pervading the entire society, and this represents a real revolution, the "informatics revolution", called the third "revolution of power relations" (Nardelli 2018) because, for the first time in the history of humanity, cognitive functions are carried out by machines. This third revolution "breaks" the power of human intelligence, creating artifacts that can mechanically replicate cognitive actions, which until now were a characteristic of people.
Every scholar in our field is aware that these cognitive machines are characterized by a meaningless process of transformation (meaningless from the point of view of the mechanical subject carrying out the transformation) which produces a meaningful outcome (meaningful in the eyes of the human observer of the transformation). However, outsiders and common people usually neglect this. To properly understand the challenges education for Digital Humanism faces, a full and deep comprehension of this aspect is absolutely required. That is why it is important to discuss this revolution in the historical perspective of other equally disruptive revolutions in the history of humankind, to understand similarities and differences among them.
The invention of the printing press was a technical revolution, because it made it possible to produce texts in a faster and cheaper way, and a social one, because it made possible a more widespread circulation of knowledge. Ultimately, what happened was the first revolution in power relations: authority was no longer bound to the spoken word; it was no longer necessary to be in a certain place at a certain time in order to know and learn from the living voice of the master. Knowledge always remains a great power, but now this power is no longer confined to the people who possess it or to those who are able to be close to them in time and space. The replicability of the text implies the replicability, at a distance of time and space, of the knowledge contained in it. All those who can read can now have access to knowledge. This set in motion epochal social changes: the diffusion of scientific, legal, and literary knowledge gave a huge impulse to the evolution of society, which became increasingly democratic.
In fact, in the space of two and a half centuries, almost 800 million books printed in Europe caused an irreversible process of social evolution. Scientific knowledge, revolutionized by the Galilean method, spread throughout Europe thanks to the printing press and constituted one of the enablers of the subsequent revolution, the industrial one, identified as the second revolution in power relations.
Started in the eighteenth century, the Industrial Revolution was equally disruptive: the availability of industrial machines made the physical work of people replicable. Human arms are no longer needed, because machines operate in their place. A technical revolution is achieved, because artifacts are replicated faster and in the absence of human beings. Machines can produce day and night without getting tired, and they can even produce other machines. They are amplifiers and enhancers of the physical capabilities of human beings. A social revolution is obtained: physical limitations to movement and action are broken down. A single person can move huge quantities of dirt with a bulldozer, travel quickly with a car, and talk to anyone in the world through a telephone. The evolution and progress of human society are therefore further accelerated by the possibility of producing physical objects faster and more effectively, not to mention the consequences in terms of transporting people and things. The power relation that is revolutionized in this case is that between humankind and nature: humanity subjugates nature and overcomes its limits. One can quickly cross the seas, sail the skies, harness water and fire, and move mountains.
The printing press revolution had given humanity an extra gear on the immaterial level of information; the Industrial Revolution did the same for the material sphere. The world is filled with "physical artifacts" (the industrial machines) that begin to affect the nature of the planet in an extensive and deep way.
Then, in the middle of the twentieth century, after the realization of about 800 billion industrial machines, the third revolution in power relations, that of information technology (IT), slowly begins. At first, it seems to be nothing more than an evolved variant of the automation of physical work and production processes caused by the Industrial Revolution, but after a few decades, we begin to understand that it is much more than that, because it affects the cognitive level and not the physical one. We are no longer replicating the static knowledge carried by spoken words or the physical strength of people and animals, but that "actionable knowledge" which is the real engine of development and progress. This expression denotes the kind of knowledge which is not just a static representation of facts and relationships but also a continuous processing of data exchanged dynamically and interactively between a subject and the surrounding context.
Because of the informatics revolution, this actionable knowledge (i.e., knowledge ready to be put into action) is reproduced and disseminated in the form of software programs, which can then be adapted, combined, and modified according to specific local needs. The nature of the artifacts, of the machines we produce, has changed. We no longer have concrete machines, made of physical substance: we now produce immaterial machines, made of abstract concepts, ultimately boiling down to configurations of zeroes and ones. We have the "digital machines", born, thanks to the seminal work of Alan Turing (Turing 1936), as pure mathematical objects, capable of computing any function a person can compute, and which can be made concrete by physically representing them in some form, no matter which. Indeed, beyond the standard implementation of digital machines by means of voltage levels that, in some physical electric circuit, give substance to their abstract configurations, we also have purely mechanical implementations, with levers and gears, or hydraulic ones.
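Turing's abstract machine can itself be sketched in a few lines of code (an illustrative simulator with a hypothetical rule table, not an implementation from the chapter): a finite table of rules that rewrites symbols on a tape, here mechanically adding 1 to a binary number.

```python
# A minimal sketch of Turing's abstract machine: a finite rule table
# rewriting symbols on an unbounded tape. The rules themselves are an
# illustrative choice (binary increment), not anything specific to a
# real computer.

def run_turing_machine(rules, tape, state, blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):  # guard against non-halting rule tables
        if state == "halt":
            break
        symbol = cells.get(pos, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Binary increment: scan right to the end of the number, then turn
# trailing 1s into 0s until a 0 (or a blank) becomes 1.
rules = {
    ("seek", "0"): ("0", "R", "seek"),
    ("seek", "1"): ("1", "R", "seek"),
    ("seek", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run_turing_machine(rules, "1011", "seek"))  # "1100" (11 + 1 = 12)
```

Nothing in the table "knows" it is doing arithmetic: the machine only rewrites configurations of symbols, and any physical substrate able to realize these rewritings, whether electronic, mechanical, or hydraulic, realizes the same machine.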
Even though these digital machines clearly require some physical substrate to be able to operate, they are no longer physical artifacts. They are "dynamic cognitive artifacts": frozen action that is unlocked by its execution in a computer and generates knowledge as a result of that execution. The static knowledge of books becomes dynamic knowledge in programs: knowledge capable of automatically producing new knowledge without human intervention. Therefore, most appropriately, they have been defined "cognitive machines" (Nardelli 2018).

Cognitive Machines
These machines are reminiscent of those that, in the course of the Industrial Revolution, made possible the transformation from the agricultural society to the industrial one. Actually, they are different and much more powerful. Industrial machines are amplifiers of the physical strength of humans; the digital machines produced by the informatics revolution are cognitive machines (or "knowledge machines"), amplifiers and enhancers of people's cognitive functions. They are devices that boost the capabilities of that organ whose function is the distinctive trait of the human being.
On the one hand, we have a technical revolution, that is, faster data processing; on the other hand, we also have a social revolution, that is, the generation of new knowledge. In this scenario, what changes is the power relation between human intelligence and machines. Humankind has always been, throughout history, the master of its machines. For the first time, this supremacy is challenged.
Cognitive activities that only humans, until recently, were able to perform are now within the reach of cognitive machines. They started with simple things, for example, sorting lists of names, but now they can recognize whether a fruit is ripe or a fabric has defects, just to cite a couple of examples enabled by that part of informatics that goes under the name of artificial intelligence. Certain cognitive activities are no longer the exclusive domain of human beings: it has already happened in a large set of board games (checkers, chess, Go, etc.), standard fields for measuring intelligence, where computers now regularly beat the world champions. It is happening in many work activities that were once the exclusive prerogative of people and where the so-called bots, computer-based systems based on artificial intelligence techniques, are now widely used.
There are at least two issues, though, whose analysis is essential in the light of the educational viewpoint discussed in this chapter.
The first issue is that these cognitive machines have neither the flexibility nor the adaptability to change their way of operating when the context they work in changes. It is true that modern "machine learning"-based approaches give them some possibility to "sense" changes in their environment and to "adapt" their actions. However, this adaptation space has its limits: designers must have somehow foreseen, in one way or another, all possible future scenarios of change. People are inherently capable of learning what they do not know (whereas cognitive machines can only learn what they were designed for) and have learned, through millions of years of evolution, to flexibly adapt to environmental changes of an unforeseen nature (while knowledge machines can, once again, only adapt to changes of foreseen types). We cannot therefore let them work on their own, unless they operate in contexts where we are completely sure that everything has been taken into account. Games are a paradigmatic example of such scenarios. These cognitive machines are automatic mechanisms, giant clocks that tend to behave more or less always in the same way (or within the designed guidelines for "learning" new behaviors). This is why digital transformation often fails: people think that, once they have built a computer-based system, the work is completed. Instead, since no context is static and immutable, IT systems not accompanied by people able to adapt them to the evolution of operational scenarios are doomed to failure.
The second problem is that these cognitive machines are completely detached from what it means to be a human being. Someone may see this as a virtue; on the contrary, it is a huge flaw. There is no possibility of determining a single best way of making decisions. Those who think that algorithms can govern human society in a way that is best for everyone are deluded (or have hidden interests). Since the birth of the first forms of human society, the task of politics has been to find a synthesis between the conflicting needs that always exist in every group of people. Moreover, the production of this synthesis requires a full consideration of our human nature. The only intelligence that can make decisions appropriate to this context is the embodied intelligence of people, not the incorporeal artificial intelligence of cognitive machines. This does not imply there is no role for cognitive machines.
Their use should remain confined to that of powerful personal assistants, relieving us from the more repetitive intellectual work, helping us avoid mistakes due to fatigue or oversight, and without leaking our personal data into the wild. People always have to remain in control, and final decisions, above all those affecting directly or indirectly other individuals and their relations, should always be taken by human beings. To discuss a current topic: it is understandable to think that, to some degree, the final decisions of judges in routine cases may be affected by extrajudicial elements, unconscious biases, and contingent situations and emotions. After all, even well-trained judges are fallible human beings. However, speculating on the basis of data correlation, as done in Danziger et al. (2011), that after lunch judges tend to be more benevolent is wrong: a more careful analysis highlighted other organizational causes for this effect (Weinshall-Margel and Shapard 2011). A cognitive machine just learning from data, without a thorough understanding of the entire process, would have completely missed the point. That is why the decision in France to forbid analytics on judges' activity is well taken (Artificial Lawyer 2019). Incorporeal decision systems convert a partial description of what happened in the past into a rigid prescription of how to behave in the future, robbing human beings of their most precious and most characteristic quality: free will.
Cognitive machines are certainly useful for the development of human society. They will spread more and more, changing the kind of work executed by people. This has already happened in the past: in the nineteenth century, more than 90% of the workforce was employed in agriculture; now it is less than 10%. It is therefore of the utmost importance that each person be appropriately educated and trained in the conceptual foundations of the scientific discipline that makes the construction of such cognitive machines possible. Only thus will everyone be able to understand the difference between what they can do and what they cannot and should not do.

A Broader Educational Horizon
Education on the principles of informatics should start from the early years of school (Caspersen et al. 2019). The key vision that a digital computing system operates without any comprehension, on the system's part, of what is processed and how it is processed needs to accompany the entire education process. Moreover, it should always go hand in hand with the reflection that the process of modeling reality in terms of digital data, and of processing them by means of algorithms, is a human activity. As such, it may be affected by prejudice and ignorance, both of which may, at times, be unconscious or unrecognized.
Only in this way, in fact, will it be possible to understand that every choice, from the very first ones regarding which elements to represent and how to represent them, to those fixing the rules for the processing itself, is the result of a human decision process and is therefore devoid of the absolute objectivity that too often is attributed to algorithmic decision processes.
Moreover, as Giuseppe Longo has observed, the fundamental distinction introduced by Turing between hardware and software is a "computational folly" when applied to living entities and society (Longo 2018). Firstly, because in the biological world there is no such distinction between hardware and software: DNA, the code of life, constitutes its own hardware, and the rewriting of digital representations happening in digital machines is different from the transcription from DNA to RNA. Secondly, because fluctuations are completely absent from the discrete world where Turing machines operate, while they play an essential role in the complex dynamical systems all around us. As first noted by Henri Poincaré, this may result in the unpredictability of their evolution, even though they are deterministically defined (Poincaré 1892). Thirdly, because every piece of software is only able to represent an abstraction of a real phenomenon. While this abstraction can provide valuable indications regarding its dynamics, considering the representation as the phenomenon itself, as scientists like Stephen Wolfram do when claiming that "the Universe is a huge Turing machine" (Wolfram 2013), is as wrong as mistaking the map for the territory. And, finally, because digital systems, once set in the same starting conditions within the same context, will always compute identically the same result, even for those complex systems where (as Poincaré proved) this is physically absurd. "Computer networks and databases, if considered as an ultimate tool for knowledge or as an image of the world," writes Longo, "live in the nightmare of exact knowledge by pure counting, of unshakable certainty by exact iteration, and of a 'final solution' of all scientific problems."
To complete this discussion, note that the most recent developments in the theory of computation (van Leeuwen and Wiedermann 2012) have shown that models more powerful than the Turing machine are needed to model interactivity with external agents, the evolution of the machine's initial program, and unbounded operation over time, three aspects that characterize modern computing systems. Note, though, that while these approaches offer more appropriate formal models to describe what happens in the modern world of continuously running digital devices interacting with people, they require the introduction of components that are Turing-uncomputable. Therefore, understanding the difference between the (mechanical) role played by digital computing systems and the (uncomputable) behavior exhibited by human beings is essential to be able to use cognitive machines to increase the well-being of human beings and not to oppress them.
On the other side, we have moved a large part of our lives into the realm where these disembodied cognitive machines rule. Consequently, our existence now develops not only along the usual relational dimensions (economic, juridical, cultural, etc.) but is also articulated in this incorporeal dimension of "representations", which is more and more relevant from a social point of view. Humanity has been recording data about the world for thousands if not tens of thousands of years. However, from being a completely negligible component of our existence, representations of data have become a relevant and important part of it. As the health emergency of 2020 has unfortunately taught us, we cannot disregard them any longer. They have become an integral and constitutive component of our personal and social life. Hence the necessity of giving protection to people's rights not only as regards their bodies and their spirits but also their digital projections (Nardelli 2020).
As a side note, the disembodiedness of cognitive machines has a dual counterpart in the fact that this digital dimension of our existence is populated by "life forms" which we lack the senses to be aware of. Digital viruses and worms, which are no more benign towards our "digital self" than their biological counterparts are towards our physical bodies, continue to spread at an alarming rate without us being able to counter them effectively. Indeed, we need the digital counterpart of those hygiene rules that played such a big role in the improvement of living conditions in the twentieth century (Corradini and Nardelli 2017). Once again, it is only through education that we can make the difference, and it has to start as early as possible.
While the general education of citizens taking place in school should be focused on the above principles, at the tertiary education level something more is needed.
We need to prepare our students in a way similar to how we train medical doctors. In the early years, they study the scientific basis of their field: physics, chemistry, and biology. In this context, universal and deterministic laws apply. Then, as they progress in their educational path, aspiring doctors begin the study of the "systems" that make up human beings (motor, nervous, circulatory, etc.), thus learning to temper and combine mathematical determinism with empirical evidence. Finally, when they "enter the field," they face the complexity of a human being in its entirety, for whom, as general practitioners well know, a symptom can be the manifestation not only of a specific disease but also of a more general imbalance. At this point, the physician will no longer be able to apply simply one of those universal laws she learned in the early years. This does not mean abdicating the scientific foundations to return to magical rites or apotropaic formulas but acting to solve the specific problem of the specific patient she is facing, in the light of the science that she has introjected. The informatician, like the doctor, must have his feet firmly planted in science, but his head clearly aimed at making people and society feel better (just as a doctor does).
More specifically, informatics students should acquire a good grounding in mathematics, algorithmics, semantics, systems, and networks, but then they should be able to solve automation problems regarding data processing (intended in its widest meaning) without making people appendages to IT systems. To support the goals of Digital Humanism, they should merge their "engineering" design capabilities with attention to human-centered needs. They should be educated to develop an appreciation for human and social viewpoints regarding digital systems. They have to tackle the challenges of digital transformation while improving the social well-being of people, and not only enriching the "owner of steam," who has every right to an adequate remuneration of his capital but not at the price of dehumanizing the end users of digital systems.
That is why we need to broaden the educational horizon of our study courses, complementing the traditional areas of study with interdisciplinary and multidisciplinary education, coming above all from the humanistic and social areas. Only in such a way will it be possible to recover the holistic vision of technological scenarios that is characteristic of a humanism-based approach, where respect for people and the values of a democratic society are the guiding forces.