1 Introduction

Digital humanism offers a new ethics for the age of artificial intelligence. It opposes what can somewhat simplistically be called “Silicon Valley ideology.”Footnote 1 This ideology is related to the original American, Puritan hope of salvation, of creating a world of the pure and righteous who have left filth and sin behind; in times of digital transformation, it is characterized by the dream of a perfectly constructed digital counterpart whose construction excludes any error, leading us into a technological utopia. The key concept here is that of artificial intelligence, charged with implicit metaphysics and theology: a self-improving, hyper-rational, increasingly ensouled system whose creator, however, is not God but software engineers who see themselves not merely as part of an industry but of an overarching movement realizing a digital paradise on earth based on transparency, all-connectedness, and non-ambiguity.

Like all technologies of the past, digital technologies are ambivalent. Digital transformation will not automatically humanize our living conditions—it depends on how we use and develop this technology. Digital humanism argues for an instrumental attitude toward digitalization: what can be economically, socially, and culturally beneficial, and where do potential dangers lurk? It considers the process of digital transformation as something to be interpreted and actively shaped by us in accordance with the core concepts of humanism. But what are the core concepts of humanism?

Humanism is understood to mean many different things: from the cultivation of ancient languages to the biblical mandate to mankind to “subdue the earth.”Footnote 2 When we speak of humanism here, it is not in the sense of a historical epoch, such as that of Italian early humanism (Petrarch), German humanism in the fifteenth and sixteenth centuries (Erasmus), and finally New Humanism in the nineteenth century (Humboldt). Nor is it a specifically Western or European cultural phenomenon, for humanistic thought and practice exist in other cultures as well. We understand by humanism a certain idea of what actually constitutes being human, combined with a practice that corresponds to this humanistic ideal as much as possible. One does not need an elaborated humanistic philosophy to realize a humanistic practice.

At the heart of humanist philosophy and practice is the idea of human authorship. Human beings are authors of their lives; as such, they bear responsibility and are free. Freedom and responsibility are two mutually dependent aspects of human authorship. Authorship, in turn, is linked to the ability to reason. The criminal law criteria for culpability converge with the lifeworld practice of moral attributions. Persons are morally responsible as authors of their lives, as accountable agents and judges.Footnote 3 This triad of reason, freedom, and responsibility spans a cluster of normative concepts that determines the humanistic understanding of the human condition and, in a protracted cultural process, has shaped both lifeworld morality and the legal order over centuries. This normative conceptuality is grouped around the phenomenon of being affected by reasons.

The core idea of humanist philosophy, human authorship, thus, can be characterized by the way we attribute responsibility to each other and thereby treat each other as rational and free beings. In order to better understand this humanist practice, we will now take a closer look at the conceptual connection between responsibility, freedom, and reason.Footnote 4

2 The Humanist Practice of Attributing Responsibility and the Conceptual Connection Between Responsibility, Freedom, and Reason

The concept of responsibilityFootnote 5 is not a concept to be considered in isolation; it is closely related to the concepts of freedom and reason and, as we will see, also to the concept of action.Footnote 6 In order to clarify which conditions must be fulfilled for responsibility to be attributed, these terms shall first be explained in more detail.

There is much to suggest that an action is reasonable/rational if and only if there are, all things considered, good reasons to perform that action;Footnote 7 for sentences like “It is reasonable/rational to perform the action h, but, all things considered, there are good reasons against doing h” or “It is unreasonable/irrational to perform action h, but, all things considered, there are good reasons for doing h,” respectively, already strike us as odd from a purely linguistic point of view. Reason/rationality can be characterized as the ability to appropriately weigh the reasons that guide our actions, beliefs, and attitudes.Footnote 8 Freedom is then the possibility to follow precisely those reasons that are found to be better in such a deliberation process; thus, if I am free, it is my reasons, determined by deliberation, that guide me to judge and act this way or that.Footnote 9

But what does it mean to be a reason for doing something? What are examples of reasons?Footnote 10

If an accident victim is lying on the side of the road, seriously injured and without help, then you have a reason to help her (e.g., by giving first aid or calling an ambulance). Or if Peter promises John that he will help him move next weekend, then Peter has a reason to do so. There may be circumstances that speak against it; but these circumstances, too, are reasons, only weightier ones, such as the reason that Peter’s mother needs his help on the weekend because she is seriously ill. But having made a promise is—at least as a rule—a reason to act in accordance with the promise.Footnote 11 The two examples clearly show two essential characteristics of reasons. Firstly, reasons are normative; for if there is a reason for an action, then one should perform this action, unless there are weightier reasons that speak against it.Footnote 12 And secondly, they are objective; by this is meant that the statement that something is a good reason cannot be translated into statements about mental states. For example, Peter still has a reason to help John with the promised move even if he no longer feels like doing so; and the reason to help the victim of the accident does not disappear just because one has other preferences or because, for example, one is of the crude conviction that the accident victim does not deserve help. There are just as few “subjective reasons” as there are “subjective facts”!Footnote 13

How is this understanding of reason and freedom relevant to the way we attribute responsibility? Responsibility presupposes, both at the level of action and at the level of will or decision, at least the freedom to refrain from the action in question and from the decision on which it is based.Footnote 14 So-called semi-compatibilism disputes this and argues, in contrast, that responsibility is possible even without freedom. This position can be traced back to two essays by the American philosopher Harry G. Frankfurt, published in the late 1960s and early 1970s, which continue to shape the debate today.Footnote 15 The Frankfurt-type examples, developed from the scenarios Frankfurt describes there, are intended to show that a person is morally responsible for her decision even if she in fact had no option other than to decide as she did. In these thought experiments, another person, the experimenter—e.g., a neurosurgeon who can follow and influence the development of the subject’s intentions, which are reflected in corresponding readiness potentials, by means of a special computer device—ensures that the decision can only be made and implemented in the sense of an alternative (to do or not to do) determined by her (the experimenter) in advance. If the subject then decides in favor of this alternative, then she is responsible for this decision, although no other decision alternative was open to her at all because of the other person’s possibility of intervention; since the subject would have decided in exactly the same way in the case of freedom of choice (i.e., without the possibility of intervention from the outside), the lack of any possibility of deciding differently is, from a semi-compatibilist perspective, irrelevant to the question of responsibility. This shows, according to this view, that responsibility requires neither freedom of action nor freedom of will. However, this argumentation overlooks the fact that in the scenario just described, we only attribute responsibility to the subject because she has chosen one of two alternatives, both of which were open to her (to do or not to do something), and thus had freedom of choice. For the question of responsibility, it is obviously decisive at what point the neurosurgeon intervenes: if the intervention only takes place after the subject has made a decision, then she had freedom of choice between two alternatives and is therefore responsible. If, in contrast, it takes place at a time when the subject is still in a deliberation process, and thus before she has made a decision, then she is not responsible, because the final decision was not made by her but is based on a manipulation by the neurosurgeon.Footnote 16 Thus, the Frankfurt-type examples do not disprove that freedom is a prerequisite for responsibility.

But are our decisions and actions really free? Actions differ from mere behavior in several ways. If, during a bus ride, passenger P1 loses her balance as a result of emergency braking in such a way that she falls on passenger P2 and the latter is injured as a result, this is described and evaluated differently than if P1 lets herself drop onto P2 and P2 suffers the same kind of injury. It is only in the second case that we attribute intentions to P1 and that we would call her role in the incident an action. In the first case, on the other hand, we would say it was an unintentional, involuntary behavior not at all guided by her intentions. Actions obviously have, besides a purely spatio-temporal behavioral component, the characteristic of intentionality.Footnote 17 Another property of actions is that they are reason-guided, i.e., the acting person always has a reason or reasons for her action;Footnote 18 actions are constituted by reasons, not necessarily by good reasons, but they are never performed without any reason. And it is because of their being constituted by reasons that actions always have an element of rationality, at least in the sense that one can always judge—unlike in the case of mere behavior, where this question does not arise at all—whether an action is rational or not; actions are, one could say, “capable of rationality”; for, as we have seen, an action is rational if and only if, all things considered, good reasons speak for it and irrational if and only if, all things considered, good reasons speak against it. The reasons we are guided by are the result of a (sometimes very short) deliberation process, in which the different reasons are weighed up against each other and which, when it is completed (and only then!), leads to a decision that is then realized by an action. In short, therefore, we can say: “No action without decision.”Footnote 19 The respective decision is necessarily free in the sense that it is conceptually impossible for it to be fixed before the conclusion of the decision process, because it is simply part of the nature of decisions that, before the decision is made, there is actually something to decide. A decision whose content is already determined before it is made is simply not a decision!Footnote 20

It is due to this ability to weigh up reasons, i.e., the ability to deliberate, that we are rational beings and that we are responsible for what we do.Footnote 21 This becomes obvious if one realizes that one can be reproached for an action but not for mere behavior: if damage is caused by a person’s mere behavior, she is not reproached for it, and we are satisfied with a purely causal description (in the above example: “Due to the forces acting on her as a result of the emergency braking, P1 fell on P2, causing injury to P2”); if, however, the damage was brought about by an action, we expect an explanation and, if possible, a justification, and that means reasons that justify this action. But one can and must justify oneself only for something for which one can also be held responsible. This leads us to the more general formulation and also to the central statement of the concept of responsibility presented here: to be responsible for something is connected to the fact that I am (can be), in principle, affected by reasons;Footnote 22 this suggests a connection between ascribing responsibility and the ability to be affected by reasons, which in turn extends the concept of responsibility beyond the realm of action to that of judgment and emotive attitudes.Footnote 23 Against this background, the conceptual connection between responsibility, freedom, and reason can be formulated as follows: because, or insofar as, we are rational, i.e., have the capacity for deliberation, we are free by exercising this capacity to deliberate, and only because, and to the extent that, we are free can we be responsible.

From the finding that our practice of attributing responsibility presupposes a certain understanding of freedom, it does, of course, not yet follow that we actually have this kind of freedom. It should be noted, however, that at least the argument that the assumption of human freedom has been refuted by the theory of physical determinism and a universally valid causal principle is not tenable. The concept of comprehensive causal explanation, according to which everything that happens has a cause and can be described as a cause-effect connection determined by laws of nature, has long been abandoned in modern physics; and even classical Newtonian physics is by no means deterministic because of the singularities occurring in it. This is especially true for modern irreducibly probabilistic physics and even more so for the disciplines of biology and neurophysiology, which deal with even more complex systems.Footnote 24

In the introduction, we characterized digital humanism as an ethics for the digital age that interprets and shapes the process of digital transformation in accordance with the core concepts of humanist philosophy and practice. Having identified these core concepts, we can now consider the theoretical and practical implications of digital humanism.

3 Conclusions

3.1 Theoretical Implications of Digital Humanism

3.1.1 Rejection of the Mechanistic Paradigm: Humans Are Not Machines

Perhaps the greatest current challenge to the humanistic view of man is the digitally renewed machine paradigm of man. Man as a machine is an old metaphor whose origins go back to the early modern era. The mechanism and materialism of the rationalist age make the world appear as clockwork and man as a cog in the wheel. The great watchmaker is then the creator who has ensured that nothing is left to chance and that one cog meshes with another. There is no room for human freedom, responsibility, and reason in this image.

Software systems have two levels of description: that of the hardware, which draws only on physical and technical terms, and that of the software, which can in turn be divided into a syntactic and a semantic level. The description and explanation of software systems in terms of hardware properties is closed: every operation (event, process, state) can be uniquely described as causally determined by the preceding state of the hardware. In this characterization, posterior uniqueness of hardware states would suffice; Turing then added prior uniqueness, so that what is called a “Turing machine” describes a process uniquely determined in both temporal directions. Transferred as a model to humans, this means that the physical-physiological “hardware” generates mental characteristics like an algorithmic system with a temporal sequence of states uniquely determined by genetics, epigenetics, and sensory stimuli and thus enables meaningful speech and action. The humanistic conception of man and thus the normative foundations of morality and law would prove to be pure illusion or a collective human self-deception.Footnote 25
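
To make the “closed” hardware-level description more tangible, the following minimal sketch shows a Turing-style deterministic state machine in which every configuration uniquely determines its successor. It is not taken from the chapter; the machine, its transition table, and the input are invented purely for illustration (Python is used for convenience):

```python
# Forward determinism in a toy Turing-style machine: every configuration
# (state, tape, head position) uniquely determines the next one, so the
# hardware-level description is "closed" in the sense discussed above.
# The machine simply flips bits until it reads a blank ("_").

# Transition table: (state, symbol) -> (new_state, symbol_to_write, head_move)
DELTA = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("halt", "_", 0),
}

def step(state, tape, head):
    """Compute the unique successor configuration."""
    symbol = tape[head]
    new_state, write, move = DELTA[(state, symbol)]  # exactly one entry applies
    tape = tape[:head] + write + tape[head + 1:]
    return new_state, tape, head + move

state, tape, head = "scan", "0110_", 0
while state != "halt":
    state, tape, head = step(state, tape, head)

print(tape)  # "1001_": the entire run was fixed by the initial configuration
```

The point of the sketch is only that, at this level of description, nothing is left open: given the initial configuration and the transition table, the whole run is determined.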

In a humanistic worldview, however, a human being is not a mechanism, but a free (autonomous) and responsible agent in interaction with other human beings and a shared social and natural world. For it is undeniable for us humans that we have mental properties, that we have certain mental states, that we have beliefs, desires, intentions, fears, expectations, etc.

3.1.2 Rejection of the Animistic Paradigm: Machines Are Not (Like) Humans

Even in the first wave of digitalization after the Second World War, interestingly enough, it was not the materialistic paradigm just described but the animistic paradigm that proved to be more effective. In 1950, Alan Turing made a contribution to this in his essay “Computing Machinery and Intelligence”Footnote 26 that is still much discussed today. The paradigm we call “animistic” takes, so to speak, the opposite interpretive path: instead of interpreting the human mind (mental states) as an epiphenomenon of material processes in a physically closed world and describing it mechanistically, the algorithmic system is now endowed with mental properties, provided its external (output) behavior sufficiently resembles that of humans, i.e., to the point of being mistaken for it. One can find this animistic view in an especially radical conception of “strong AI,” according to which there is no categorical difference between computer processes and human thought processes, such that software systems have consciousness, make decisions, and pursue goals, and their performances are not merely simulations of human abilities but realize them.Footnote 27 From this perspective, “strong AI” is a program of disillusionment: what appears to us to be a characteristically human property is nothing but that which can be realized as a computer program. The concept of “weak AI,” on the other hand, does not deny that there are categorical differences between human and artificial intelligence, but it assumes that in principle all human thinking, perception, and decision-making processes can be simulated by suitable software systems. Thus, the difference between “strong AI” and “weak AI” is the difference between identification and simulation.

If the radical concept of “strong AI” were about to be realized, we should immediately stop its realization! For if this kind of “strong AI” already existed, we would have to radically change our attitude toward artificial intelligence: we would have to treat strong AI machines not as machines but as persons, that is, as beings who have human rights and human dignity. To switch off a strong AI machine would then be as bad as manslaughter.

It is a plausible assumption that computers as technical systems can be described completely in a terminology that contains only physical terms (including their technical implementation). There is then no remainder. A computer consists of a very large number of highly complex interconnections, and even if doing so would exceed all capacities available to humans, it is in principle possible to describe all these interconnections completely in their physical and technical aspects. If we exclude the new product line of quantum computers, classical physics extended by electrostatics and electrodynamics is sufficient to completely describe and explain every event, every procedure, every process, and every state of a computer or a networked software system.

Perhaps the most fundamental argument against physicalism is the so-called “qualia argument.” This argument speaks against the identity of neurophysiological and mental statesFootnote 28 and, since, as we have just seen, every state of a computer or a networked software system can be completely described in physical terms, also against the identity of digital and mental states. The Australian philosopher Frank Cameron Jackson put forward one version of the qualia argument in his essay “What Mary Didn’t Know” (1986), in which he describes a thought experiment that can be summarized as follows:

Mary is a scientist, and her specialist subject is color. She knows everything there is to know about it: the wavelengths, the neurological effects, every possible property color can have. But she lives in a black and white room. She was born and raised there, and she can observe the outside world on a black and white monitor. One day, someone opens the door, and Mary walks out. And she sees a blue sky. And at that moment, she learns something that all her studies couldn’t tell her. She learns what it feels like to see color.

Now imagine an AI that not only has, like Mary, all available information about colors but also all available information about the world as well as about people and their feelings. Even if such an AI existed, this would not mean that it understands what it means to experience the world and to have feelings.

Software systems do not feel, think, or decide; humans, by contrast, do, as they are not determined by mechanical processes. Thanks to their capacity for insight as well as their ability to have feelings, they can determine their actions themselves, and they do this by deciding to act in this way and not in another. Humans have reasons for what they do and can, as rational beings, distinguish good from bad reasons. By engaging in theoretical and practical reasoning, we influence our mental states, our thinking, feeling, and acting, thereby exerting a causal effect on the biological and physical world. If the world were to be understood reductionistically, all higher phenomena from biology to psychology to logic and ethics would be determined by physical laws: human decisions and beliefs would be causally irrelevant in such a world.Footnote 29

3.2 Practical Implications of Digital Humanism

The finding that even complex AI systems cannot be regarded as persons for the foreseeable future gives rise to two interrelated practical demands in particular.

First, we should not attribute responsibility to them. As we have already seen, it is quite plausible that AI systems are not rational and free in the way that is necessary for attributing responsibility to them. The reason why they lack this kind of rationality and freedom is that they lack the relevant autonomy, which consists in the ability of the agent to set her own goals and to direct her actions with regard to these goals. These goals do not simply correspond to desires or inclinations, but are the result of a decision-making process. We can distinguish this concept of Strong Autonomy from the concept of Weak Autonomy,Footnote 30 in which concrete behavior is not determined by the intervention of an external agent, but an external agent determines the overriding goal to be pursued. Since Weak Autonomy does not manifest itself in the choice of self-imposed (overriding) goals, but at best in the choice of the appropriate means by which externally set goals can be achieved, one could also speak of “heteronomous autonomy.” To the extent that an AI has the ability to select the most suitable behavioral alternative for achieving a given goal, this could be interpreted as Weak Autonomy.
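
The contrast between the two notions can be illustrated with a small, purely hypothetical sketch (the agent, the goal, and all numbers are invented; Python is used for convenience): a weakly autonomous system optimizes over means toward a goal it did not set itself.

```python
# A minimal, purely hypothetical sketch of "Weak Autonomy" as characterized
# above: the overriding goal is fixed from outside, and the system merely
# selects the means that best serves it. Names and numbers are invented.

from typing import Callable, Dict

def weakly_autonomous_choice(goal: Callable[[float], float],
                             options: Dict[str, float]) -> str:
    """Pick the option whose outcome scores best under an externally set goal."""
    return max(options, key=lambda option: goal(options[option]))

# The goal (minimize travel time) is imposed by the designer, not chosen by the system.
externally_set_goal = lambda minutes: -minutes
routes = {"route_a": 42.0, "route_b": 35.5, "route_c": 58.0}

print(weakly_autonomous_choice(externally_set_goal, routes))  # route_b

# Strong Autonomy would require the system itself to deliberate about which
# goal is worth pursuing in the first place; that step has no counterpart in
# this optimization scheme.
```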

The second demand is that ethical decisions must never be made by algorithmically functioning AI systems. For apart from the fact that algorithms do not “decide” anything,Footnote 31 the consequentialistically oriented optimization function inherent in algorithms is not compatible with human dignity and, more generally, with the deontological framework of liberal constitutions.Footnote 32 Furthermore, the approach of anticipating all relevant facts of each case when programming an algorithm cannot, in principle, do justice to the complexity and context sensitivity of ethical decision-making situations.Footnote 33 AI systems have no feelings, no moral sense, and no intentions, and they cannot attribute these to other persons. Without these abilities, however, proper moral practice is not possible.
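
The consequentialist optimization function criticized here can be made concrete with a deliberately simplified, invented example (the scenario and all numbers are hypothetical and do not come from the chapter): harms to individuals are aggregated into a single score, and the option with the best score wins, which is precisely the kind of trade-off between persons that a dignity-based, deontological framework rules out.

```python
# A deliberately simplified, invented example of the consequentialist
# optimization function criticized above: outcomes are scored by aggregated
# expected harm, and the option with the lowest aggregate score is returned.
# The scenario and all numbers are hypothetical.

options = {
    # option -> expected harms to three individual persons (arbitrary units)
    "swerve_left":  [0.9, 0.0, 0.0],
    "swerve_right": [0.0, 0.4, 0.4],
    "brake_only":   [0.3, 0.3, 0.3],
}

def aggregate_harm(harms):
    # Pure aggregation: individuals enter the calculation only as summands.
    return sum(harms)

best = min(options, key=lambda option: aggregate_harm(options[option]))
print(best)  # "swerve_right" (total 0.8): the calculus trades persons off
             # against one another, which a dignity-based framework rules out
```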

Discussion Questions for Students and Their Teachers

  1. How is digital humanism characterized in this chapter?

  2. What are the core concepts of humanist philosophy and practice?

  3. In what way do actions differ from mere behavior?

  4. What conditions must be met for us to hold someone personally responsible for something?

  5. What are the main theoretical and practical implications of digital humanism?

Learning Resources for Students

  1. Nida-Rümelin, J. and Weidenfeld, N. (2022) Digital Humanism. Cham: Springer International Publishing (https://link.springer.com/book/10.1007/978-3-031-12482-2).

     This book describes the philosophical and cultural aspects of digital humanism and can be understood as its groundwork.

  2. Nida-Rümelin, J. (2022) “Digital Humanism and the Limits of Artificial Intelligence” in Perspectives on Digital Humanism. Cham: Springer International Publishing, pp. 71–75 (https://link.springer.com/book/10.1007/978-3-030-86144-5).

     This article presents two important arguments against the animistic paradigm: the “Chinese Room” argument against the conception of “strong AI” and, based on the metamathematical incompleteness and undecidability results of Kurt Gödel and other logicians, an argument against the concept of “weak AI.”

  3. Bertolini, A. (2014) “Robots and Liability – Justifying a Change in Perspective” in Battaglia, F. et al. (eds.), Rethinking Responsibility in Science and Technology, Pisa: Pisa University Press srl, pp. 203–214.

     This article presents good arguments against the liability of robots.

  4. Nida-Rümelin, J. (2014) “On the Concept of Responsibility” in Battaglia, F. et al. (eds.), Rethinking Responsibility in Science and Technology, Pisa: Pisa University Press srl, pp. 13–24.

     This article, in the same anthology, focuses on our responsibility for our actions, convictions, and emotions and the reasons we have for all of them. The whole anthology is worth reading!

  5. Bringsjord, S. and Govindarajulu, N. S. (2022) “Artificial Intelligence”, The Stanford Encyclopedia of Philosophy (Fall 2022 Edition), Edward N. Zalta & Uri Nodelman (eds.), URL = <https://plato.stanford.edu/archives/fall2022/entries/artificial-intelligence/>.

     A very instructive article about what AI is as well as about its history and its different philosophical concepts.