David Gunkel’s most recent book, Person, Thing, Robot: A Moral and Legal Ontology for the 21st Century and Beyond, is an impressive intellectual project that intends to get back to the Things themselves. The Thing with a capital T, to which Gunkel intends to return, is not a thing in the current sense of the word, which is typically understood in contrast to the person. It is, he writes (2023, pp. 166–167), “…a monstrous excrescence that escapes the conceptual grasp of existing categories.” The most visible manifestation of this conceptual monster that is neither a person nor a thing is the robot. Therefore, in order to understand the status of robots as Things, we must go beyond the deeply rooted and well-established thing/person distinction. This, in turn, brings us to the heart of his project: deconstruction. Going beyond this distinction necessitates deconstructing the existing conceptual order in which “thing” is embedded. Gunkel implements the deconstruction procedure in two steps, one negative and one more positive. First, in the negative phase of his project, he convincingly demonstrates, through an extensive examination of the literature, the lack of any compelling rationale for categorizing a robot as a ‘thing,’ a ‘person,’ or even a hybrid of both. This paves the way for his positive step, in which he attempts to open up a new horizon, a new conceptual order for understanding robots. In this part, his reliance on Levinas’s philosophy is especially noticeable.

One might reasonably wonder whether it is really necessary to get involved in this more or less sophisticated intellectual manoeuvre to account for the situation of robots. It is necessary because the existing binary order of concepts is both restricting and impeding for debates regarding the moral and legal status of robots. It is restricting because this binary logic not only compels us to categorize everything within two rigidly predefined categories, which obscures the nuances and dynamics of a given situation, but also unduly elevates one side. In this context, it grants primacy to the ‘person’ as the privileged side with moral and legal significance. More specifically, when it comes to the person/thing dichotomy, we are often blind to the historical contingencies of this distinction. This has led many to mistakenly view it as a universal truth, overlooking its roots in the Western intellectual traditions of Greek philosophy, Roman law, and Christian thought. Thus, moving beyond this constrained logic allows us to avoid Western ethnocentrism and welcome diverse traditions into the discourse.

It is impeding because it traps discussions of the moral and legal status of robots in a predicament. The formulation of this predicament has evolved through Gunkel’s works to what is, in my opinion, its most comprehensive version in this book. Initially, in The Machine Question, the predicament was framed in terms of the moral agent/moral patient distinction (Gunkel, 2012). The primary question was whether robots and AI systems could be categorized as moral agents or moral patients. In Robot Rights, the formulation changed to the is/ought distinction (Gunkel, 2018). Here the question is whether a robot or an AI system is or is not morally significant, and whether it should or should not have moral significance (regardless of whether it is morally significant or not). Finally, the distinction finds its most general formulation in this book, namely the person/thing distinction: can robots and AI systems be classified as persons or as things? Through an exhaustive review of the related literature, Gunkel demonstrates that posing these questions results in a standstill. Neither side can convincingly surmount the arguments of the other, resulting in an impasse in the debates over the moral-legal status of robots. This impasse signifies that no definitive argument exists, either for or against the moral-legal significance of robots and AI systems.

What is the source of this predicament? Could the situation be the result of a failure to consider certain alternatives? Perhaps the contributors lacked the insight to see all the possibilities. Gunkel, on the contrary, carefully demonstrates that a wide range of alternatives has been thoroughly investigated. Some proponents argue that robots and AI systems could be regarded as natural persons, for which the individual human is the benchmark, and that they consequently deserve moral-legal recognition. Opponents argue that these entities are merely tools or instrumental objects with no moral or legal significance. Gunkel’s key observation, however, is the shared logical framework that underpins all of these perspectives (Gunkel, 2023, p. 88):

  1. Having quality Q is necessary to be a person.

  2. Entity E provides evidence of possessing quality Q.

  3. Entity E is (or can be considered) a person.

According to the philosophical literature, several attributes have been proposed as candidates for the person-making quality Q: rationality, consciousness, sentience, and the ability to experience pain and pleasure, among others. While this line of argument appears compelling on the surface, it has significant flaws. First, there is a determination problem: there is no agreement on which attribute or combination of attributes should be designated as Q, so the quality Q remains undetermined. Second, there is the epistemological problem of detection: we have no way of knowing whether a given entity possesses the quality Q. Because candidates for Q are internal, intrinsic characteristics, there is no litmus test that can detect them with certainty. Finally, we face the decision problem. In the absence of a definitive test for Q, any judgment about its presence or absence becomes a normative decision and an exercise of power. A look at history reveals how these issues have impacted our moral-legal understanding:

Over time, many “things” that were once regarded as things—women, children, slaves, animals— have come to be recognized as persons and therefore admitted into the community of moral and legal subjects (Gunkel, 2023, pp. 9–10).

At this point, one might think that the concept of the natural person is too strong to be used to support or oppose the moral-legal status of robots. Let us, then, consider a more modest alternative: legal personhood. A natural person is defined by their inherent characteristics, whereas a legal person is defined by societal acceptance. Being a ‘person’ in this sense means being legally recognized as having specific rights and responsibilities. While humans are the most common example of natural persons because of their internal characteristics, corporations, organizations, and the environment are examples of legal persons because they obtained their status through legal recognition. Thus, rather than looking for some internal, metaphysical, person-creating qualities, we must consider whether granting legal personhood to robots and AI systems “…would likely produce inconsistencies in the overarching purposes of the legal system” (Gunkel, 2023, p. 114). Numerous opposing and supporting viewpoints have been presented in this context. Some strongly conservative opponents, determined to preserve the current legal system, argue against granting robots and AI systems any legal personhood whatsoever. Other opponents advocate a more modest position, arguing that robots and AI systems should have, or will have in the future, only limited legal personhood. Similarly, some strong supporters push for fundamental changes to the legal system, arguing for full legal personhood for robots and autonomous technologies, while more moderate supporters propose new responsibility frameworks and advocate a limited legal personality for these entities. Considering the many views advanced, Gunkel demonstrates that determining the legal personhood of robots:

…is about calculating and comparing the costs and benefits of actual outcomes and social impact…these arguments turn out to be no less speculative and conditional than those that have been advanced for natural personhood. Both sides try to accurately forecast what will happen if or when some form of legal personality is granted or denied to robots or other kind of technological artifacts (Gunkel, 2023, p. 130).

We are once again trapped in a speculative dead end, with neither side capable of defeating the other. Nonetheless, another possibility remains: robots are both thing and person. If we cannot convincingly argue that robots are persons or things, what if a hybrid of the two could free us from the predicament? The concept of a thing-person hybrid is not unprecedented in legal history, with slaves being a notable example. Slaves were viewed as things owned by their masters under Roman law. Despite lacking legal personhood, they were given the ability to control a portion of their master’s assets, known as the ‘peculium’. They were held accountable because they had de facto control over the peculium; yet de jure, the sole legal person recognized was the master. Why not, then, liken today’s robots to ‘robot slaves’? Much as Roman slaves managed a peculium without legal personhood, robots might handle a digital peculium for their owners without being legal persons. Gunkel highlights two significant issues with this analogy. First, in the Roman legal framework, when a slave committed a crime, it was the slave who faced punishment; how, then, would one penalize a robot? Second, the term ‘slave’ cannot be divorced from its deep-rooted historical connotations, posing the risk of reproducing the slave-master pattern in contemporary human society. Other hybrid proposals are not significantly different from the robot-slavery analogy and thus face the same difficulties. These hybrid, third-term entities “…are potentially worse than the problems they were designed to address” (Gunkel, 2023, p. 160).

At this point, we feel completely trapped in an apparently unsolvable situation. We have looked into every possible option, but none of them is convincing enough. As a result, the only way out of this dead end is to break down the wall that stands in our way! The positive step of the deconstruction begins at this point, with the main goal of demonstrating how we can destabilize the person/thing distinction, the fundamental dichotomy around which the entire debate has evolved. In line with recent developments in object-oriented philosophy and ontology, Gunkel identifies that the distinction arises from the act of turning things into objects. He believes that the temptation to turn things into objects stems from one of Heidegger’s most important philosophical moments: the hammerian moment (Heidegger, 1996).

For Heidegger, a thing, like a hammer, can be understood in one of two ways: as ready-to-hand or as present-at-hand. In the ready-to-hand relation, one interacts with the hammer in a pragmatic, embodied way, for instance, when using it to drive a nail. Thus, we experience the hammer not as something absolute, raw, and naked but, as Gunkel (2023, p. 25) writes, “…in terms of how [it is] objectified and put to work or used by us as equipment for living.” This understanding paves the way for instrumentalism, a philosophy in which things are viewed merely as tools or means to an end. These instrumental objects derive their value in relation to the subject, making them fundamentally different from subjects or persons. In the present-at-hand relation, by contrast, the engaged, embodied relation with the thing is disrupted, and the thing instead becomes the subject of cognitive or linguistic scrutiny. Turning things into linguistic objects, rather than being a neutral way of describing them, is a violent intrusion in which “…a thing is already objectivized and turned into an object standing opposite and available to a subject” (Gunkel, 2023, p. 165). Once again, the thing has been objectified and thus transformed into something entirely opposed to the subject or person.

Now, we can get back to the Things themselves. A Thing, when not objectified or viewed in contrast to a subject, be it natural or legal, breaks down the usual thing/person binary. By encompassing both categories, it transcends this traditional division, providing a holistic perspective that destabilizes our typical classifications. At this juncture, Gunkel draws extensively on Levinasian ethics to illuminate the implications of this deconstruction for the moral status of robots and AI systems (Levinas, 1979). Following this Levinasian line, Gunkel proposes a “thinking otherwise” approach wherein ethics precedes ontology. To put it another way, when we encounter a Thing, its intrinsic nature does not determine how it should be treated; rather, it is our treatment of Things that determines what they are. In Levinasian terms:

…the moral and legal situation of Things does not depend on what they are in their essence but on how they stand in relationship to us and how we decide, in the face of the Other (to use Levinasian terminology), to respond (Gunkel, 2023, p. 172).

Here, there is a discernible shift towards relationalism, where relationships take precedence over the entities involved. In this framework, simply being exposed to the face of the ‘other’ compels us to assume responsibility and respond to the Thing in question. Now we have moved back to the Things, and Gunkel’s intellectual endeavor is complete.

Gunkel’s latest book is essential for anyone interested in human-robot relations or social robotics, especially because the standard approach in the field proceeds on certain pre-established distinctions. Practitioners, whether designers, engineers, or managers, base their actions and thoughts on these concepts. Similarly, policy-makers and ethicists, when formulating ethical guidelines, make decisions rooted in these distinctions. Even though Gunkel consistently acknowledges that his approach may result in a moral framework without clear rules, I believe reading this book will challenge the dominance of these established concepts. It will motivate professionals in the field to think outside the box and may also influence them to take new and innovative approaches. The book has significant implications for the phenomenological philosophy of technology as well, introducing new resources to the field, specifically Levinasian phenomenology, which the mainstream has overlooked.

As this review indicates, the negative step takes up the majority of the book, possibly more than 70%. This is completely understandable: the distinction is so deeply embedded in our minds that it takes a strong effort to break it down and make room for new ways of thinking. However, I also believe that the book’s positive step should be developed further.

First, Gunkel’s strong emphasis on the idea that “how the Thing is treated” unilaterally determines “what it is” might leave him open to the “morality before morality” critique he once aimed at his rivals (2012, p. 175). One might object that although we decide how to respond to the other’s face, the act of making that decision is already rooted in ethical beliefs, carrying its own set of underlying assumptions and outcomes. Second, and more importantly, despite Gunkel’s strong opposition, there is a risk of inadvertently shifting from one type of centrism to another: from person-centrism (anthropocentrism) to relation-centrism. The same aggressive interference that linguistic constructs impose on objects, which Gunkel strives to counter, might manifest anew as “relation violence.” This is crucial because if the same patterns of dominance and suppression recur, even in new forms, the overarching goal of the book may be undermined. Finally, the concept of ‘relation’ remains largely unexplored in Gunkel’s argument. It feels like a mystery, a “black box,” with only a faint outline suggesting its social nature. This lack of clarity provides ground for further critique. One could argue that even if we accept Gunkel’s relational perspective, our tendency to treat things as objects, to objectify them, might itself stem from our relational interactions with them. To rephrase, someone who supports the person/thing distinction might say that even within this relational framework, the binary distinction can still be reproduced as an outcome of these relational encounters.

I believe that Gunkel’s project has the potential to be developed further so as to avoid these challenges. To do this, we have to open the relation’s black box, and to accomplish this, I propose employing a phenomenologically inspired enactive approach to intersubjectivity. This approach is not only entirely consistent with Gunkel’s phenomenological proposal, but it also opens up new horizons for developing the positive step of that proposal. As a result, rather than a one-way path from “how a Thing is treated” to “what it is,” we have a bidirectional path between them. A relation emerges from the mutual interactions of various participants and, once formed, actively shapes those participants in return. In essence, it is both product and producer: it is simultaneously structured by interactions and, in turn, structures those involved (De Jaegher & Di Paolo, 2007). This co-constitution liberates Gunkel’s framework from relation-centrism, thereby allowing Things’ identities, materiality, or affordances to play their respective roles in this dynamic.