
Debate: what is personhood in the age of AI?

  • Original Article
  • Published in: AI & SOCIETY

Abstract

In a friendly interdisciplinary debate, we interrogate from several vantage points the question of “personhood” in light of contemporary and near-future forms of social AI. David J. Gunkel approaches the matter from a philosophical and legal standpoint, while Jordan Wales offers reflections theological and psychological. Attending to metaphysical, moral, social, and legal understandings of personhood, we ask about the position of apparently personal artificial intelligences in our society and individual lives. Re-examining the “person” and questioning prominent construals of that category, we hope to open new views upon urgent and much-discussed questions that, quite soon, may confront us in our daily lives.


Notes

  1. It should be noted that this way of characterizing “natural person” is not “natural” but culturally specific. It has a distinctly European and Christian pedigree. The concept of “person” is individuated and specified according to the qualifying capability or a set of faculties that (it is assumed) naturally belong to an individual entity. This is the case beginning (at least) in the sixth century with Boethius’s (1973, chap. 3) definition, persona est naturae rationalis individua substantia (“a person is an individual substance of a rational nature”); continuing through John Locke’s characterization, “a thinking intelligent being that has reason and reflection and can consider itself as itself” (Locke 1998, sec. 2.27.9); and beyond (e.g. Strawson 1959; Taylor 1985). This is, however, not necessarily the correct or only way to formulate and define “person.” In a number of African traditions, like Ubuntu, person is not the natural condition of an individual human being but an achieved social condition. Instead of operationalizing the individuated “cogito ergo sum” of Descartes, this way of thinking proceeds from the adage: “I am because we are, and since we are, therefore I am” (Mbiti 1990, pp. 108–109). In these traditions (and it should be noted that this is not one univocal tradition but a constellation of different but related traditions), personhood is not something naturally belonging to an individual, but “something which has to be achieved, and it is not given simply because one is born of human seed” (Menkiti 1984, p. 172). These important cultural differences in the way that “natural person” has been defined and characterized were not included as part of the prepared remarks during the debate. They did, however, figure prominently during the question-and-answer period. For more on this subject, see the collection of essays Ubuntu and Personhood, edited by James Ogude (2018).

  2. The proposal, which was not adopted as originally written, immediately generated controversy, as evidenced by 250 scientists, engineers, and AI professionals who signed an open letter to the European Commission (2016) opposing the proposal and asserting that robots and AI, no matter how autonomous or intelligent they might appear to be, are nothing more than tools.

  3. Some meanings of “person” (e.g. conferred legal and social statuses) refer first to how the object of the attribution will be treated, whereas natural personhood purports to refer first to what the object is, independently of treatment. Oftentimes, natural personhood is seen as possessing an intrinsic “moral personhood” that demands a further attributed status or role within society—as in the rejection of slavery on the basis of human persons’ moral status. For a recent survey and argument, see Gordon (2020a, b), who would ground an individual’s intrinsic moral personhood in that individual’s functional capacities. I agree that moral personhood ought to be considered as recognized rather than conferred, with recognition motivated intrinsically by moral worth rather than instrumentally by the anticipated consequences of (non-)recognition. However, on the ground of this worth, I follow Robert Spaemann (2006): the individual’s personhood (natural and, in certain measures, moral) is intrinsic not to that individual’s capacities but to that individual’s membership in a kind, the mature members of which ordinarily have these capacities. This position accommodates the moral worth of both the very young and the severely disabled without reducing personhood to the merely biological category of humanity. On another note, while appreciating that many are wary of an all-sufficient “essentialism” because of its reductive possibilities, anti-metaphysical models have their own inadequacies. I would resist a definition of personhood as solely a social achievement because, as Dr. Gunkel acknowledges, societies find it easy to deny personhood to certain groups. That being said, the concept of Ubuntu has affinities with the relationality wherein, by my definitions, personhood is exercised most fully.

  4. Spaemann writes that “ancient applications of the word [person]”, “though they refer to human beings,” see these beings “not as instances of a kind or examples of a general concept, but as bearers of a social role (in the widest sense) or as occupants of a legal status. Behind this role and presupposed by it, there stands the bearer of the role,” not a subject who may or may not live in accord with his or her nature, but simply “the human nature itself.” In Stoicism, this nature itself is a role, but behind this still “there seems to be no subject at all” but only destiny (2006, p. 23).

  5. On Old Testament views of God’s self-revelation see e.g. Preuss (1995, pp. 194–195). For New Testament developments, see e.g. Kittel (1964). Athanasius of Alexandria distinguishes between how God is revealed in time as Creator and how God eternally exists as Father and Son and Spirit (Anatolios 2018, pp. 129–131). In the late fourth century, Augustine’s De Trinitate (2012b, sec. 4.5.25) refines this view to its decisive Western form. By the appearances of the persons in history, God reveals his inner life so that it might become humans’ destination (Hill 1991, paras. 89–90).

  6. That is to say, God’s tri-personal relationality, self-complete in the divine life, needs no external relation to live in a fully personal manner. Unlike the divine persons of the single God, human persons exist separately and so the full flourishing of their relational handing-over is accomplished by one person’s interiority freely going forth toward another in a relationship by which the other’s outward self-expression is also received into one’s own interior and understood as an expression of the other’s interior. On this view, our personhood is not fully expressed when abstracted from personal inter-relations, accomplished between persons, in community. (Even the Christian eremitical tradition has community with God, rather than mere solitude, as its orientation).

  7. Narrower reckonings of personhood risk both being arbitrary and neglecting that in which apparently personal AIs would appear personal. Some invoke “rationality” without consciousness as sufficient for “mind” or “intelligence,” with speculative rationality as the capacity for logical syllogizing or calculation (Newell and Simon 1976; cf. Hobbes), and practical rationality as the capacity to accomplish goals in the best possible way (Russell and Norvig 2009, p. 2). However useful to AI advances, these reductions of rationality are inadequate to our discussion. First, a speculative “computationalism” holds that the simulation of ratiocination just is thought because thought is and only is a certain kind of calculation (Kim 2010, pp. 160–161). Irrespective of consciousness, then, AIs that implemented this kind could be said to think and (depending on what else might make up a person) could even be called persons. Yet this assumed separability of reasoning (let alone personhood) from consciousness is non-obvious. In much late antique, medieval (Van Nieuwenhove 2017), and Enlightenment (Kant) thought on the person, logical reasoning (ratio) is meaningful as such by the inseparably co-penetrating light of conscious apprehension (intellectus). This should give us pause. Second, for defining the person, practical rationality without consciousness ignores the dimension of personhood—self-aware subjectivity, a personal inner life—which we will feel ourselves to be meeting when we encounter a persuasive social AI. Culturally, we have found this viscerally horrifying (e.g. the Stepford wives). (Intriguingly, some propose that autistic persons varyingly experience those around them as goal-directed but without comprehensible interiority (Hamilton 2009).) Definitions of “person” without subjectivity and inter-subjectivity are blind to these distinctions. Beyond my scope, on “functionalism” see Piccinini (2010); Levin (2018).

  8. I use the term “empathy” to designate the act of taking another’s thoughts and emotions into oneself, to know that person’s position—but I do not see this as meaning that we must remain confined to the horizon of that other's own assessment of this position. My use accords with what the Latin patristic and medieval traditions call compassio, whence our term “compassion.” To embrace rather than to be embraced by the horizon of the other person’s point of view is a refinement not always present in contemporary understandings of “empathy” (Lanzoni 2015; Stueber 2019), whence Bloom (2016) advocates a “compassion” of concern and outward acts by which one seeks to alleviate another’s ills, but without shared emotional or cognitive experience. I use “empathic compassion” to capture the richer compassio.

  9. There will not be anything that it is “like” to be them, along the lines of Nagel’s (1974) question “what is it like to be a bat?” (I am intentionally loose in my terminology, not wishing to commit to a particular model or definition).

  10. While not decisive, this analogy is the basis for the Cambridge Declaration on Consciousness (Low 2012): “[T]he weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.” Here, consciousness is not a thing alongside the living bodily organism, but a property of that living organism.

  11. To simulate a brain is at present impossible. We lack a map of the “connectome,” i.e. of each and every neuron’s connections. Moreover, the complex interactions of interconnected neurons are incompletely understood (Bentley et al. 2016; Schafer 2018), as also are the roles of environmental and proprioceptive feedback—i.e. the influence of embodied situatedness (Jabr 2012). Even were these worked out, the why and what of the network’s interior activity could still be uninterpretable to us, despite its intelligible outward behavior; cf. Pearl (2019).

  12. The question of artifactual consciousness in whatever degree is contentious. Near-future AIs based on contemporary techniques will be apparent but unreal persons because their interior lives will be behaviorally hinted but subjectively unreal. My comments on the neural network align somewhat with Block (1978) and with Searle’s (1980) “Chinese Room.” On whether phenomenal consciousness would be metaphysically possible in any future artifact, here are some doubts: If any sort of non-biological machine can have true phenomenal consciousness (and not just a behavioral or functional simulation thereof), then consciousness is not limited to the physical processes (i.e. embodied nervous systems) that produce the conscious experience and subjective self-awareness from which humans (at least) engage in the social relations I term “personal.” Options then abound. Some, deeming it impossible to account for consciousness by physics or indeed any human inquiry, posit that consciousness, like mass, is a fundamental property of matter, such that “the basic physical constituents of the universe have mental properties, whether or not they are living organisms” (Nagel 1978, p. 181). Others argue that, by some deep laws, not the matter of neurons but the functional or information-processing properties of a system give rise to conscious subjectivity (Chalmers 2010, pp. 26–27; Tegmark 2017, p. 304). This would preserve, even prioritize, the relevance of consciousness as an intrinsic property of certain patterns of information processing—but it begs the question of what counts as “information” and its processing. Chalmers (2011) proposes that not just any physical change would count, but only causal physical relationships of a particular sort, the sort that formally parallels the causal state transitions that accomplish information processing in the brain. 
This would escape Searle’s famous jibe that the vibrating molecules of his wall could be said to compute a word processing program, under a certain (selective) mapping (Searle 1992, pp. 208–209; Harnad 1994). Even under Chalmers’ constraints, one might question whether “information” and “computation” can ever be said to be observer-independent properties of a natural or engineered system. Paul Schweizer (2019a, b) argues that Chalmers’ definitions do not even apply to all instances of what we would consider to be computation; the only factor common across all instances is an observer’s discernment of computation in them. To reconcile Chalmers and Schweizer easily, we could adopt the broadest possible interpretation, which Chalmers seems willing at least to entertain, that consciousness of some sort exists wherever there is causation, and so “[e]xperience is information from the inside; physics is information from the outside” (Chalmers 1997, pp. 293–310). If consciousness and information processing are intrinsically present only because everywhere present, then Chalmers’ position approaches panpsychism.

  13. Spaemann writes (2006, p. 243): “Let us take the severely disabled first. Are we dealing with a thing? Or with an animal of a different kind? Of course not. We are dealing with a patient.... [A] human being incapable of personal expression... [we see] as a sick human in need of help. We look for ways of helping if we can, for ways of restoring ‘nature’, providing an opportunity to take that place reserved for him or her in the community of persons until death.” Our response to the disabled, he concludes, “is the acid test of our humanity.”

  14. John Danaher’s “ethical behaviorism” (2019, 2020a, b) opposes this position but seems to beg the original question. For Danaher, a robot’s “observable behavioural relations and reactions to us [and the world]” are “sufficient epistemic ground or warrant” for our believing of its relationship with us what we would believe of humans under similar circumstances (2020a). In the case of evaluating whether the friendship-conditions of “mutuality” (true, intentional good-will) and “authenticity” (presenting oneself as one is) are met on the part of friend-behaving agents, we ought—against the claim that robots have no inner life and therefore meet neither of these conditions—to apply epistemically the same behavioral standard that we apply to humans and animals (Danaher 2019). Danaher does not wish to assert ontological behaviorism (that friendship just is behavior), only methodological behaviorism (behavior is the ground upon which we assert friendship and its inner states). However, he seems to slip from (1) arguing that the ordinary epistemic standards are met and should be enough for our belief that friendship-conditions are met, to (2) arguing that therefore the friendship-conditions are met: “[I]t is (technically) possible for the mutuality and authenticity conditions to be satisfied in our friendships with robots” such that “there is nothing illusory or unreal about robotic friendships” (2019). For, if “[t]here is no inner state that you need to seek to confirm” the intentions and love expressed in human behavior, then you ought not seek such a state for robots but ought to affirm that “simulated feeling can be genuine feeling, not fake or dishonest feeling” (2020b). This is not a conflation if he is simply drawing the epistemically warranted conclusion, but is the case of human beings in fact analogically parallel to that of robots?
True, someday robots may satisfy the epistemic conditions based upon which we ordinarily are justified in believing that the mutuality and authenticity conditions have been satisfied in human relationships. But even so, does the exotic case of the apparently personal robot justify our continuing to rely on the naïve epistemic behaviorism that serves us well for fellow humans? The experience of an IMAX planetarium satisfies the epistemic conditions based upon which I ordinarily feel myself justified (and am justified) in believing that I am gazing upon the night sky. Only additional knowledge (e.g. “this is a planetarium, not a window”)—not immediately available in the context of the planetarium experience—persuades me that this is not a night sky. Lacking this additional knowledge, a naïve viewer would be justified (although incorrect) in believing that she was observing the night sky. This, however, does not make the simulated night sky a genuine night sky, and the answer to whether or not that difference matters ought to hinge on more than whether or not the naïve observer is epistemically justified in her belief. So, when Danaher argues that we are justified in believing that the robot has met the conditions of true friendship, it is true that the robot has met the un-critiqued epistemic conditions for our belief in human friendship-conditions, but not true that, as Danaher would have it, “the mutuality and authenticity conditions [have been] satisfied in our friendships with robots” (2019). The satisfaction of epistemic grounds for believing friendship-conditions to be satisfied is not identical to the satisfaction of those friendship-conditions themselves, any more than footprints appearing without a foot are identical to the invisible man that we assume to have produced them.
Danaher’s argument, then, amounts only to a restatement of the problem, and it invites us to interrogate the hidden assumptions that enable us easily (and I think rightly) to assert that these are sufficient grounds in the case of our assessing humans. Perhaps certain ontological assumptions (e.g. an analogy between others’ behavior and my own behavior as rooted in conscious experiences) are baked into our own epistemology—assumptions that, while experientially difficult to shake, might not actually hold in our “relationships” with robots. A potential middle term between our own interior states and the presumed states of those who behave like us is the appearance of identical material conditions—i.e. biology. The robot’s lack of a nervous system gives us reason for our intuition that it might not have the interior states by which it could accomplish mutuality and authenticity. I agree with Danaher that, “while shared biological properties might give us more grounds for believing in our human friends it is not clear that these grounds are necessary or sufficient for believing in [human] friendship” (2019). By analogy with the planetarium, however, a different object of friendship may require more extensive grounds because the different object challenges the general assumptions (biology, experience) that may underlie the assumed reliability of our epistemic assumptions (observation of behavior)—unless Danaher means to make behavioral performance not only the epistemic ground but also the actual object of reference for statements about mutuality. In this case, however, he will have arrived at the very ontological behaviorism that he wishes to avoid.

  15. Despite millennia of domestication, dogs have lives of their own that set limits on our interactions with them. Future AI companions will have no such limits, except insofar as technology cannot supply a solution or insofar as apparent limits may enhance the experience of the end user.

  16. See discussion and application to robots of this Aristotelian concept in Richardson (2016a, b).

  17. This could be a slow but decisive habituation. As Joanna Bryson (2015) warns, “our behavior can radically change without a shift in either explicit or implicit motivations—with no deliberate decision to refocus.” Bryson worries (2010) that ascribing personhood to robots could sap the human social capital available for relationships with other human beings; humans might even choose the “easier” robots who are beholden to our whims. I build on this concern to ask whether the egocentric tendencies of socialization with robots might distort our expression of that which is most “personal” about us—our capacity for empathic self-gift and interpersonal communion.

  18. On this theme applied to sex robots, see e.g. Harvey (2015) and Richardson (2015) in contrast to proponents such as Levy (2008).

  19. Technological innovations like AI and socially interactive robots complicate the usual way of thinking about and resolving questions regarding moral and legal standing. Efforts to fit these entities into the existing moral and legal categories often strain against the limits of the very concepts that have been deployed, necessitating a kind of “conceptual re-engineering” or what Alexis Burgess and David Plunkett (2013) call “conceptual ethics.” Alexis Dyschkant, for instance, suggests that we might gain some traction in this effort by moving away from binary categorizations and thinking more in terms of a spectrum of differences: “We may benefit from remembering that being capable of having rights and duties is not always a zero sum game, and sometimes more like a spectrum. There are already lots of variations on which sorts of rights some humans have on the basis of their status as a prisoner or as a minor. Some humans have more rights and some have less. It seems plausible that animals could also exist on this spectrum.... While it would be ridiculous to give bonobos the ability to vote, that should not be a barrier to considering a bonobo a person in some respects.” (2015, p. 2108).

  20. Augustine calls this “using” (uti) a thing rather than “enjoying” it (frui) (1996, sec. 1.3.3; 1.33.37). To “enjoy” is to find in the other thing one’s ultimate satisfaction; if a person or possession is “enjoyed,” one reduces that thing to oneself. Augustinian “use” is not to be confused with the egocentric “use” against which Kant warns and that Augustine would class as superbia. Augustinian “use” takes one’s relationship with the person or object up into the higher relation binding all to an origin and destiny in God. This is Augustine’s antidote to superbia.

References

  • Anatolios K (2018) Retrieving Nicaea: the development and meaning of Trinitarian Doctrine. Baker Academic, Grand Rapids

  • Athanasius of Alexandria (1980a) Orations against the Arians, Book III [Selections] [ca. 339–343]. In: Norris RA (ed) The Christological controversy, re-typeset ed. Fortress Press, Philadelphia, pp 65–78

  • Athanasius of Alexandria (1980b) Orations against the Arians, Book I [ca. 339–343]. In: Rusch WC (ed) The Trinitarian controversy, re-typeset ed. Fortress Press, Philadelphia, pp 55–104

  • Augustine of Hippo (1887) The City of God, against the Pagans [413–427]. In: St. Augustine’s City of God and Christian Doctrine. Christian Literature Publishing Co., Buffalo

  • Augustine of Hippo (1996) Teaching Christianity [De doctrina Christiana] [396–426], 1st edn. New City Press, Hyde Park

  • Augustine of Hippo (2004) The literal meaning of genesis [De Genesi ad litteram] [401–415]. On genesis. New City Press, Hyde Park, pp 168–506

  • Augustine of Hippo (2012a) The confessions [397–401], 2nd edn. New City Press, Hyde Park

  • Augustine of Hippo (2012b) The Trinity [399–419], 2nd edn. New City Press, Hyde Park

  • Bauckham R (2008) Jesus and the God of Israel: God crucified and other studies on the New Testament’s Christology of Divine identity. Eerdmans, Grand Rapids

  • Bendel O (2019) The morality menu. https://maschinenethik.net/wp-content/uploads/2019/12/Bendel_MOME_2019.pdf

  • Benford G, Malartre E (2007) Beyond human: living with robots and cyborgs, 1st edn. Forge Books, New York

  • Bentley B, Branicky R, Barnes CL et al (2016) The multilayer connectome of Caenorhabditis elegans. PLOS Comput Biol 12:e1005283. https://doi.org/10.1371/journal.pcbi.1005283

  • Block N (1978) Troubles with functionalism. Minn Stud Philos Sci 9:261–325

  • Bloom P (2016) Against empathy: the case for rational compassion. Ecco, New York

  • Boethius (1973) Contra Eutychen [ca. 513]. In: The theological tractates. The consolation of philosophy. Harvard University Press, Cambridge

  • Bryson JJ (2010) Robots should be slaves. In: Wilks Y (ed) Close engagements with artificial companions: key social, psychological, ethical and design issues. John Benjamins Publishing Company, Philadelphia, pp 63–74

  • Bryson JJ (2015) Artificial intelligence and pro-social behaviour. In: Misselhorn C (ed) Collective agency and cooperation in natural and artificial systems: explanation, implementation and simulation. Springer International, Cham, pp 281–306

  • Bryson JJ (2018) Patiency is not a virtue: the design of intelligent systems and systems of ethics. Ethics Inf Technol 20:15–26. https://doi.org/10.1007/s10676-018-9448-6

  • Bryson JJ, Diamantis ME, Grant TD (2017) Of, for, and by the people: the legal lacuna of synthetic persons. Artif Intell Law 25:273–291

  • Buford TO (2019) Personalism. Internet Encyclopaedia of Philosophy. https://iep.utm.edu/personal/

  • Burgess A, Plunkett D (2013) Conceptual ethics I. Philos Compass 8:1091–1101. https://doi.org/10.1111/phc3.12086

  • Carpenter J (2016) Culture and human-robot interaction in militarized spaces. Ashgate, Burlington

  • Chalmers DJ (1997) The conscious mind: in search of a fundamental theory, Revised ed. Oxford University Press, New York

  • Chalmers DJ (2010) The character of consciousness, 1st edn. Oxford University Press, New York

  • Chalmers DJ (2011) A computational foundation for the study of cognition. J Cogn Sci 12:323–357

  • Chopra S, White LF (2011) A legal theory for autonomous artificial agents. University of Michigan Press, Ann Arbor

  • Coeckelbergh M (2018) Why care about robots? Empathy, moral standing, and the language of suffering. Kairos J Philos Sci 20:141–158. https://doi.org/10.2478/kjps-2018-0007

  • Committee on Legal Affairs (2016) Draft report with recommendations to the commission on civil law rules on robotics. European Parliament

  • Danaher J (2019) The philosophical case for robot friendship. J Posthuman Stud 3:5–24. https://doi.org/10.5325/jpoststud.3.1.0005

  • Danaher J (2020a) Welcoming robots into the moral circle: a defence of ethical behaviourism. Sci Eng Ethics 26:2023–2049. https://doi.org/10.1007/s11948-019-00119-x

  • Danaher J (2020b) Robot betrayal: a guide to the ethics of robotic deception. Ethics Inf Technol 22:117–128. https://doi.org/10.1007/s10676-019-09520-3

  • Darling K, Nandy P, Breazeal C (2015) Empathic concern and the effect of stories in human-robot interaction. In: 2015 24th IEEE International Symposium on robot and human interactive communication (RO-MAN). pp 770–775

  • de Hamilton AFC (2009) Goals, intentions and mental states: challenges for theories of autism. J Child Psychol Psychiatry 50:881–892. https://doi.org/10.1111/j.1469-7610.2009.02098.x

  • Dennett DC (1998) Brainstorms: philosophical essays on mind and psychology. MIT Press, Cambridge

  • Derrida J (2005) Paper machine. Trans. Rachel Bowlby, 1st edn. Stanford University Press, Stanford

  • Douglass F (2016) Narrative of the life of Frederick Douglass, an American slave: written by himself [1845], critical edn. Yale University Press, New Haven

  • Dyschkant A (2015) Legal personhood: how we are getting it wrong. Univ Ill Law Rev 2015:2075–2110

  • Gelin R (2016) The domestic robot: ethical and technical concerns. In: Ferreira MIA, Sequeira JS, Tokhi MO, et al. (eds) A world with robots (International Conference on Robot Ethics: ICRE 2015). Springer, New York

  • Gill C (1996) Personality in Greek epic, tragedy, and philosophy: the self in dialogue. Oxford University Press, Oxford

  • Gordon J-S (2020a) Artificial moral and legal personhood. AI Soc. https://doi.org/10.1007/s00146-020-01063-2

  • Gordon J-S (2020b) What do we owe to intelligent robots? AI Soc 35:209–223. https://doi.org/10.1007/s00146-018-0844-6

  • Gregory I (1992) Moralia in Iob; Commento Morale a Giobbe 1 (I-VIII) [586–590]. Città Nuova, Rome

  • Gregory I (1997) Moralia in Iob; Commento Morale a Giobbe 3 (XIX-XXVII) [586–590]. Città Nuova, Rome

  • Gross T (2018) How American corporations had a “hidden” civil rights movement. Fresh Air. National Public Radio. https://www.npr.org/2018/03/26/596989664/how-american-corporations-had-a-hidden-civil-rights-movement

  • Gunkel DJ (2018) Robot Rights. The MIT Press, Cambridge

  • Harnad S (1994) Computation is just interpretable symbol manipulation; cognition isn’t. Mind Mach 4:379–390. https://doi.org/10.1007/BF00974165

  • Harvey C (2015) Sex robots and solipsism: towards a culture of empty contact. Philos Contemp World 22:80–93. https://doi.org/10.5840/pcw201522216

  • Heider F, Simmel M (1944) An experimental study of apparent behavior. Am J Psychol 57:243–259. https://doi.org/10.2307/1416950

  • Hill E (1991) Introduction. In: The trinity, 1st edn. New City Press, Hyde Park

  • Hume D (1980) A treatise of human nature [1738–40]. Oxford University Press, New York

  • Hurtado LW (2005) Lord Jesus Christ: devotion to Jesus in earliest Christianity. Eerdmans, Grand Rapids

  • Hurtado LW (2018) Honoring the son: Jesus in earliest Christian devotional practice. Lexham Press, Bellingham

  • Jabr F (2012) The connectome debate: is mapping the mind of a worm worth it? Sci Am. https://www.scientificamerican.com/article/c-elegans-connectome/

  • Kim J (2010) Philosophy of Mind, 3rd edn. Routledge, Boulder, CO

  • Kittel G (1964) δόξα. In: Friedrich G, Kittel G (eds) Theological dictionary of the new testament. Eerdmans, Grand Rapids, pp 233–255

  • Kurki VA (2019) A theory of legal personhood. Oxford University Press, Oxford

  • Kurki VAJ, Pietrzykowski T (eds) (2017) Legal personhood: animals, artificial intelligence and the unborn. Springer International Publishing, Cham, Switzerland

  • Lanzoni S (2015) A short history of empathy. The Atlantic, 15 October. https://www.theatlantic.com/health/archive/2015/10/a-short-history-of-empathy/409912/

  • Leite I, Castellano G, Pereira A et al (2014) Empathic robots for long-term interaction. Int J Soc Robot 6:329–341. https://doi.org/10.1007/s12369-014-0227-1

  • Leite I, Pereira A, Mascarenhas S et al (2013) The influence of empathy in human-robot relations. Int J Hum-Comput Stud 71:250–260. https://doi.org/10.1016/j.ijhcs.2012.09.005

  • Leong B, Selinger E (2019) Robot eyes wide shut: understanding dishonest anthropomorphism. In: Proceedings of the Conference on fairness, accountability, and transparency. Association for Computing Machinery, New York, pp 299–308

  • Levin J (2018) Functionalism. In: Zalta EN (ed) The Stanford Encyclopedia of Philosophy, Fall 2018. Metaphysics Research Lab, Stanford University

  • Levy D (2008) Love and sex with robots: the evolution of human-robot relationships. Harper Perennial, New York

  • Locke J (1998) An essay concerning human understanding [1689], Revised. Penguin Classics, London

  • Low P (2012) The Cambridge declaration on consciousness. In: Panksepp J, Reiss D, Edelman D et al (eds). Churchill College, University of Cambridge

  • Markoff J (2015) Machines of loving grace: the quest for common ground between humans and robots. Ecco, New York

  • Mauss M (1985) A category of the human mind: the notion of person; the notion of self. In: Carrithers M, Collins S, Lukes S, Mauss M (eds) The category of the person: anthropology, philosophy, history. Cambridge University Press, Cambridge

  • Mbiti JS (1990) African religions & philosophy, 2nd edn. Heinemann, Portsmouth

  • Menkiti IA (1984) Person and community in African traditional thought. In: Wright RA (ed) African philosophy, 3rd edn. University Press of America, Lanham, pp 171–182

  • Misselhorn C (2010) Empathy and dyspathy with androids: philosophical, fictional, and (neuro) psychological perspectives. Konturen 2:101–123. https://doi.org/10.5399/uo/konturen.2.1.1341

  • Nagel T (1974) What is it like to be a bat? Philos Rev 83:435–450. https://doi.org/10.2307/2183914

  • Nagel T (1978) Panpsychism. Mortal questions. Cambridge University Press, Cambridge, pp 181–195

  • Najork M (2016) Using machine learning to improve the email experience. In: Proceedings of the 25th ACM International Conference on information and knowledge management. p 891

  • Newell A, Simon HA (1976) Computer science as empirical inquiry: symbols and search. Commun ACM 19:113–126. https://doi.org/10.1145/360018.360022

  • Ogude J (2018) Ubuntu and personhood. Africa World Press, Trenton

  • Open letter to the European Commission (2016). http://www.robotics-openletter.eu/

  • Pearl J (2019) The limitations of opaque learning machines. In: Brockman J (ed) Possible minds: twenty-five ways of looking at AI, 1st edn. Penguin Press, New York, pp 13–19

  • Piccinini G (2010) The mind as neural software? Understanding functionalism, computationalism, and computational functionalism. Philos Phenomenol Res 81:269–311. https://doi.org/10.1111/j.1933-1592.2010.00356.x

  • Preuss HD (1995) Old testament theology. Westminster John Knox Press, Louisville

  • Reeves B, Nass C (2003) The media equation: how people treat computers, television, and new media like real people and places, New Edition. CSLI, Stanford

  • Richardson K (2015) The asymmetrical ‘relationship’: parallels between prostitution and the development of sex robots. SIGCAS Comput Soc 45:290–293. https://doi.org/10.1145/2874239.2874281

  • Richardson K (2016a) Are sex robots as bad as killing robots? In: Seibt J, Nørskov M, Andersen SS (eds) What social robots can and should do: proceedings of robophilosophy 2016/TRANSOR 2016. IOS Press, Amsterdam, pp 27–31

  • Richardson K (2016b) Sex robot matters: slavery, the prostituted, and the rights of machines. IEEE Technol Soc Mag 35:46–53. https://doi.org/10.1109/MTS.2016.2554421

  • Rist JM (2020) What is a person? Realities, constructs, illusions, 1st edn. Cambridge University Press, New York

  • Russell S, Norvig P (2009) Artificial intelligence: a modern approach, 3rd edn. Pearson, Upper Saddle River

  • Salmond JW (1902) Jurisprudence, or the theory of the law. Stevens and Haynes, London

  • Sandry E (2015) Robots and communication. Palgrave Pivot, New York

  • Schafer WR (2018) The worm connectome: back to the future. Trends Neurosci 41:763–765. https://doi.org/10.1016/j.tins.2018.09.002

  • Schweizer P (2019a) Triviality arguments reconsidered. Mind Mach 29:287–308. https://doi.org/10.1007/s11023-019-09501-x

  • Schweizer P (2019b) Computation in physical systems: a normative mapping account. In: Berkich D, d’Alfonso MV (eds) On the cognitive, ethical, and scientific dimensions of artificial intelligence: themes from IACAP 2016. Springer International Publishing, Cham, pp 27–47

  • Searle JR (1980) Minds, brains, and programs. Behav Brain Sci 3:417–457

  • Searle JR (1992) The rediscovery of the mind. MIT Press, Cambridge

  • Siedentop L (2014) Inventing the individual: the origins of western liberalism, 1st edn. Belknap Press, Cambridge

  • Singer P (2009) Speciesism and moral status. Metaphilosophy 40:567–581

  • Solum LB (1992) Legal personhood for artificial intelligences. N C Law Rev 70:1231–1287

  • Spaemann R (2006) Persons: the difference between “someone” and “something.” Oxford University Press, New York

  • Strawson PF (1959) Individuals: an essay in descriptive metaphysics. Methuen & Co, London

  • Stueber K (2019) Empathy. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/empathy/

  • Taylor C (1985) The person. In: Carrithers M, Collins S, Lukes S (eds) The category of the person, First Paperback Edition. Cambridge University Press, Cambridge, pp 257–281

  • Tegmark M (2017) Life 3.0: being human in the age of artificial intelligence. Knopf, New York

  • Tertullian of Carthage (2011) Against Praxeas [ca. 213]: The text edited, with an introduction, translation, and commentary, bilingual, Reprint Edition. Wipf & Stock Pub, Eugene

  • Turing AM (1950) Computing machinery and intelligence. Mind New Ser 59:433–460

  • Turner J (2018) Robot rules: regulating artificial intelligence, 1st edn. Palgrave Macmillan, Cham, Switzerland

  • United States Supreme Court (1964) Jacobellis v. Ohio, 378 U.S. 184

  • Van Nieuwenhove R (2017) Contemplation, intellectus, and simplex Intuitus in Aquinas: recovering a neoplatonic theme. Am Cathol Philos Q 91:199–225. https://doi.org/10.5840/acpq2017227108

  • Wales JJ (2018) Contemplative compassion: Gregory the great’s development of Augustine’s views on love of neighbor and likeness to God. Augustin Stud 49:199–219. https://doi.org/10.5840/augstudies201861144

  • Wiener N (1988) The human use of human beings: cybernetics and society, New Edition. Da Capo Press, New York

  • Williams TD, Bengtsson JO (2018) Personalism. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/personalism/

  • Žižek S (2006) Philosophy, the “unknown knowns”, and the public use of reason. Topoi 25:137–142. https://doi.org/10.1007/s11245-006-0021-2

Author information

Corresponding author

Correspondence to David J. Gunkel.

About this article

Cite this article

Gunkel, D.J., Wales, J.J. Debate: what is personhood in the age of AI?. AI & Soc 36, 473–486 (2021). https://doi.org/10.1007/s00146-020-01129-1
