Introduction

Recent decades have seen a substantial increase in human interaction with artificial entities. Robots manufacture goods (Shneier & Bostelman, 2015), care for the elderly (van Wynsberghe, 2013), and manage our homes (Young et al., 2009). Simulations are used for entertainment (Granic et al., 2014), military training (Cioppa et al., 2004), and scientific research (Terstappen & Reggiani, 2001). Further breakthroughs in artificial intelligence or space exploration may facilitate a vast proliferation of artificial entities (Reese, 2018; Baum et al., 2019; Anthis & Paez, 2021; Bostrom, 2003). Their increasing numbers and ubiquity raise an important question: do these entities warrant moral consideration?

Policy-makers have begun to engage with this question. A 2006 paper commissioned by the U.K. Office of Science argued that robots could be granted rights within 50 years (BBC, 2006). South Korea proposed a “robot ethics charter” in 2007 (Yoon-mi, 2010). Paro, a type of care robot in the shape of a seal, was granted a “koseki” (household registry) in Nanto, Japan in 2010 (Robertson, 2014). The European Parliament passed a resolution in 2017 suggesting the creation of “a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons” (European Parliament Committee on Legal Affairs, 2017). In the same year, a robot named Sophia was granted citizenship in Saudi Arabia (Hanson Robotics, 2018) and a chatbot on the messaging app Line, named Shibuya Mirai, was granted residence by the city of Tokyo in Japan (Microsoft Asia News Center, 2017).

Policy decisions relating to the rights of artificial entities have been reported in the media (Browne, 2017; Maza, 2017; Reynolds, 2018; Weller, 2020), discussed by the public,[Footnote 1] and critiqued by academics (Open Letter, 2018). The moral consideration of artificial entities has also been explored extensively in science fiction (McNally & Inayatullah, 1988, p. 128; Petersen, 2007, pp. 43–4; Robertson, 2014, pp. 573–4; Inyashkin, 2016; Kaminska, 2016; Arnold & Gough, 2017; Gunkel, 2018a, pp. 13–8; Hallqvist, 2018; Kunnari, 2020). People for the Ethical Treatment of Reinforcement Learners has explicitly advocated for the moral consideration of artificial entities that can suffer (PETRL, 2015), and The American Society for the Prevention of Cruelty to Robots has done so for those that are “self-aware” (Anderson, 2015).

Scholars often conclude that artificial entities with the capacity for positive and negative experiences (i.e. sentience) will be created, or are at least theoretically possible (see, for example, Thompson, 1965; Aleksander, 1996; Blackmore, 1999; Buttazzo, 2001; Franklin, 2003; Harnad, 2003; Holland, 2007; Chrisley, 2008; Seth, 2009; Haikonen, 2012; Bringsjord et al., 2015; Reese, 2018; Angel, 2019; Anthis & Paez, 2021). Surveys of cognitive scientists (Francken et al., 2021) and artificial intelligence researchers (McDermott, 2007) suggest that many are open to this possibility. Tomasik (2011), Bostrom (2014), Gloor (2016a), and Sotala and Gloor (2017) argue that the insufficient moral consideration of sentient artificial entities, such as the subroutines or simulations run by a future superintelligent AI, could lead to astronomical amounts of suffering. Kelley and Atreides (2020) have already proposed a “laboratory process for the assessment and ethical treatment of Artificial General Intelligence systems that could be conscious and have subjective emotional experiences.”

There has been limited synthesis of relevant literature to date. Gunkel (2018a) provides the most thorough review to set up his argument about “robot rights,” categorizing contributions into four modalities: “Robots Cannot Have Rights; Robots Should Not Have Rights,” “Robots Can Have Rights; Robots Should Have Rights,” “Although Robots Can Have Rights, Robots Should Not Have Rights,” and “Even if Robots Cannot Have Rights, Robots Should Have Rights.” Gunkel critiques each of these perspectives, advocating instead for “thinking otherwise” via deconstruction of the questions of whether robots can and should have rights. Bennett and Daly (2020) more briefly summarize the literature on these two questions, adding a third: “will robots be granted rights?” They focus on legal rights, especially legal personhood and intellectual property rights. Tavani (2018) briefly reviews the usage of “robot” and “rights,” the criteria necessary for an entity to warrant moral consideration, and whether moral agency is a prerequisite for moral patiency, in order to explain a new argument that social robots warrant moral consideration.

However, those reviews have not used systematic methods to comprehensively identify relevant publications, nor quantitative methods of analysis, making it difficult to extract general trends and themes.[Footnote 2] Do scholars tend to believe that artificial entities warrant moral consideration? Are views split along geographical and disciplinary lines? Which nations, disciplines, and journals most frequently provide contributions to the discussion? Using a systematic search methodology, we address these questions, provide an overview of the literature, and suggest opportunities for further research. Systematic reviews are common in social science and clinical research (see, for example, Higgins & Green, 2008; Campbell Collaboration, 2014) and have recently been used in philosophy and ethics research (Nill & Schibrowsky, 2007; Mittelstadt, 2017; Hess & Fore, 2017; Saltz & Dewar, 2019; Yi et al., 2019).

Previous reviews have also tended to focus on “robot rights.” Our review has a broader scope. We use the term “artificial entities” to refer to all manner of machines, computers, artificial intelligences, simulations, software, and robots created by humans or other entities. We use the phrase “moral consideration” of artificial entities to collectively refer to a number of partly overlapping discussions: whether artificial entities are “moral patients,” deserve to be included in humanity’s moral circle, should be granted “rights,” or should otherwise be granted consideration. Moral consideration does not necessarily imply the attribution of intrinsic moral value. While not the most common,[Footnote 3] these terms were chosen for their breadth.

Methodology

Four scientific databases (Scopus, Web of Science, ScienceDirect, and the ACM Digital Library) were searched systematically for relevant items in August and September 2020. Google Scholar was also searched, since this search engine is sometimes more comprehensive, particularly in finding the grey literature that is essential to cataloguing an emerging field (Martín-Martín et al., 2019).

Given that there is no single, established research field examining the moral consideration of artificial entities, multiple searches were conducted to identify relevant items; a total of 2692 non-unique items were screened for inclusion (see Table 1). After exclusions (see criteria below) and removal of duplicates, 294 relevant research or discussion items were included (see Table 2; see the “Appendix” for item summaries and analysis).

Table 1 Initial results returned for screening, by search terms and search location
Table 2 Included items, by search terms and search location

For the database searches, the titles and abstracts of returned items were reviewed to determine relevance. For the Google Scholar searches, given the low relevance of some returned results, review was limited to the first 200 results, similar to the approach of Mittelstadt (2017).

Common reasons for exclusion were that the item:

  • Did not discuss the moral consideration of artificial entities (e.g. discussed whether artificial entities could be moral agents but not whether they could be moral patients[Footnote 4]),

  • Mentioned the topic only very briefly (e.g. only as a thought-provoking issue adjacent to the main focus of the article), or

  • Was not in the format of an academic article, book, conference paper, or peer-reviewed magazine contribution (e.g. it was published as a newspaper op-ed or blog post[Footnote 5]).

The findings are analyzed qualitatively and discussed in the sections below. Results are also categorized and scored along the following dimensions:

  • Categories of the search terms that identified each item, which reflect the language used by the authors; the three categories used are “rights,” “moral,” and “suffering” searches,

  • Categories of academic disciplines of the lead author of each included item,

  • Categories of primary frameworks or moral schemas used, similar to the approach of Hess and Fore (2017), and

  • A score representing the author’s position on granting moral consideration to artificial entities on a scale from 1 (argues forcefully against consideration, e.g. suggesting that artificial beings should never be considered morally) to 5 (argues forcefully for consideration, e.g. suggesting that artificial beings deserve moral consideration now).

In addition to the discussion below, the “Appendix” includes a summary of each item and the full results of the categorization and scoring analyses.
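To make the categorization and scoring procedure concrete, the sketch below shows one way such coded records could be aggregated into an overall average consideration score and per-discipline summaries. This is a minimal illustration only: the records, field names, and values are hypothetical placeholders, not items from the review (the actual coded data are reported in the “Appendix”).

```python
# Minimal sketch of aggregating coded items (hypothetical data, for illustration only).
from statistics import mean, stdev
from collections import defaultdict

# Each record: (search term category, lead author's discipline,
#               primary framework, consideration score on the 1-5 scale above).
items = [
    ("rights",    "law",                     "legal precedent",  4),
    ("moral",     "philosophy or ethics",    "deontological",    5),
    ("moral",     "psychology",              "mixed",            3),
    ("suffering", "other or unidentifiable", "consequentialist", 5),
    ("rights",    "computer science",        "mixed",            2),
]

# Overall average consideration score across all scored items.
scores = [score for *_, score in items]
print(f"Mean consideration score: {mean(scores):.2f} (SD {stdev(scores):.2f}, n = {len(scores)})")

# Item counts and average scores grouped by the lead author's discipline.
by_discipline = defaultdict(list)
for _, discipline, _, score in items:
    by_discipline[discipline].append(score)
for discipline, group in sorted(by_discipline.items()):
    print(f"{discipline}: {len(group)} item(s), mean score {mean(group):.2f}")
```

The same grouping could be applied to the search term categories or frameworks listed above to reproduce the kinds of breakdowns discussed in the Results section.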

Results

Descriptive Statistics

Included items were published in 106 different journals. Four journals published more than five of the included items: Ethics and Information Technology (9% of items), AI and Society (4%), Philosophy and Technology (2%), and Science and Engineering Ethics (2%). Additionally, 15% of items were books or chapters (only one book focused solely on this topic was identified, Gunkel, 2018a),[Footnote 6] 13% were entries in a report of a conference, workshop, or symposium (often hosted by the Association for Computing Machinery or the Institute of Electrical and Electronics Engineers), and 12% were not published in any journal, magazine, or book.

The included items were produced by researchers affiliated with institutions based in 43 countries. Only five countries produced more than ten of the identified items: the United States (36% of identified items), the United Kingdom (15%), the Netherlands (7%), Australia (5%), and Germany (4%). According to Google Scholar, included items have been cited 5992 times (excluding one outlier with 2513 citations, Bostrom, 2014); 41% of these citations are of items produced in the US.[Footnote 7]

The oldest included item identified by the searches was McNally and Inayatullah (1988), though included items cited articles from as early as 1964 as offering relevant comments (Freitas, 1985; Lehman-Wilzig, 1981; Putman, 1964; Stone, 1974). The study of robot ethics (now called “roboethics” by some; Veruggio & Abney, 2012, pp. 347–8) grew in the early 2000s (Malle, 2016). Levy (2009), Torrance (2013, p. 403), and Gunkel (2018c, p. 87) describe the moral consideration of artificial entities as a small and neglected sub-field. However, the results of this literature review suggest that academic interest in the moral consideration of artificial entities is growing exponentially (see Fig. 1).

Fig. 1 Cumulative total of included items, by date of publication

As shown in Table 3, the most common academic disciplines of contributing scholars are philosophy or ethics, law, computer engineering or computer science, and communication or media. We focus on the primary disciplines of scholars, rather than of publications, because so many of the publications are interdisciplinary.

Table 3 Items and citations by the academic discipline of the lead author

As shown in Table 4, many scholars contributing to the discussion do not adopt a single, clear moral schema, instead focusing on legal precedent or empirical evidence of attitudes towards artificial entities, or simply summarizing the views of previous scholars (e.g. Weng et al., 2009, p. 267; Gray & Wegner, 2012, pp. 125–30).

Table 4 Items and citations by the primary framework or moral schema used

Many scholars use consequentialist, deontological, or virtue ethicist moral frameworks, or a mixture of these. These scholars defend various criteria as crucial for determining whether artificial entities warrant moral consideration. Sentience or consciousness seem to be most frequently invoked (Andreotta, 2020; Bostrom, 2014; Himma, 2003; Johnson & Verdicchio, 2018; Mackenzie, 2014; Mosakas, 2020; Tomasik, 2014; Torrance, 2008; Yampolskiy, 2017), but other proposed criteria include the capacities for interests (Basl, 2014; Neely, 2014), autonomy (Calverley, 2011; Gualeni, 2020), self-control (Wareham, 2013), rationality (Laukyte, 2017), integrity (Gualeni, 2020), dignity (Bess, 2018), moral reasoning (Malle, 2016), and virtue (Gamez et al., 2020).

Some of the most influential scholars propose more novel ethical frameworks. Coeckelbergh (2010a, 2010b, 2014, 2018, 2020) and Gunkel (2013, 2014, 2015, 2018a, 2018b, 2018c, 2018d, 2019a, 2019b, 2020b) encourage a social-relational framework for discussing the moral consideration of artificial entities. This approach grants moral consideration on the basis of how an entity “is treated in actual social situations and circumstances” (Gunkel, 2018a, p. 10). Floridi (1999, 2002, 2005) encourages “information ethics,” where “[a]ll entities, qua informational objects, have an intrinsic moral value.” Though less widely cited, Danaher’s (2020) theory of “ethical behaviorism” and Tavani’s (2018) discussion of “being-in-the-technological-world” arguably offer alternative moral frameworks for assessing whether artificial entities warrant moral consideration. Non-Western frameworks also differ in their implications for the moral consideration of artificial entities (Gunkel, 2020a; McNally & Inayatullah, 1988).

Focus and Terminology

Definitions of the widely-used term “robot” are varied and often vague (Lin et al., 2011, pp. 943–4; Robertson, 2014, p. 574; Tavani, 2018, pp. 2–3; Gunkel, 2018a, pp. 14–26; Beno, 2019, pp. 2–3). It can be defined broadly, such as “a machine that resembles a living creature in being capable of moving independently (as by walking or rolling on wheels) and performing complex actions (such as grasping and moving objects)” (Merriam-Webster, 2008). More narrowly, to many people, the term robot implies a humanoid appearance, or at least humanoid functions and behaviors (Brey & Søraker, 2009; Leenes & Lucivero, 2014; Rademeyer, 2017). This terminology seems suboptimal, given that the forms of artificial sentience that seem most at risk of experiencing intense suffering on a large scale in the long-term future may not have humanoid characteristics or behaviors; they may even exist entirely within computers, not having any embodied form, human or otherwise.[Footnote 8] Other terms used by scholars include “artificial beings” (Gualeni, 2020), “artificial consciousness” (Basl, 2013b), “artificial entities” (Gunkel, 2015), “artificial intelligence” (Ashrafian, 2015b), “artificial life” (Sullins, 2005), “artificial minds” (Jackson Jr, 2018a), “artificial person” (Michalski, 2018), “artificial sentience” (Ziesche & Yampolskiy, 2019), “machines” (Church, 2019), “automata” (Miller, 2015), “computers” (Drozdek, 1994), “simulations” (Bostrom, 2014), and “subroutines” (Winsby, 2013). Alternative adjectives such as “synthetic,” “electronic,” and “digital” are also sometimes used to replace “artificial.”[Footnote 9]

Relevant discussion has often focused on the potential “rights” of artificial entities (Tavani, 2018, pp. 2–7; Gunkel, 2018a, pp. 26–33). There has been some debate over whether “rights” is the most appropriate term, given its ambiguity and that legal and moral rights are each only one mechanism for moral consideration (Kim & Petrina, 2006, p. 87; Tavani, 2018, pp. 4–5; Cappuccio et al., 2020, p. 4). Other scholars consider whether artificial entities can be “moral patients,” granted “moral consideration,” or included in the “moral circle” (Cappuccio et al., 2020; Danaher, 2020; Küster & Świderska, 2016). Some scholars use terminology that focuses on the suffering of specific forms of artificial sentience: “mind crime” against simulations (Bostrom, 2014), “suffering subroutines” (Tomasik, 2011), or “risks of astronomical future suffering” (Tomasik, 2011) and the derivative term “s-risks.”

More items were identified by the “rights” and “moral” search terms than by the “suffering” search terms (see Table 5). Although 31% of the items identified by “rights” search terms were also identified by “moral” search terms, only 12% of the results from the “suffering” search terms were also identified by “rights” or “moral” search terms. Additionally, excluding one outlier (Bostrom, 2014), items identified via the “suffering” search terms had a lower average citation count (8) than items identified via “moral” (24) or “rights” (20) search terms. If the outlier is included, then the average for the “suffering” search terms is over ten times larger (108), and these items comprise 32% of the total citations (see “Appendix”).

Table 5 Items and citations by search term category

The terminology used varied by the authors’ academic discipline and moral framework. For example, the items by legal scholars were mostly identified by “rights” search terms (80%), while the items by psychologists were mostly identified by “moral” search terms (90%). In the “other or unidentifiable” category, 44% were identified via “suffering” search terms; these contributions were often by the Center on Long-Term Risk and other researchers associated with the effective altruism community.[Footnote 10] An unusually high proportion of “consequentialist” items were identified by “suffering” search terms (50%). None of the “information ethics” items were identified via “rights” search terms, whereas an unusually high proportion of the “legal precedent” items were identified this way (94%).

The primary questions addressed in the identified literature are: (1) Can or could artificial entities ever be granted moral consideration? (2) Should artificial entities be granted moral consideration?[Footnote 11] The authors use philosophical arguments, ethical arguments, and arguments from legal precedent. Some motivate their arguments with concern for the artificial entities themselves; others argue in favor of the moral consideration of artificial entities because of positive indirect effects on human society, particularly on moral character (Levy, 2009; Davies, 2011; Darling, 2016, p. 215). Others argue against the moral consideration of artificial entities because of potentially damaging effects on human society (Bryson, 2018; Gerdes, 2016). Some items, especially those identified via the “moral” search terms, focus on a third question: (3) What attitudes do humans currently have vis-à-vis artificial entities, and what predicts these attitudes?[Footnote 12] A small number of contributions, especially those identified via the “suffering” search terms, also explicitly discuss (4) What are the best approaches to ensuring that the suffering of artificial sentience is minimized or that other interests of artificial entities are protected (e.g. Ashrafian, 2015a; Gloor, 2016b)? Others ask (5) Should humanity avoid creating machines that are complex or intelligent enough to warrant moral consideration (e.g. Basl, 2013a; Beckers, 2018; Bryson, 2018; Hanák, 2019; Johnson & Verdicchio, 2018; McLaughlin & Rose, 2018; Tomasik, 2013)?

Dismissal of the Importance of Moral Consideration of Artificial Entities

Calverley’s (2011) chapter in a book on Machine Ethics opens with the statement that, “[t]o some, the question of whether legal rights should, or even can, be given to machines is absurd on its face. How, they ask, can pieces of metal, silicon, and plastic, have any attributes that would allow society to assign it any rights at all.” Referring to his 1988 essay with Phil McNally, Sohail Inayatullah (2001) notes that he received substantial criticism from colleagues for writing about the topic of robot rights: “Pakistani colleagues have mocked me saying that Inayatullah is worried about robot rights while we have neither human rights, economic rights or rights to our own language and local culture… Others have refused to enter in collegial discussions on the future with me as they have been concerned that I will once again bring up the trivial.”

Some scholars dismiss discussion of the moral consideration of artificial entities as premature or frivolous, a distraction from concerns that they view as more pressing, usually concerns about the near-term consequences of developments in narrow artificial intelligence and social robots. For example, Birhane and van Dijk (2020) argue that, “the ‘robot rights’ debate is focused on first world problems, at the expense of urgent ethical concerns, such as machine bias, machine elicited human labour exploitation, and erosion of privacy all impacting society’s least privileged individuals.” Cappuccio et al. (2020, p. 3) suggest that arguments in favor of moral consideration for artificial entities that refer to “objective qualities or features, such as freedom of will or sentience” are “problematic because existing social robots are too unsophisticated to be considered sentient”; robots “do not display—and will hardly acquire any time soon—any of the objective cognitive prerequisites that could possibly identify them as persons or moral patients (e.g., self-awareness, autonomous decision, motivations, preferences).” This resembles critiques offered by Coeckelbergh (2010b) and Gunkel (2018c). McLaughlin and Rose (2018) refer to such “objective qualities” but note that, “[r]obot-rights seem not to be much of an issue” in roboethics because “the robots in question will be neither sentient nor genuinely intelligent… for the foreseeable future.”

Gunkel (2018a, pp. 33–44) provides a number of other examples of critics arguing that discussion of the moral consideration of artificial entities is “ridiculous,” as well as cases where it is “given some brief attention only to be bracketed or carefully excluded as an area that shall not be given further thought” or “included by being pushed to the margins of proper consideration.”

Despite these attitudes, our analysis shows that academic discussion of the moral consideration of artificial entities is increasing (see Fig. 1). This provides evidence that many scholars believe this topic is worth addressing. Indeed, Ziesche and Yampolskiy (2018, p. 2) have proposed the development and formalization of a field of “AI welfare science.” They suggest that, “[t]he research should target both aspects for sentient digital minds not to suffer anymore, but also for sentient and non-sentient digital minds not to cause suffering of other sentient digital minds anymore.”

Moreover, these dismissals do not engage with the long-term moral risks discussed in items identified via the “suffering” search terms. Wright (2019) briefly considers the “longer-term” consequences of granting “constitutional rights” to “advanced robots,” noting that doing so might spread resources thinly, but this is one of the only items not identified by the “suffering” search terms that explicitly considers the long-term future.[Footnote 13]

Attitudes Towards the Moral Consideration of Artificial Entities Among Contributing Scholars

We might expect different moral frameworks to have radically different implications for attitudes towards the appropriate treatment of artificial entities. Even where scholars share similar moral frameworks, their overall attitudes sometimes differ due to varying timeframes of evaluation or estimations of the likelihood that artificial entities will develop relevant capacities, among other reasons. For example, many scholars use sentience or consciousness as the key criterion determining whether an artificial entity is worthy of moral consideration, and most of these scholars remain open to the possibility that these entities will indeed become sentient in the future. Bryson et al. (2017) view consciousness as an important criterion but note that, “there is no guarantee or necessity that AI [consciousness] will be developed.”

The average consideration score (on a scale of 1 to 5) was 3.8 (standard deviation of 0.86) across the 192 items for which a score was assigned, indicating widespread, albeit not universal, agreement among scholars that at least some artificial entities could warrant moral consideration in the future, if not also the present. Where there is enough data to make meaningful comparisons, there is not much difference in average consideration score by country, academic discipline, or the primary framework or moral schema used (see “Appendix”).

However, our search terms will have captured only those scholars who deem the subject worthy of at least a passing mention. Scholars interested in roboethics who consider the subject so “absurd,” “ridiculous,” or “fanciful,” or simply so irrelevant to their own work, that they do not refer to the relevant literature will not have been identified. Bryson’s (2010) article “Robots Should Be Slaves,” which argues against the moral consideration of current robots and against creating robots that can suffer, though cited 183 times, was not identified by the searches conducted here because of the terminology used in the article.

Individuals in disciplines associated with technical research on AI and robotics may be, on average, more hostile to granting moral consideration to artificial entities than researchers from other disciplines. We found that computer engineers and computer scientists had a lower average consideration score (2.6) than scholars from other disciplines. Additionally, there are many roboticist and AI researcher signatories of the “Open Letter to the European Commission Artificial Intelligence and Robotics” (2018), which objects to a proposal of legal personhood for artificial entities, and when discussion of robot rights has gained media attention, many of the vocal critics appear to have been associated with computer engineering or robotics (Randerson, 2007; Yoon-mi, 2010; Gunkel, 2018a, pp. 35–6). Relatedly, Zhang and Dafoe (2019) found in their US survey that respondents with computer science or engineering degrees “rate all AI governance challenges as less important” than other respondents. In this sense, resistance to the moral consideration of artificial entities may fall under a general category of “AI governance” or “AI ethics,” which technical researchers may see as less important than other stakeholders do. These technical researchers may not disagree with the proponents of moral consideration of artificial entities; they may simply have a different focus, such as incremental technological progress rather than theorizing about societal trajectories.

Empirical Research on Attitudes Towards the Moral Consideration of Artificial Entities

Five papers (Hughes, 2005; Nakada, 2011, 2012; Spence et al., 2018; Lima et al., 2020) included surveys testing whether individuals believe that artificial entities might plausibly warrant moral consideration in the future. Agreement with statements favorable to future moral consideration varied from 9.4% to 70%; given the variety of question wordings, participant nationalities, and sampling methods (students, online participants, or members of the World Transhumanist Association), general trends are difficult to extract.

There are a number of surveys and experiments on attitudes towards current artificial entities. Some of this research provides evidence that people empathize with artificial entities and respond negatively to actions that appear to harm or insult them (Darling, 2016; Freier, 2008; Rosenthal-von der Pütten et al., 2013; Suzuki et al., 2015). Bartneck and Keijsers (2020) found no significant difference between participants’ ratings of the moral acceptability of abuse towards a human or a robot, but other researchers have found evidence that current artificial entities are granted less moral consideration than humans (Slater et al., 2006; Gray et al., 2007; Bartneck & Hu, 2008; Küster & Świderska, 2016; Akechi et al., 2018; Sommer et al., 2019; Nijssen et al., 2019; Küster & Świderska, 2020).

Studies have found that people are more willing to grant artificial entities moral consideration when they have a humanlike appearance (Küster et al., 2020; Nijssen et al., 2019), have high emotional (Nijssen et al., 2019; Lee et al., 2019) or mental capacities (Gray & Wegner, 2012; Nijssen et al., 2019; Piazza et al., 2014; Sommer et al., 2019), verbally respond to harm inflicted on them (Freier, 2008), or seem to act autonomously (Chernyak & Gary, 2016). There is also evidence that people in individual rather than group settings (Hall, 2005), with prior experience interacting with robots (Spence et al., 2018), or presented with information promoting support for robot rights, such as “examples of non-human entities that are currently granted legal personhood” (Lima et al., 2020), are more willing to grant artificial entities moral consideration. Other studies have examined the conditions under which people are most willing to attribute high mental capacities to artificial entities (Briggs et al., 2014; Fraune et al., 2017; Gray & Wegner, 2012; Küster & Świderska, 2020; Küster et al., 2020; McLaughlin & Rose, 2018; Świderska & Küster, 2018, 2020; Wallkötter et al., 2020; Wang & Krumhuber, 2018; Ward et al., 2013; Wortham, 2018).

Limitations

Given that interest in this topic is growing exponentially, this review inevitably misses many recent relevant contributions. For example, a Google Scholar search for “robot rights” in July 2021, limited to publications from 2021, returns 152 results, including a qualitative review (Gordon & Pasvenskiene, 2021). The chosen search terms likely miss some relevant items: they require that “rights,” “moral,” or “suffering” issues be discussed explicitly, so discussion that addresses these issues only implicitly (e.g. Elder, 2017) may not have been included. This review’s exclusion criteria maintain coherence and concision but limit its scope. Future reviewers could adopt different foci, such as including discussion of the moral agency of artificial entities or contributions not using academic formats.

Concluding Remarks

Many scholars lament that the moral consideration of artificial entities is discussed infrequently and not viewed as a proper object of academic inquiry. This literature review suggests that these perceptions are no longer entirely accurate. The number of publications is growing exponentially, and most scholars view artificial entities as potentially warranting moral consideration. Still, important gaps remain, suggesting promising opportunities for further research, and the field remains small overall, with only 294 items identified in this review.

These discussions of legal rights, moral consideration, empirical research on human attitudes, and theoretical exploration of the risks of astronomical suffering among future artificial entities have largely taken place separately from each other. Further contributions should seek to better integrate these discussions. The analytical frameworks used in one topic may offer valuable contributions to another. For example, what do legal precedent and empirical psychological research suggest are the most likely outcomes for future artificial sentience (as an example of studying likely technological outcomes, see Reese & Mohorčich, 2019)? What do virtue ethics and rights theories suggest is desirable in these plausible future scenarios?

Despite interest in the topic from policy-makers and the public, there is a notable lack of empirical data about attitudes towards the moral consideration of artificial entities. This leaves scope for surveys and focus groups on a far wider range of predictors of attitudes, experiments that test the effect of various messages and content on these attitudes, and qualitative and computational text analysis of news articles, opinion pieces, and science fiction books and films that touch on these topics. There are also many theoretically interesting questions to be asked about how these attitudes relate to other facets of human society, such as human in-group-out-group and human-animal interactions.