Abstract
Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There has been little synthesis of the research on this topic to date. In this literature review, we identify 294 relevant research or discussion items. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies: some scholars are concerned with the effects on artificial entities themselves, others with the effects on human society. Beyond the conventional consequentialist, deontological, and virtue-ethics frameworks, some scholars encourage “information ethics” and “social-relational” approaches, though there are opportunities for more in-depth ethical research on the nuances of moral consideration of artificial entities. Relevant empirical data collection is limited, consisting primarily of a few psychological studies on the current moral and social attitudes of humans towards robots and other artificial entities. This suggests an important gap for psychological, sociological, economic, and organizational research on how artificial entities will be integrated into society and the factors that will determine how their interests are considered.
Introduction
Recent decades have seen a substantial increase in human interaction with artificial entities. Robots manufacture goods (Shneier & Bostelman, 2015), care for the elderly (van Wynsberghe, 2013), and manage our homes (Young et al., 2009). Simulations are used for entertainment (Granic et al., 2014), military training (Cioppa et al., 2004), and scientific research (Terstappen & Reggiani, 2001). Further breakthroughs in artificial intelligence or space exploration may facilitate a vast proliferation of artificial entities (Reese, 2018; Baum et al., 2019; Anthis & Paez, 2021; Bostrom, 2003). Their increasing numbers and ubiquity raise the important question of whether they warrant moral consideration.
Policy-makers have begun to engage with this question. A 2006 paper commissioned by the U.K. Office of Science argued that robots could be granted rights within 50 years (BBC, 2006). South Korea proposed a “robot ethics charter” in 2007 (Yoon-mi, 2010). Paro, a type of care robot in the shape of a seal, was granted a “koseki” (household registry) in Nanto, Japan in 2010 (Robertson, 2014). The European Parliament passed a resolution in 2017 suggesting the creation of “a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons” (European Parliament Committee on Legal Affairs, 2017). In the same year, a robot named Sophia was granted citizenship in Saudi Arabia (Hanson Robotics, 2018) and a chatbot on the messaging app Line, named Shibuya Mirai, was granted residence by the city of Tokyo in Japan (Microsoft Asia News Center, 2017).
Policy decisions relating to the rights of artificial entities have been reported in the media (Browne, 2017; Maza, 2017; Reynolds, 2018; Weller, 2020), discussed by the public (Footnote 1), and critiqued by academics (Open Letter, 2018). The moral consideration of artificial entities has also been explored extensively in science fiction (McNally & Inayatullah, 1988, p. 128; Petersen, 2007, pp. 43–4; Robertson, 2014, pp. 573–4; Inyashkin, 2016; Kaminska, 2016; Arnold & Gough, 2017; Gunkel, 2018a, pp. 13–8; Hallqvist, 2018; Kunnari, 2020). People for the Ethical Treatment of Reinforcement Learners has explicitly advocated for the moral consideration of artificial entities that can suffer (PETRL, 2015), and The American Society for the Prevention of Cruelty to Robots has done so for those that are “self-aware” (Anderson, 2015).
Scholars often conclude that artificial entities with the capacity for positive and negative experiences (i.e. sentience) will be created, or are at least theoretically possible (see, for example, Thompson, 1965; Aleksander, 1996; Buttazzo, 2001; Blackmore, 1999; Franklin, 2003; Harnad, 2003; Holland, 2007; Chrisley, 2008; Seth, 2009; Haikonen, 2012; Bringsjord et al., 2015; Reese, 2018; Anthis & Paez, 2021; Angel, 2019). Surveys of cognitive scientists (Francken et al., 2021) and artificial intelligence researchers (McDermott, 2007) suggest that many are open to this possibility. Tomasik (2011), Bostrom (2014), Gloor (2016a), and Sotala and Gloor (2017) argue that the insufficient moral consideration of sentient artificial entities, such as the subroutines or simulations run by a future superintelligent AI, could lead to astronomical amounts of suffering. Kelley and Atreides (2020) have already proposed a “laboratory process for the assessment and ethical treatment of Artificial General Intelligence systems that could be conscious and have subjective emotional experiences.”
There has been limited synthesis of relevant literature to date. Gunkel (2018a) provides the most thorough review to set up his argument about “robot rights,” categorizing contributions into four modalities: “Robots Cannot Have Rights; Robots Should Not Have Rights,” “Robots Can Have Rights; Robots Should Have Rights,” “Although Robots Can Have Rights, Robots Should Not Have Rights,” and “Even if Robots Cannot Have Rights, Robots Should Have Rights.” Gunkel critiques each of these perspectives, advocating instead for “thinking otherwise” via deconstruction of the questions of whether robots can and should have rights. Bennett and Daly (2020) more briefly summarize the literature on these two questions, adding a third: “will robots be granted rights?” They focus on legal rights, especially legal personhood and intellectual property rights. Tavani (2018) briefly reviews the usage of “robot” and “rights,” the criteria necessary for an entity to warrant moral consideration, and whether moral agency is a prerequisite for moral patiency, in order to explain a new argument that social robots warrant moral consideration.
However, those reviews have not used systematic methods to comprehensively identify relevant publications or quantitative methods of analysis, making it difficult to extract general trends and themes (Footnote 2). Do scholars tend to believe that artificial entities warrant moral consideration? Are views split along geographical and disciplinary lines? Which nations, disciplines, and journals most frequently provide contributions to the discussion? Using a systematic search methodology, we address these questions, provide an overview of the literature, and suggest opportunities for further research. Common in social science and clinical research (see, for example, Higgins & Green, 2008; Campbell Collaboration, 2014), systematic reviews have recently been used in philosophy and ethics research (Nill & Schibrowsky, 2007; Mittelstadt, 2017; Hess & Fore, 2017; Saltz & Dewar, 2019; Yi et al., 2019).
Previous reviews have also tended to focus on “robot rights.” Our review has a broader scope. We use the term “artificial entities” to refer to all manner of machines, computers, artificial intelligences, simulations, software, and robots created by humans or other entities. We use the phrase “moral consideration” of artificial entities to collectively refer to a number of partly overlapping discussions: whether artificial entities are “moral patients,” deserve to be included in humanity’s moral circle, should be granted “rights,” or should otherwise be granted consideration. Moral consideration does not necessarily imply the attribution of intrinsic moral value. While not the most common (Footnote 3), these terms were chosen for their breadth.
Methodology
Four scientific databases (Scopus, Web of Science, ScienceDirect, and the ACM Digital Library) were searched systematically for relevant items in August and September 2020. Google Scholar was also searched, since this search engine is sometimes more comprehensive, particularly in finding the grey literature that is essential to cataloguing an emerging field (Martín-Martín et al., 2019).
Given that there is no single, established research field examining the moral consideration of artificial entities, multiple searches were conducted to identify relevant items; a total of 2692 non-unique items were screened for inclusion (see Table 1). After exclusions (see criteria below) and removal of duplicates, 294 relevant research or discussion items were included (see Table 2; see the “Appendix” for item summaries and analysis).
For the database searches, the titles and abstracts of returned items were reviewed to determine relevance. For the Google Scholar searches, given the low relevance of some returned results, review was limited to the first 200 results, similar to the approach of Mittelstadt (2017).
Common reasons for exclusion were that the item:

- Did not discuss the moral consideration of artificial entities (e.g. discussed whether artificial entities could be moral agents but not whether they could be moral patients (Footnote 4)),
- Mentioned the topic only very briefly (e.g. only as a thought-provoking issue adjacent to the main focus of the article), or
- Was not in the format of an academic article, book, conference paper, or peer-reviewed magazine contribution (e.g. it was published as a newspaper op-ed or blog post (Footnote 5)).
The findings are analyzed qualitatively and discussed in the sections below. Results are also categorized and scored along the following dimensions:

- The categories of search terms that identified each item, which reflect the language used by the authors; the three categories used are “rights,” “moral,” and “suffering” searches,
- The academic discipline of the lead author of each included item,
- The primary framework or moral schema used, similar to the approach of Hess and Fore (2017), and
- A score representing the author’s position on granting moral consideration to artificial entities, on a scale from 1 (argues forcefully against consideration, e.g. suggesting that artificial beings should never be considered morally) to 5 (argues forcefully for consideration, e.g. suggesting that artificial beings deserve moral consideration now); a minimal sketch of how such scores can be summarized follows below.
In addition to the discussion below, the “Appendix” includes a summary of each item and the full results of the categorization and scoring analyses.
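To illustrate how the categorization and scoring data can be summarized, the following is a minimal sketch in Python. The file name and column names (“included_items.csv”, “consideration_score”, “discipline”) are hypothetical stand-ins for the coding sheet described above, not the authors’ actual data files.

```python
import pandas as pd

# Load the coding sheet: one row per included item (hypothetical file).
items = pd.read_csv("included_items.csv")

# Mean and standard deviation of the 1-5 consideration score, ignoring
# items that were not assigned a score.
scored = items.dropna(subset=["consideration_score"])
print(f"scored items: {len(scored)}")
print(f"mean score: {scored['consideration_score'].mean():.2f}")
print(f"standard deviation: {scored['consideration_score'].std():.2f}")

# Average score broken out by the lead author's academic discipline.
by_discipline = (scored.groupby("discipline")["consideration_score"]
                       .agg(["count", "mean"])
                       .sort_values("mean"))
print(by_discipline)
```

The same data frame is reused in the sketches in the Results section below.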
Results
Descriptive Statistics
Included items were published in 106 different journals. Four journals published more than five of the included items: Ethics and Information Technology (9% of items), AI and Society (4%), Philosophy and Technology (2%), and Science and Engineering Ethics (2%). Additionally, 15% of items were books or chapters (only one book focused solely on this topic was identified, Gunkel, 2018a) (Footnote 6), 13% were entries in the proceedings of a conference, workshop, or symposium (often hosted by the Association for Computing Machinery or the Institute of Electrical and Electronics Engineers), and 12% were not published in any journal, magazine, or book.
The included items were produced by researchers affiliated with institutions based in 43 countries. Only five countries produced more than 10 of the identified items: the United States (36% of identified items), the United Kingdom (15%), the Netherlands (7%), Australia (5%), and Germany (4%). According to Google Scholar, included items have been cited 5992 times (excluding one outlier with 2513 citations, Bostrom, 2014); 41% of these citations are of items produced in the US (Footnote 7).
The oldest included item identified by the searches was McNally and Inayatullah (1988), though included items cited articles from as early as 1964 as offering relevant comments (Freitas, 1985; Lehman-Wilzig, 1981; Putman, 1964; Stone, 1974). The study of robot ethics (now called “roboethics” by some; Veruggio & Abney, 2012, pp. 347–8) grew in the early 2000s (Malle, 2016). Levy (2009), Torrance (2013, p. 403), and Gunkel (2018c, p. 87) describe the moral consideration of artificial entities as a small and neglected sub-field. However, the results of this literature review suggest that academic interest in the moral consideration of artificial entities is growing exponentially (see Fig. 1).
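The claim of exponential growth can be checked with a simple log-linear fit: if yearly publication counts grow exponentially, their logarithm is roughly linear in the year. A minimal sketch, reusing the hypothetical coding sheet from the Methodology sketch (the “year” column is likewise an assumption):

```python
import numpy as np

# Yearly counts of included items (hypothetical "year" column).
counts = items["year"].value_counts().sort_index()
counts = counts[counts > 0]  # log is undefined for zero-count years

# Fit log(count) ~ year; the slope implies a multiplicative growth rate.
slope, intercept = np.polyfit(counts.index, np.log(counts.values), 1)
print(f"implied annual growth rate: {np.exp(slope) - 1:.0%}")
```

A positive, stable slope over the period is consistent with the exponential trend shown in Fig. 1.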
As shown in Table 3, the most common academic disciplines of contributing scholars are philosophy or ethics, law, computer engineering or computer science, and communication or media. We focus on the primary disciplines of scholars, rather than of publications, because so many of the publications are interdisciplinary.
As shown in Table 4, many scholars contributing to the discussion do not adopt a single, clear moral schema, focusing instead on legal precedent or on empirical evidence of attitudes towards artificial entities, or simply summarizing the views of previous scholars (e.g. Weng et al., 2009, p. 267; Gray & Wegner, 2012, pp. 125–30).
Many scholars use consequentialist, deontological, or virtue-ethics moral frameworks, or a mixture of these. These scholars defend various criteria as crucial for determining whether artificial entities warrant moral consideration. Sentience or consciousness seems to be the most frequently invoked criterion (Andreotta, 2020; Bostrom, 2014; Himma, 2003; Johnson & Verdicchio, 2018; Mackenzie, 2014; Mosakas, 2020; Tomasik, 2014; Torrance, 2008; Yampolskiy, 2017), but other proposed criteria include the capacities for interests (Basl, 2014; Neely, 2014), autonomy (Calverley, 2011; Gualeni, 2020), self-control (Wareham, 2013), rationality (Laukyte, 2017), integrity (Gualeni, 2020), dignity (Bess, 2018), moral reasoning (Malle, 2016), and virtue (Gamez et al., 2020).
Some of the most influential scholars propose novel ethical frameworks. Coeckelbergh (2010a, 2010b, 2014, 2018, 2020) and Gunkel (2013, 2014, 2015, 2018a, 2018b, 2018c, 2018d, 2019a, 2019b, 2020b) encourage a social-relational framework to discuss the moral consideration of artificial entities. This approach grants moral consideration on the basis of how an entity “is treated in actual social situations and circumstances” (Gunkel, 2018a, p. 10). Floridi (1999, 2002, 2005) encourages “information ethics,” where “[a]ll entities, qua informational objects, have an intrinsic moral value.” Though less widely cited, Danaher’s (2020) theory of “ethical behaviorism” and Tavani’s (2018) discussion of “being-in-the-technological-world” arguably offer alternative moral frameworks for assessing whether artificial entities warrant moral consideration. Non-Western frameworks also differ in their implications for the moral consideration of artificial entities (Gunkel, 2020a; McNally & Inayatullah, 1988).
Focus and Terminology
Definitions of the widely used term “robot” are varied and often vague (Lin et al., 2011, pp. 943–4; Robertson, 2014, p. 574; Tavani, 2018, pp. 2–3; Gunkel, 2018a, pp. 14–26; Beno, 2019, pp. 2–3). The term can be defined broadly, for example as “a machine that resembles a living creature in being capable of moving independently (as by walking or rolling on wheels) and performing complex actions (such as grasping and moving objects)” (Merriam-Webster, 2008). More narrowly, to many people, the term robot implies a humanoid appearance, or at least humanoid functions and behaviors (Brey & Søraker, 2009; Leenes & Lucivero, 2014; Rademeyer, 2017). This terminology seems suboptimal, given that the forms of artificial sentience that seem most at risk of experiencing intense suffering on a large scale in the long-term future may not have humanoid characteristics or behaviors; they may even exist entirely within computers, not having any embodied form, human or otherwise (Footnote 8). Other terms used by scholars include “artificial beings” (Gualeni, 2020), “artificial consciousness” (Basl, 2013b), “artificial entities” (Gunkel, 2015), “artificial intelligence” (Ashrafian, 2015b), “artificial life” (Sullins, 2005), “artificial minds” (Jackson Jr, 2018a), “artificial person” (Michalski, 2018), “artificial sentience” (Ziesche & Yampolskiy, 2019), “machines” (Church, 2019), “automata” (Miller, 2015), “computers” (Drozdek, 1994), “simulations” (Bostrom, 2014), and “subroutines” (Winsby, 2013). Alternative adjectives such as “synthetic,” “electronic,” and “digital” are also sometimes used in place of “artificial” (Footnote 9).
Relevant discussion has often focused on the potential “rights” of artificial entities (Tavani, 2018, pp. 2–7; Gunkel, 2018a, pp. 26–33). There has been some debate over whether “rights” is the most appropriate term, given its ambiguity and that legal and moral rights are each only one mechanism for moral consideration (Kim & Petrina, 2006, p. 87; Tavani, 2018, pp. 4–5; Cappuccio et al., 2020, p. 4). Other scholars consider whether artificial entities can be “moral patients,” granted “moral consideration,” or included in the “moral circle” (Cappuccio et al., 2020; Danaher, 2020; Küster & Świderska, 2016). Some scholars use terminology that focuses on the suffering of specific forms of artificial sentience: “mind crime” against simulations (Bostrom, 2014), “suffering subroutines” (Tomasik, 2011), or “risks of astronomical future suffering” (Tomasik, 2011) and the derivative term “s-risks.”
More items were found by the “rights” and “moral” search terms than by the “suffering” search terms (see Table 5). Although 31% of the items identified by “rights” search terms were also identified by “moral” search terms, only 12% of the results from the “suffering” search terms were also identified by “rights” or “moral” search terms. Additionally, excluding one outlier, Bostrom (2014), items identified via the “suffering” search terms had a lower average citation count (8) than items identified via “moral” (24) or “rights” (20) search terms. If the outlier is included, the average for the “suffering” search terms is over ten times larger (108), and these items comprise 32% of the total citations (see “Appendix”).
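As a minimal sketch, the overlap percentages and the outlier-sensitive citation averages reported above could be computed from the same hypothetical coding sheet; the boolean indicator columns and the “citations” column are again assumed names, not the authors’ actual schema.

```python
# Items returned by each search-term category, assuming boolean
# indicator columns (hypothetical names) in the coding sheet.
rights = items[items["found_by_rights"]]
suffering = items[items["found_by_suffering"]]

# Share of "rights" items also found by "moral" searches, and share of
# "suffering" items also found by "rights" or "moral" searches.
print(f"{rights['found_by_moral'].mean():.0%} of rights items also in moral")
overlap = suffering["found_by_rights"] | suffering["found_by_moral"]
print(f"{overlap.mean():.0%} of suffering items also in rights/moral")

# Average citations for "suffering" items, with and without the single
# highest-cited item (the outlier).
print("with outlier:", suffering["citations"].mean())
print("without outlier:",
      suffering["citations"].drop(suffering["citations"].idxmax()).mean())
```

Storing one boolean indicator per search category makes such overlap queries one-line set operations rather than joins across separate result lists.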
The terminology used varied by the authors’ academic discipline and moral framework. For example, the items by legal scholars were mostly identified by “rights” search terms (80%), while the items by psychologists were mostly identified by “moral” search terms (90%). In the “other or unidentifiable” discipline category, 44% of items were identified via “suffering” search terms; these contributions were often by researchers at the Center on Long-Term Risk and others associated with the effective altruism community (Footnote 10). An unusually high proportion of “consequentialist” items were identified by “suffering” search terms (50%). None of the “information ethics” items were identified via “rights” search terms, whereas an unusually high proportion of the “legal precedent” items were identified this way (94%).
The primary questions addressed in the identified literature are: (1) Can or could artificial entities ever be granted moral consideration? (2) Should artificial entities be granted moral consideration? (Footnote 11) The authors use philosophical arguments, ethical arguments, and arguments from legal precedent. Some motivate their arguments with concern for the artificial entities themselves; others argue in favor of the moral consideration of artificial entities because of positive indirect effects on human society, particularly on moral character (Levy, 2009; Davies, 2011; Darling, 2016, p. 215). Others argue against the moral consideration of artificial entities because of potentially damaging effects on human society (Bryson, 2018; Gerdes, 2016). Some items, especially those identified via the “moral” search terms, focus on a third question: (3) What attitudes do humans currently have vis-à-vis artificial entities, and what predicts these attitudes? (Footnote 12) A small number of contributions, especially those identified via the “suffering” search terms, also explicitly discuss: (4) What are the best approaches to ensuring that the suffering of artificial sentience is minimized or that other interests of artificial entities are protected (e.g. Ashrafian, 2015a; Gloor, 2016b)? Others ask: (5) Should humanity avoid creating machines that are complex or intelligent enough that they warrant moral consideration (e.g. Basl, 2013a; Beckers, 2018; Bryson, 2018; Hanák, 2019; Johnson & Verdicchio, 2018; McLaughlin & Rose, 2018; Tomasik, 2013)?
Dismissal of the Importance of Moral Consideration of Artificial Entities
Calverley’s (2011) chapter in a book on Machine Ethics opens with the statement that, “[t]o some, the question of whether legal rights should, or even can, be given to machines is absurd on its face. How, they ask, can pieces of metal, silicon, and plastic, have any attributes that would allow society to assign it any rights at all.” Referring to his 1988 essay with Phil McNally, Sohail Inayatullah (2001) notes that he received substantial criticism from colleagues for writing about the topic of robot rights: “Pakistani colleagues have mocked me saying that Inayatullah is worried about robot rights while we have neither human rights, economic rights or rights to our own language and local culture… Others have refused to enter in collegial discussions on the future with me as they have been concerned that I will once again bring up the trivial.”
Some scholars dismiss discussion of the moral consideration of artificial entities as premature or frivolous, a distraction from concerns that they view as more pressing, usually concerns about the near-term consequences of developments in narrow artificial intelligence and social robots. For example, Birhane and van Dijk (2020) argue that, “the ‘robot rights’ debate is focused on first world problems, at the expense of urgent ethical concerns, such as machine bias, machine elicited human labour exploitation, and erosion of privacy all impacting society’s least privileged individuals.” Cappuccio et al. (2020, p. 3) suggest that arguments in favor of moral consideration for artificial entities that refer to “objective qualities or features, such as freedom of will or sentience” are “problematic because existing social robots are too unsophisticated to be considered sentient”; robots “do not display—and will hardly acquire any time soon—any of the objective cognitive prerequisites that could possibly identify them as persons or moral patients (e.g., self-awareness, autonomous decision, motivations, preferences).” This resembles critiques offered by Coeckelbergh (2010b) and Gunkel (2018c). McLaughlin and Rose (2018) refer to such “objective qualities” but note that, “[r]obot-rights seem not to be much of an issue” in roboethics because “the robots in question will be neither sentient nor genuinely intelligent… for the foreseeable future.”
Gunkel (2018a, pp. 33–44) provides a number of other examples of critics arguing that discussion of the moral consideration of artificial entities is “ridiculous,” as well as cases where it is “given some brief attention only to be bracketed or carefully excluded as an area that shall not be given further thought” or “included by being pushed to the margins of proper consideration.”
Despite these attitudes, our analysis shows that academic discussion of the moral consideration of artificial entities is increasing (see Fig. 1). This provides evidence that many scholars believe this topic is worth addressing. Indeed, Ziesche and Yampolskiy (2018, p. 2) have proposed the development and formalization of a field of “AI welfare science.” They suggest that, “[t]he research should target both aspects for sentient digital minds not to suffer anymore, but also for sentient and non-sentient digital minds not to cause suffering of other sentient digital minds anymore.”
Moreover, these dismissals do not engage with the long-term moral risks discussed in items identified via the “suffering” search terms. Wright (2019) briefly considers the “longer-term” consequences of granting “constitutional rights” to “advanced robots,” noting that doing so might spread resources thinly, but this is one of the only items not identified by the “suffering” search terms that explicitly considers the long-term future (Footnote 13).
Attitudes Towards the Moral Consideration of Artificial Entities Among Contributing Scholars
We might expect different moral frameworks to have radically different implications for attitudes towards the appropriate treatment of artificial entities. Even where scholars share similar moral frameworks, their overall attitudes sometimes differ due to varying timeframes of evaluation or estimations of the likelihood that artificial entities will develop relevant capacities, among other reasons. For example, many scholars use sentience or consciousness as the key criterion determining whether an artificial entity is worthy of moral consideration, and most of these scholars remain open to the possibility that these entities will indeed become sentient in the future. Bryson et al. (2017) view consciousness as an important criterion but note that, “there is no guarantee or necessity that AI [consciousness] will be developed.”
The average consideration score (on a scale of 1 to 5) was 3.8 (standard deviation of 0.86) across the 192 items for which a score was assigned, indicating widespread, albeit not universal, agreement among scholars that at least some artificial entities could warrant moral consideration in the future, if not also the present. Where there is enough data to make meaningful comparisons, there is not much difference in average consideration score by country, academic discipline, or the primary framework or moral schema used (see “Appendix”).
However, our search terms will have captured only those scholars who deem the subject worthy of at least a passing mention. Other scholars interested in roboethics who consider the subject so “absurd,” “ridiculous,” “fanciful,” or simply irrelevant to their own work that they do not refer to the relevant literature will not have been identified. Bryson’s (2010) article “Robots Should be Slaves,” which argues against the moral consideration of current robots and against creating robots that can suffer, though cited 183 times, was not identified by the searches conducted here because of the terminology used in the article.
Individuals in disciplines associated with technical research on AI and robotics may be, on average, more hostile to granting moral consideration to artificial entities than researchers from other disciplines. We found that computer engineers and computer scientists had a lower average consideration score (2.6) than scholars from other disciplines. Additionally, there are many roboticist and AI researcher signatories of the “Open Letter to the European Commission Artificial Intelligence and Robotics” (2018), which objects to a proposal of legal personhood for artificial entities, and when discussion of robot rights has gained media attention, many of the vocal critics appear to have been associated with computer engineering or robotics (Randerson, 2007; Yoon-mi, 2010; Gunkel, 2018a, pp. 35–6). Relatedly, Zhang and Dafoe (2019) found in their US survey that respondents with computer science or engineering degrees “rate all AI governance challenges as less important” than other respondents. In this sense, resistance to the moral consideration of artificial entities may fall under a general category of “AI governance” or “AI ethics,” which technical researchers may see as less important than other stakeholders do. These technical researchers may not disagree with the proponents of moral consideration of artificial entities; they may simply have a different focus, such as incremental technological progress rather than theorizing about societal trajectories.
Empirical Research on Attitudes Towards the Moral Consideration of Artificial Entities
Five papers (Hughes, 2005; Nakada, 2011, 2012; Spence et al., 2018; Lima et al., 2020) included surveys testing whether individuals believe that artificial entities might plausibly warrant moral consideration in the future. Agreement with statements favorable to future moral consideration varied from 9.4% to 70%; given the variety of question wordings, participant nationalities, and sampling methods (students, online participants, or members of the World Transhumanist Association), general trends are difficult to extract.
There are a number of surveys and experiments on attitudes towards current artificial entities. Some of this research provides evidence that people empathize with artificial entities and respond negatively to actions that appear to harm or insult them (Darling, 2016; Freier, 2008; Rosenthal-von der Pütten et al., 2013; Suzuki et al., 2015). Bartneck and Keijsers (2020) found no significant difference between participants’ ratings of the moral acceptability of abuse towards a human or a robot, but other researchers have found evidence that current artificial entities are granted less moral consideration than humans (Slater et al., 2006; Gray et al., 2007; Bartneck & Hu, 2008; Küster & Świderska, 2016; Akechi et al., 2018; Sommer et al., 2019; Nijssen et al., 2019; Küster & Świderska, 2020).
Studies have found that people are more willing to grant artificial entities moral consideration when they have humanlike appearance (Küster et al., 2020; Nijssen et al., 2019), have high emotional (Nijssen et al., 2019; Lee et al., 2019) or mental capacities (Gray & Wegner, 2012; Nijssen et al., 2019; Piazza et al., 2014; Sommer et al., 2019), verbally respond to harm inflicted on them (Freier, 2008), or seem to act autonomously (Chernyak & Gary, 2016). There is also evidence that people in individual rather than group settings (Hall, 2005), with prior experience interacting with robots (Spence et al., 2018), or presented with information promoting support for robot rights, such as “examples of non-human entities that are currently granted legal personhood” (Lima et al., 2020) are more willing to grant artificial entities moral consideration. Other studies have examined the conditions under which people are most willing to attribute high mental capacities to artificial entities (Briggs et al., 2014; Fraune et al., 2017; Gray & Wegner, 2012; Küster & Swiderska, 2020; Küster et al., 2020; McLaughlin & Rose, 2018; Swiderska & Küster, 2018, 2020; Wallkötter et al., 2020; Wang & Krumhuber, 2018; Ward et al., 2013; Wortham, 2018).
Limitations
Given that interest in this topic is growing exponentially, this review inevitably misses many recent relevant contributions. For example, a Google Scholar search for “robot rights” in July 2021 limited to 2021 returns 152 results, including a qualitative review (Gordon & Pasvenskiene, 2021). The chosen search terms likely miss some relevant items. They assume a level of abstraction at which “rights,” “moral,” or “suffering” issues are discussed explicitly; discussion that addresses these issues only implicitly (e.g. Elder, 2017) may not have been included. This review’s exclusion criteria maintain coherence and concision but limit its scope. Future reviewers could adopt different foci, such as including discussion of the moral agency of artificial entities or contributions not using academic formats.
Concluding Remarks
Many scholars lament that the moral consideration of artificial entities is discussed infrequently and not viewed as a proper object of academic inquiry. This literature review suggests that these perceptions are no longer entirely accurate. The number of publications is growing exponentially, and most scholars view artificial entities as potentially warranting moral consideration. Still, important gaps remain, suggesting promising opportunities for further research, and the field remains small overall, with only 294 items identified in this review.
Discussions of legal rights, moral consideration, empirical research on human attitudes, and theoretical exploration of the risks of astronomical suffering among future artificial entities have taken place largely separately from each other. Further contributions should seek to better integrate these discussions. The analytical frameworks used in one topic may offer valuable contributions to another. For example, what do legal precedent and empirical psychological research suggest are the most likely outcomes for future artificial sentience (as an example of studying likely technological outcomes, see Reese & Mohorčich, 2019)? What do virtue ethics and rights theories suggest is desirable in these plausible future scenarios?
Despite interest in the topic from policy-makers and the public, there is a notable lack of empirical data about attitudes towards the moral consideration of artificial entities. This leaves scope for surveys and focus groups on a far wider range of predictors of attitudes, experiments that test the effect of various messages and content on these attitudes, and qualitative and computational text analysis of news articles, opinion pieces, and science fiction books and films that touch on these topics. There are also many theoretically interesting questions to be asked about how these attitudes relate to other facets of human society, such as human in-group-out-group and human-animal interactions.
Data availability
The datasets generated and/or analyzed during the current study are available in the “Appendix”.
Code availability
Not applicable.
Change history
08 March 2022
A Correction to this paper has been published: https://doi.org/10.1007/s11948-022-00373-6
Notes
See, for example, the comments on Barsanti (2017).
Vakkuri and Abrahamsson (2018) use a systematic methodology to examine key words. However, only 83 academic papers are examined, with “rights” only being returned as a key word in two of the articles and “moral patiency” in another three.
See “Focus and terminology.” For examples of their use, see Küster and Swiderska (2020) and Coeckelbergh (2010b).
See, for example, Wallach et al. (2008). We recognize that some moral frameworks may see moral agency as an important criterion affecting moral consideration (see, for example, Wareham, 2013; Laukyte, 2017). However, this criterion seems less directly relevant, and including it in this literature review would have substantially widened the scope. Evaluations of agency and patiency may be correlated, but artificial entities may be assigned high agency alongside low patiency (Akechi et al., 2018; Gray et al., 2007). Lee et al. (2019) found that manipulations of patiency significantly affected perceived agency but that the reverse was not true. Items that explicitly discuss both agency and moral consideration were included (e.g. Johnson & Miller, 2008; Laukyte, 2017).
There are many contributions to this topic in other, less formal formats, such as blog posts. Given the huge amount of such literature, we excluded such items to provide a more coherent, interpretable literature review. Other thresholds, such as expert authorship, seemed less promising.
If Bostrom (2014) is included, then only 29% of citations were of items produced in the US, compared to 51% in the UK.
Presumably, sentient subroutines (as discussed in Tomasik, 2011) would not have humanoid shape, though some sentient simulations could have a humanoid shape in their simulated environment.
In the identified results, these adjectives tended to be used alongside “artificial” (see, for example, San José et al., 2016), though this may reflect the search terms used in this literature review. These adjectives were not included in the search terms because initial exploration suggested that the vast majority of returned results were irrelevant to the focus of this literature review.
See the section on “Empirical research on attitudes towards the moral consideration of artificial entities.”
References
Abdullah, S. M. (2018). Intelligent robots and the question of their legal rights: An Islamic perspective. Islam and Civilisational Renewal ICR Journal, 9(3), 394–397.
Adam, A. (2008). Ethics for things. Ethics and Information Technology, 10(2–3), 149–154. https://doi.org/10.1007/s10676-008-9169-3
Akechi, H., Kikuchi, Y., Tojo, Y., Hakarino, K., & Hasegawa, T. (2018). Mind perception and moral judgment in autism. Autism Research, 11(9), 1239–1244. https://doi.org/10.1002/aur.1970
Aleksander, I. (1996). Impossible minds: My neurons, my consciousness. Imperial College Press. https://doi.org/10.1142/p023
Al-Fedaghi, S. S. (2007). Personal information ethics. In M. Quigley (Ed.), Encyclopedia of information ethics and security (pp. 513–519). IGI Global.
Allen, T., & Widdison, R. (1996). Can computers make contracts? Harvard Journal of Law and Technology, 9, 25–52.
Anderson, B. (2015). This guy wants to save robots from abusive humans. Vice. https://www.vice.com/en_us/article/vvbxj8/the-plan-to-protect-robots-from-human-cruelty.
Anderson, D. L. (2012). Machine intentionality, the moral status of machines, and the composition problem. Philosophy and theory of artificial intelligence (pp. 321–333). Springer.
Andreotta, A. J. (2020). The hard problem of AI rights. AI & Society. https://doi.org/10.1007/s00146-020-00997-x
Angel, L. (2019). How to build a conscious machine. Routledge.
Anthis, J. R., & Paez, E. (2021). Moral circle expansion: A promising strategy to impact the far future. Futures, 130, 102756. https://doi.org/10.1016/j.futures.2021.102756
Armstrong, S., Sandberg, A., & Bostrom, N. (2012). Thinking inside the box: Controlling and using an oracle AI. Minds and Machines, 22(4), 299–324. https://doi.org/10.1007/s11023-012-9282-2.
Arnold, B. B., & Gough, D. (2017). Turing’s people: Personhood, artificial intelligence and popular culture. Canberra Law Review, 15, 1–37.
Asaro, P. M. (2001). Hans Moravec, Robot: Mere machine to transcendent mind, New York, NY: Oxford University Press, Inc., 1999, ix + 227 pp., $25.00 (cloth), ISBN 0-19-511630-5. Minds and Machines, 11(1), 143–147. https://doi.org/10.1023/A:1011202314316
Asekhauno, A., & Osemwegie, W. (2019). Genetic engineering, artificial intelligence, and natural man: An existential inquiry into being and right. Philosophical Investigations, 13(28), 181–193.
Ashrafian, H. (2015a). AIonAI: A humanitarian law of artificial intelligence and robotics. Science and Engineering Ethics, 21(1), 29–40. https://doi.org/10.1007/s11948-013-9513-9
Ashrafian, H. (2015b). Artificial intelligence and robot responsibilities: Innovating beyond rights. Science and Engineering Ethics, 21(2), 317–326. https://doi.org/10.1007/s11948-014-9541-0
Barfield, W. (2015). The law of looks and artificial bodies. Cyber-humans: Our future with machines (pp. 215–266). Cham: Springer. https://doi.org/10.1007/978-3-319-25050-2_7
Barfield, W. (2018). Liability for autonomous and artificially intelligent robots. Paladyn, Journal of Behavioral Robotics, 9(1), 193–203. https://doi.org/10.1515/pjbr-2018-0018
Barsanti, S. (2017). Saudi Arabia takes terrifying step to the future by granting a robot citizenship. A.V. Club. https://www.avclub.com/saudi-arabia-takes-terrifying-step-to-the-future-by-gra-1819888111
Bartneck, C., & Hu, J. (2008). Exploring the abuse of robots. Interaction Studies. Social Behaviour and Communication in Biological and Artificial Systems, 9(3), 415–433. https://doi.org/10.1075/is.9.3.04bar
Bartneck, C., & Keijsers, M. (2020). The morality of abusing a robot. Paladyn, Journal of Behavioral Robotics, 11(1), 271–283. https://doi.org/10.1515/pjbr-2020-0017
Basl, J. (2013a). The ethics of creating artificial consciousness. https://philarchive.org/archive/BASTEO-11
Basl, J. (2013b). What to do about artificial consciousnesses. In R. L. Sandler (Ed.), Ethics and emerging technologies. Palgrave Macmillan.
Basl, J. (2014). Machines as moral patients we shouldn’t care about (yet): The interests and welfare of current machines. Philosophy & Technology, 27(1), 79–96. https://doi.org/10.1007/s13347-013-0122-y
Baum, S. D., Armstrong, S., Ekenstedt, T., Häggström, O., Hanson, R., Kuhlemann, K., et al. (2019). Long-term trajectories of human civilization. Foresight, 21(1), 53–83. https://doi.org/10.1108/FS-04-2018-0037
Beckers, S. (2018). AAAI: An argument against artificial intelligence. In V. C. Müller (Ed.), Philosophy and theory of artificial intelligence 2017 (Vol. 44, pp. 235–247). Cham: Springer. https://doi.org/10.1007/978-3-319-96448-5_25
Belk, R. (2018). Ownership: The extended self and the extended object. In J. Peck & S. B. Shu (Eds.), Psychological ownership and consumer behavior (pp. 53–67). Cham: Springer. https://doi.org/10.1007/978-3-319-77158-8_4
Bennett, B., & Daly, A. (2020). Recognising rights for robots: Can we? Will we? Should we? Law, Innovation and Technology, 12(1), 60–80. https://doi.org/10.1080/17579961.2020.1727063
Beno, M. (2019). Robot rights in the era of robolution and the acceptance of robots from the Slovak citizen’s perspective. In 2019 IEEE International symposium on robotic and sensors environments (ROSE) (pp. 1–7). Presented at the 2019 IEEE international symposium on robotic and sensors environments (ROSE), Ottawa, ON, Canada: IEEE. https://doi.org/10.1109/ROSE.2019.8790429
Bess, M. (2018). Eight kinds of critters: A moral taxonomy for the twenty-second century. The Journal of Medicine and Philosophy: A Forum for Bioethics and Philosophy of Medicine, 43(5), 585–612. https://doi.org/10.1093/jmp/jhy018
Bigman, Y. E., Waytz, A., Alterovitz, R., & Gray, K. (2019). Holding robots responsible: The elements of machine morality. Trends in Cognitive Sciences, 23(5), 365–368. https://doi.org/10.1016/j.tics.2019.02.008
Biondi, Z. (2019). Machines and non-identity problems. Journal of Evolution and Technology, 29(2), 12–25.
Birhane, A., & van Dijk, J. (2020). Robot rights?: Let’s talk about human welfare instead. In Proceedings of the AAAI/ACM conference on AI, ethics, and society (pp. 207–213). Presented at the AIES ’20: AAAI/ACM conference on AI, ethics, and society. ACM. https://doi.org/10.1145/3375627.3375855
Birmingham, W. (2008). Towards an understanding of artificial intelligence and its application to ethics. In 2008 Annual conference & exposition proceedings (pp. 13.1294.1–13.1294.10). Presented at the 2008 annual conference & exposition, Pittsburgh, Pennsylvania: ASEE conferences. https://doi.org/10.18260/1-2--3972
Blackmore, S. J. (1999). Meme machines and consciousness. Journal of Intelligent Systems. https://doi.org/10.1515/JISYS.1999.9.5-6.355
Bolonkin, A. (2012). What is ‘I’? What are ‘We’? In Universe, human immortality and future human evaluation (pp. 43–51). Elsevier.
Bostrom, N. (2003). Astronomical waste: The opportunity cost of delayed technological development. Utilitas, 15(3), 308–314. https://doi.org/10.1017/S0953820800004076
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Bostrom, N., Dafoe, A., & Flynn, C. (2016). Policy desiderata for superintelligent AI: A vector field approach. https://www.fhi.ox.ac.uk/wp-content/uploads/Policy-Desiderata-in-the-Development-of-Machine-Superintelligence.pdf
Brey, P., & Søraker, J. H. (2009). Philosophy of computing and information technology. In D. M. Gabbay, P. Thagard, J. Woods, & A. W. M. Meijers (Eds.), Philosophy of technology and engineering sciences (pp. 1341–1407). Oxford: Elsevier.
Briggs, G., Gessell, B., Dunlap, M., & Scheutz, M. (2014). Actions speak louder than looks: Does robot appearance affect human reactions to robot protest and distress? In The 23rd IEEE international symposium on robot and human interactive communication (pp. 1122–1127). Presented at the 2014 RO-MAN: The 23rd IEEE international symposium on robot and human interactive communication. IEEE. https://doi.org/10.1109/ROMAN.2014.6926402
Briggs, G. (2015). Overselling: Is appearance or behavior more problematic? http://www.openroboethics.org/hri15/wp-content/uploads/2015/02/Mf-Briggs.pdf
Bringsjord, S., Licato, J., Govindarajulu, N. S., Ghosh, R., & Sen, A. (2015). Real robots that pass human tests of self-consciousness. In 2015 24th IEEE international symposium on robot and human interactive communication (RO-MAN) (pp. 498–504). Presented at the 2015 24th IEEE international symposium on robot and human interactive communication (RO-MAN). IEEE. https://doi.org/10.1109/ROMAN.2015.7333698
British Broadcasting Corporation. (2006). Robots could demand legal rights. http://news.bbc.co.uk/1/hi/technology/6200005.stm
Broman, M. M., & Finckenberg-Broman, P. (2018). Socio-economic and legal impact of autonomous robotics and AI entities: The RAiLE project. IEEE Technology and Society Magazine, 37(1), 70–79. https://doi.org/10.1109/MTS.2018.2795120
Browne, R. (2017). World’s first robot ‘citizen’ Sophia is calling for women’s rights in Saudi Arabia. CNBC. https://www.cnbc.com/2017/12/05/hanson-robotics-ceo-sophia-the-robot-an-advocate-for-womens-rights.html
Bryson, J. J. (2010). Robots should be slaves. In Y. Wilks (Ed.), Natural language processing (Vol. 8, pp. 63–74). John Benjamins Publishing Company.
Bryson, J. J. (2012). Patiency is not a virtue: Suggestions for co-constructing an ethical framework including intelligent artefacts. In D. J. Gunkel, J. J. Bryson, & S. Torrance (Eds.), The machine question: AI, ethics, and moral responsibility (pp. 73–77). Presented at the AISB/IACAP world congress 2012. AISB. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.446.9723&rep=rep1&type=pdf#page=93
Bryson, J. J. (2018). Patiency is not a virtue: The design of intelligent systems and systems of ethics. Ethics and Information Technology, 20(1), 15–26. https://doi.org/10.1007/s10676-018-9448-6
Bryson, J. J., Diamantis, M. E., & Grant, T. D. (2017). Of, for, and by the people: The legal Lacuna of synthetic persons. Artificial Intelligence and Law, 25(3), 273–291. https://doi.org/10.1007/s10506-017-9214-9
Buttazzo, G. (2001). Artificial consciousness: Utopia or real possibility? Computer, 34(7), 24–30. https://doi.org/10.1109/2.933500
Calo, R. (2016). Robots in American Law. http://www.maximusveritas.com/wp-content/uploads/2016/03/Robot-Law.pdf
Calverley, D. J. (2011). Legal rights for machines: Some fundamental concepts. In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 213–227). Cambridge University Press.
Campbell Collaboration. (2014). Campbell collaboration systematic reviews: Policies and guidelines. https://doi.org/10.4073/cpg.2016.1
Cappuccio, M. L., Peeters, A., & McDonald, W. (2020). Sympathy for dolores: Moral consideration for robots based on virtue and recognition. Philosophy & Technology, 33(1), 9–31. https://doi.org/10.1007/s13347-019-0341-y
Cave, S., Nyrup, R., Vold, K., & Weller, A. (2019). Motivations and risks of machine ethics. Proceedings of the IEEE, 107(3), 562–574. https://doi.org/10.1109/JPROC.2018.2865996
Celotto, A. (2019). I Robot Possono Avere Diritti? BioLaw Journal - Rivista Di BioDiritto, 15(1), 91–99. https://doi.org/10.15168/2284-4503-353
Center on Long-Term Risk. (2020). About us. https://longtermrisk.org/about-us
Čerka, P., Grigienė, J., & Sirbikytė, G. (2017). Is it possible to grant legal personality to artificial intelligence software systems? Computer Law & Security Review, 33(5), 685–699. https://doi.org/10.1016/j.clsr.2017.03.022
Chernyak, N., & Gary, H. E. (2016). Children’s cognitive and behavioral reactions to an autonomous versus controlled social robot dog. Early Education and Development, 27(8), 1175–1189. https://doi.org/10.1080/10409289.2016.1158611
Chesterman, S. (2020). Artificial intelligence and the limits of legal personality. International and Comparative Law Quarterly, 69(4), 819–844. https://doi.org/10.1017/S0020589320000366
Chinen, M. A. (2016). The co-evolution of autonomous machines and legal responsibility. Virginia Journal of Law and Technology Association, 20(2), 338–393.
Chomanski, B. (2019). What’s wrong with designing people to serve? Ethical Theory and Moral Practice, 22(4), 993–1015. https://doi.org/10.1007/s10677-019-10029-3
Chopra, S. (2010). Rights for autonomous artificial agents? Communications of the ACM, 53(8), 38–40. https://doi.org/10.1145/1787234.1787248
Chrisley, R. (2008). Philosophical foundations of artificial consciousness. Artificial Intelligence in Medicine, 44(2), 119–137. https://doi.org/10.1016/j.artmed.2008.07.011
Church, G. M. (2019). The rights of machines. In J. Brockman (Ed.), Possible minds: Twenty-five ways of looking at AI (pp. 240–253). Penguin Books.
Cioppa, T. M., Lucas, T. W., & Sanchez, S. M. (2004). Military applications of agent-based simulations. In Proceedings of the 2004 winter simulation conference, 2004. (Vol. 1, pp. 165–174). Presented at the 2004 winter simulation conference. IEEE. https://doi.org/10.1109/WSC.2004.1371314
Coeckelbergh, M. (2010a). Moral appearances: Emotions, robots, and human morality. Ethics and Information Technology, 12(3), 235–241. https://doi.org/10.1007/s10676-010-9221-y
Coeckelbergh, M. (2010b). Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology, 12(3), 209–221. https://doi.org/10.1007/s10676-010-9235-5
Coeckelbergh, M. (2013). David J. Gunkel: The machine question: Critical perspectives on AI, robots, and ethics: MIT Press, 2012, 272 pp, ISBN-10: 0-262-01743-1, ISBN-13: 978-0-262-01743-5. Ethics and Information Technology, 15(3), 235–238. https://doi.org/10.1007/s10676-012-9305-y
Coeckelbergh, M. (2014). The moral standing of machines: Towards a relational and non-cartesian moral hermeneutics. Philosophy & Technology, 27(1), 61–77. https://doi.org/10.1007/s13347-013-0133-8
Coeckelbergh, M. (2018). Why care about robots? Empathy, moral standing, and the language of suffering. Kairos Journal of Philosophy & Science, 20(1), 141–158. https://doi.org/10.2478/kjps-2018-0007
Coeckelbergh, M. (2020). AI ethics. The MIT Press. https://doi.org/10.7551/mitpress/12549.001.0001
Craig, M. J., Edwards, C., Edwards, A., & Spence, P. R. (2019). Impressions of message compliance-gaining strategies for considering robot rights. In 2019 14th ACM/IEEE international conference on human-robot interaction (HRI) (pp. 560–561). Presented at the 2019 14th ACM/IEEE international conference on human–robot interaction (HRI). IEEE. https://doi.org/10.1109/HRI.2019.8673117
Create Digital. (2018). Do robots have rights? Here’s what 10 people and 1 robot have to say. https://www.createdigital.org.au/robots-rights-10-people-one-robot-say/
Dall’Agnol, D. (2020). Human and nonhuman rights. Revista De Filosofia Aurora. https://doi.org/10.7213/1980-5934.32.055.DS01
Damholdt, M. F., Vestergaard, C., Nørskov, M., Hakli, R., Larsen, S., & Seibt, J. (2020). Towards a new scale for assessing attitudes towards social robots: The attitudes towards social robots scale (ASOR). Interaction Studies. Social Behaviour and Communication in Biological and Artificial Systems, 21(1), 24–56. https://doi.org/10.1075/is.18055.fle
Danaher, J. (2020). Welcoming robots into the moral circle: A defence of ethical behaviourism. Science and Engineering Ethics, 26(4), 2023–2049. https://doi.org/10.1007/s11948-019-00119-x
Darling, K. (2016). Extending legal protection to social robots: The effects of anthropomorphism, empathy, and violent behavior towards robotic objects. In R. Calo, A. Froomkin, & I. Kerr (Eds.), Robot law (pp. 213–232). Edward Elgar Publishing.
Davidson, R., Sommer, K., & Nielsen, M. (2019). Children’s judgments of anti-social behaviour towards a robot: Liking and learning. In 2019 14th ACM/IEEE international conference on human-robot interaction (HRI) (pp. 709–711). Presented at the 2019 14th ACM/IEEE international conference on human-robot interaction (HRI). IEEE. https://doi.org/10.1109/HRI.2019.8673075
Davies, C. R. (2011). An evolutionary step in intellectual property rights—Artificial intelligence and intellectual property. Computer Law & Security Review, 27(6), 601–619. https://doi.org/10.1016/j.clsr.2011.09.006
Dawes, J. (2020). Speculative human rights: Artificial intelligence and the future of the human. Human Rights Quarterly, 42(3), 573–593. https://doi.org/10.1353/hrq.2020.0033
de Graaf, M. M. A., & Malle, B. F. (2019). People’s explanations of robot behavior subtly reveal mental state inferences. In 2019 14th ACM/IEEE international conference on human-robot interaction (HRI) (pp. 239–248). Presented at the 2019 14th ACM/IEEE international conference on human-robot interaction (HRI). IEEE. https://doi.org/10.1109/HRI.2019.8673308
DiPaolo, A. (2019). If androids dream, are they more than sheep?: Westworld, robots and legal rights. Dialogue: The Interdisciplinary Journal of Popular Culture and Pedagogy, 6(2).
Dixon, E. (2015). Constructing the identity of AI: A discussion of the AI debate and its shaping by science fiction. Leiden University. Retrieved from https://openaccess.leidenuniv.nl/bitstream/handle/1887/33582/Elinor%20Dixon%20BA%20Thesis%20Final.pdf
Dracopoulou, S. (2003). The ethics of creating conscious robots—Life, personhood and bioengineering. Journal of Health, Social and Environmental Issues, 4(2), 47–50.
Drozdek, A. (1994). To ‘the possibility of computers becoming persons’ (1989). Social Epistemology, 8(2), 177–197. https://doi.org/10.1080/02691729408578742
Drozdek, A. (2017). Ethics and intelligent systems. Idea. Studia Nad Strukturą i Rozwojem Pojęć Filozoficznych, 1(29), 265–274.
Elder, A. M. (2017). Friendship, robots, and social media: False friends and second selves. Routledge. https://doi.org/10.4324/9781315159577
Erhardt, J., & Mona, M. (2016). Rechtsperson Roboter – Philosophische Grundlagen für den rechtlichen Umgang mit künstlicher Intelligenz. In S. Gless & K. Seelmann (Eds.), Intelligente Agenten und das Recht (pp. 61–94). Nomos Verlagsgesellschaft mbH & Co. KG. https://doi.org/10.5771/9783845280066-61
Estrada, D. (2018). Value alignment, fair play, and the rights of service robots. In Proceedings of the 2018 AAAI/ACM conference on AI, ethics, and society (pp. 102–107). Presented at the AIES ’18: AAAI/ACM conference on AI, ethics, and Society. ACM. https://doi.org/10.1145/3278721.3278730
Estrada, D. (2020). Human supremacy as posthuman risk. Journal of Sociotechnical Critique, 1(1), 1–40. https://doi.org/10.25779/J5PS-DY87
European Parliament Committee on Legal Affairs. (2017). Report with recommendations to the commission on civil law rules on robotics (No. 2015/2103(INL)). https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html
Fagan, F. (2019). Toward a public choice theory of legal rights for artificial intelligence. Presented at the 2019 convention of the society for the study of artificial intelligence and the simulation of behaviour, AISB 2019. http://aisb2019.falmouthgamesacademy.com/wp-content/uploads/2019/04/AIRoNoS2019-_-proceedings.pdf
Floridi, L. (1999). Information ethics: On the philosophical foundations of computer ethics. Ethics and Information Technology, 1(1), 33–52. https://doi.org/10.1023/A:1010018611096
Floridi, L. (2002). On the intrinsic value of information objects and the infosphere. Ethics and Information Technology, 4(4), 287–304. https://doi.org/10.1023/A:1021342422699
Floridi, L. (2005). Information ethics, its nature and scope. ACM SIGCAS Computers and Society, 35(2), 21–36. https://doi.org/10.1145/1111646.1111649
Fox, A. Q. (2018). On empathy and alterity: How sex robots encourage us to reconfigure moral status. University of Twente. Retrieved from http://essay.utwente.nl/75110/1/Fox_MA_BMS.pdf
Francken, J., Beerendonk, L., Molenaar, D., Fahrenfort, J. J., Kiverstein, J., Seth, A., & van Gaal, S. (2021). An academic survey on theoretical foundations, common assumptions and the current state of the field of consciousness science. PsyArXiv Preprint. https://doi.org/10.31234/osf.io/8mbsk
Frank, L., & Nyholm, S. (2017). Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable? Artificial Intelligence and Law, 25(3), 305–323. https://doi.org/10.1007/s10506-017-9212-y
Franklin, S. (2003). A conscious artifact? Journal of Consciousness Studies, 10(4–5), 47–66.
Fraune, M. R., Sabanovic, S., & Smith, E. R. (2017). Teammates first: Favoring ingroup robots over outgroup humans. In 2017 26th IEEE international symposium on robot and human interactive communication (RO-MAN) (pp. 1432–1437). Presented at the 2017 26th IEEE international symposium on robot and human interactive communication (RO-MAN). IEEE. https://doi.org/10.1109/ROMAN.2017.8172492
Freier, N. G. (2008). Children attribute moral standing to a personified agent. In Proceeding of the twenty-sixth annual CHI conference on human factors in computing systems - CHI ’08 (p. 343). Presented at the proceeding of the twenty-sixth annual CHI conference. ACM Press. https://doi.org/10.1145/1357054.1357113
Freitas, R. A. (1985). The legal rights of robots. Student Lawyer, 13(1), 54–56.
Friedman, C. (2019). Ethical boundaries for android companion robots: A human perspective. https://pdfs.semanticscholar.org/d96f/6b2ad8c596edb56538a78f6895530389493d.pdf
Galanter, P. (2020). Towards ethical relationships with machines that make art. Artnodes. https://doi.org/10.7238/a.v0i26.3371
Gamez, P., Shank, D. B., Arnold, C., & North, M. (2020). Artificial virtue: The machine question and perceptions of moral character in artificial moral agents. AI & Society, 35(4), 795–809. https://doi.org/10.1007/s00146-020-00977-1
Gellers, J. C. (2020). Rights for robots: Artificial intelligence, animal and environmental law (1st ed.). Routledge. https://doi.org/10.4324/9780429288159
Gerdes, A. (2015). IT-ethical issues in Sci-Fi film within the timeline of the ethicomp conference series. Journal of Information, Communication and Ethics in Society, 13(3/4), 314–325. https://doi.org/10.1108/JICES-10-2014-0048
Gerdes, A. (2016). The issue of moral consideration in robot ethics. ACM SIGCAS Computers and Society, 45(3), 274–279. https://doi.org/10.1145/2874239.2874278
Gittinger, J. L. (2019). Ethics and AI. In Personhood in science fiction (pp. 109–143). Springer. https://doi.org/10.1007/978-3-030-30062-3_5
Gloor, L. (2016a). Altruists should prioritize artificial intelligence. Center on Long-Term Risk. https://longtermrisk.org/altruists-should-prioritize-artificial-intelligence/#VII_Artificial_sentience_and_risks_of_astronomical_suffering
Gloor, L. (2016b). Suffering-focused AI safety: In favor of ‘fail-safe’ measures. Center on Long-Term Risk. https://longtermrisk.org/files/fail-safe-ai.pdf
Gordon, J.-S. (2020). What do we owe to intelligent robots? AI & Society, 35(1), 209–223. https://doi.org/10.1007/s00146-018-0844-6
Gordon, J.-S., & Pasvenskiene, A. (2021). Human rights for robots? A literature review. AI and Ethics. https://doi.org/10.1007/s43681-021-00050-7
Granic, I., Lobel, A., & Engels, R. C. M. E. (2014). The benefits of playing video games. American Psychologist, 69(1), 66–78. https://doi.org/10.1037/a0034857
Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619. https://doi.org/10.1126/science.1134475
Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125–130. https://doi.org/10.1016/j.cognition.2012.06.007
Gregory, T. (2012). Killing machines. University of Tasmania. Retrieved from https://eprints.utas.edu.au/15841/2/whole.pdf
Gualeni, S. (2020). Artificial beings worthy of moral consideration in virtual environments: An analysis of ethical viability. Journal for Virtual Worlds Research. https://doi.org/10.4101/jvwr.v13i1.7369
Gunkel, D. J. (2007). Thinking otherwise: Ethics, technology and other subjects. Ethics and Information Technology, 9(3), 165–177. https://doi.org/10.1007/s10676-007-9137-3
Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots, and ethics. The MIT Press. https://doi.org/10.7551/mitpress/8975.001.0001
Gunkel, D. J. (2013). [Review of the book Growing moral relations: Critique of moral status ascription, by M. Coeckelbergh]. Ethics and Information Technology, 15(3), 239–241. https://doi.org/10.1007/s10676-012-9308-8
Gunkel, D. J. (2014). A vindication of the rights of machines. Philosophy & Technology, 27(1), 113–132. https://doi.org/10.1007/s13347-013-0121-z
Gunkel, D. J. (2015). The rights of machines: Caring for robotic care-givers. In S. P. van Rysewyk & M. Pontier (Eds.), Machine medical ethics (Vol. 74, pp. 151–166). Springer. https://doi.org/10.1007/978-3-319-08108-3_10
Gunkel, D. J. (2018a). Robot rights. The MIT Press. https://doi.org/10.7551/mitpress/11444.001.0001
Gunkel, D. J. (2018b). The machine question: Can or should machines have rights? In B. Vanacker & D. Heider (Eds.), Ethics for a digital age (Vol. II). Peter Lang.
Gunkel, D. J. (2018c). The other question: Can and should robots have rights? Ethics and Information Technology, 20(2), 87–99. https://doi.org/10.1007/s10676-017-9442-4
Gunkel, D. J. (2018d). Can machines have rights? In T. J. Prescott, N. Lepora, & P. F. M. J. Verschure (Eds.), Living machines: A handbook of research in biomimetic and biohybrid systems (pp. 596–601). Oxford University Press.
Gunkel, D. J. (2019a). No brainer: Why consciousness is neither a necessary nor sufficient condition for AI ethics. Presented at the AAAI spring symposium: Towards conscious AI systems. http://ceur-ws.org/Vol-2287/paper9.pdf
Gunkel, D. J. (2019b). The rights of (killer) robots. http://gunkelweb.com/articles/gunkel_rights_killer_robots2019.pdf
Gunkel, D. J. (2020a). Shifting perspectives. Science and Engineering Ethics, 26(5), 2527–2532. https://doi.org/10.1007/s11948-020-00247-9
Gunkel, D. J. (2020b). The right(s) question: Can and should robots have rights? In B. P. Göcke & A. M. Rosenthal-von der Pütten (Eds.), Artificial intelligence: Reflections in philosophy, theology, and the social sciences (pp. 255–274). Mentis Verlag. https://doi.org/10.30965/9783957437488_017
Gunkel, D. J., & Cripe, B. (2014). Apocalypse not, or how I learned to stop worrying and love the machine. Kritikos: An International and Interdisciplinary Journal of Postmodern Cultural Sound, Text and Image, 11. https://intertheory.org/gunkel-cripe.htm
Hagendorff, T. (2020). Animal rights and robot ethics. In Robotic systems: Concepts, methodologies, tools, and applications (pp. 1812–1823). IGI Global. https://doi.org/10.4018/978-1-7998-1754-3
Haikonen, P. O. (2012). Consciousness and robot sentience. World Scientific.
Hale, B. (2009). Technology, the environment and the moral considerability of artefacts. In J. K. B. Olsen, E. Selinger, & S. Riis (Eds.), New waves in philosophy of technology (pp. 216–240). Palgrave Macmillan.
Hall, L. (2005). Inflicting pain on synthetic characters: Moral concerns and empathic interaction. In Proceedings of the joint symposium on virtual social agents (pp. 144–149). The University of Hertfordshire.
Hallqvist, J. (2018). Negotiating humanity: Anthropomorphic robots in the Swedish television series Real Humans. Science Fiction Film & Television, 11(3), 449–467. https://doi.org/10.3828/sfftv.2018.26
Hanák, P. (2019). Umělá inteligence – práva a odpovědnost [Artificial intelligence: Rights and liability]. Masarykova univerzita. Retrieved from https://is.muni.cz/th/k6yn0/Hanak_magisterska_prace.pdf
Hanson Robotics. (2018). Sophia. https://www.hansonrobotics.com/sophia/.
Harnad, S. (2003). Can a machine be conscious? How? Journal of Consciousness Studies, 10(4–5), 69–75.
Hartmann, T. (2017). The ‘moral disengagement in violent videogames’ model. Game Studies, 17(2).
Hess, J. L., & Fore, G. (2017). A systematic literature review of US engineering ethics interventions. Science and Engineering Ethics. https://doi.org/10.1007/s11948-017-9910-6
Higgins, J. P., & Green, S. (Eds.). (2008). Cochrane handbook for systematic reviews of interventions. Wiley. https://doi.org/10.1002/9780470712184
Himma, K. E. (2003). The relationship between the uniqueness of computer ethics and its independence as a discipline in applied ethics. Ethics and Information Technology, 5(4), 225–237. https://doi.org/10.1023/B:ETIN.0000017733.41586.34
Himma, K. E. (2004). There’s something about mary: The moral value of things qua information objects. Ethics and Information Technology, 6(3), 145–159. https://doi.org/10.1007/s10676-004-3804-4
Hoffmann, C. H., & Hahn, B. (2020). Decentered ethics in the machine era and guidance for AI regulation. AI & Society, 35(3), 635–644. https://doi.org/10.1007/s00146-019-00920-z
Hogan, K. (2017). Is the machine question the same question as the animal question? Ethics and Information Technology, 19(1), 29–38. https://doi.org/10.1007/s10676-017-9418-4
Holder, C., Khurana, V., Hook, J., Bacon, G., & Day, R. (2016). Robotics and law: key legal and regulatory implications of the robotics age (part II of II). Computer Law & Security Review, 32(4), 557–576. https://doi.org/10.1016/j.clsr.2016.05.011
Holland, O. (2007). A strongly embodied approach to machine consciousness. Journal of Consciousness Studies, 14(7), 97–110.
Holm, S., & Powell, R. (2013). Organism, machine, artifact: The conceptual and normative challenges of synthetic biology. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 44(4), 627–631. https://doi.org/10.1016/j.shpsc.2013.05.009
Holy-Luczaj, M., & Blok, V. (2019). Hybrids and the boundaries of moral considerability or revisiting the idea of non-instrumental value. Philosophy & Technology. https://doi.org/10.1007/s13347-019-00380-9
Hu, Y. (2018). Robot criminal liability revisited. In S. Y. Jin, H. H. Sang, & J. A. Seong (Eds.), Dangerous ideas in law (pp. 494–509). Bobmunsa. https://papers.ssrn.com/abstract=3237352
Hughes, J. J. (2005). Report on the 2005 interests and beliefs survey of the members of the world transhumanist association (p. 16). World Transhumanist Association.
Huttunen, A., Kulovesi, J., Brace, W., Lechner, L. G., Silvennoinen, K., & Kantola, V. (2010). Liberating intelligent machines with financial instruments. Nordic Journal of Commercial Law, (2). https://journals.aau.dk/index.php/NJCL/article/view/3015
Inayatullah, S. (2001). The rights of robot: Inclusion, courts and unexpected futures. Journal of Futures Studies, 6(2), 93–102.
Inyashkin, S. G. (2016). Civil rights implications in Asimov’s science fiction. In Writing identity: The construction of national identity in American Literature (pp. 22–25). https://www.elibrary.ru/item.asp?id=26618840
Jack, A. I., Dawson, A. J., & Norr, M. E. (2013). Seeing human: Distinct and overlapping neural signatures associated with two forms of dehumanization. NeuroImage, 79, 313–328. https://doi.org/10.1016/j.neuroimage.2013.04.109
Jackson Jr., P. C. (2018a). Postscript for ‘beneficial human-level AI… and beyond’. http://www.talamind.prohosting.com/JacksonPostscriptForBeneficialHumanLevelAIandBeyond20180418.pdf
Jackson Jr., P. C. (2018b). Toward beneficial human-level AI… and beyond. Presented at the 2018 AAAI spring symposium series. https://www.aaai.org/ocs/index.php/SSS/SSS18/paper/viewFile/17450/15374
Jackson, R. B., & Williams, T. (2019). On perceived social and moral agency in natural language capable robots (pp. 401–410). Presented at the 2019 HRI workshop on the dark side of human-robot interaction.
Jaynes, T. L. (2020). Legal personhood for artificial intelligence: Citizenship as the exception to the rule. AI & Society, 35(2), 343–354. https://doi.org/10.1007/s00146-019-00897-9
Johnson, D. G., & Miller, K. W. (2008). Un-making artificial moral agents. Ethics and Information Technology, 10(2–3), 123–133. https://doi.org/10.1007/s10676-008-9174-6
Johnson, D. G., & Verdicchio, M. (2018). Why robots should not be treated like animals. Ethics and Information Technology, 20(4), 291–301. https://doi.org/10.1007/s10676-018-9481-5
Jowitt, J. (2020). Assessing contemporary legislative proposals for their compatibility with a natural law case for AI legal personhood. AI & Society. https://doi.org/10.1007/s00146-020-00979-z
Kaminska, K. (2016). Rights for robots: Future or (Science) Fiction? In Maastricht European private law institute working paper 2016/hors series. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2734079
Kaufman, F. (1994). Machines, sentience, and the scope of morality. Environmental Ethics, 16(1), 57–70. https://doi.org/10.5840/enviroethics199416142
Kelley, D., & Atreides, K. (2020). AGI protocol for the ethical treatment of artificial general intelligence systems. Procedia Computer Science, 169, 501–506. https://doi.org/10.1016/j.procs.2020.02.219
Khoury, A. (2016). Intellectual property rights for hubots: On the legal implications of human-like robots as innovators and creators. Cardozo Arts and Entertainment Law Journal, 35, 635–668.
Kim, J., & Petrina, S. (2006). Artificial life rights: Facing moral dilemmas through The Sims. Educational Insights, 10(2), 84–94.
Kiršienė, J., Gruodytė, E., & Amilevičius, D. (2020). From computerised thing to digital being: Mission (Im)possible? AI & Society. https://doi.org/10.1007/s00146-020-01051-6
Klein, W. E. J. (2016). Robots make ethics honest: And vice versa. ACM SIGCAS Computers and Society, 45(3), 261–269. https://doi.org/10.1145/2874239.2874276
Klein, W. E. J. (2019). Exceptionalisms in the ethics of humans, animals and machines. Journal of Information, Communication and Ethics in Society, 17(2), 183–195. https://doi.org/10.1108/JICES-11-2018-0089
Klein, W. E. J., & Lin, V. W. (2018). ‘Sex robots’ revisited: A reply to the campaign against sex robots. ACM SIGCAS Computers and Society, 47(4), 107–121. https://doi.org/10.1145/3243141.3243153
Kljajić, F. (2019). Etičko razmatranje moralnog statusa umjetno inteligentnih sustava [An ethical consideration of the moral status of artificially intelligent systems]. University of Zadar. Retrieved from https://zir.nsk.hr/islandora/object/unizd:3124/datastream/PDF/download
Kolling, T., Baisch, S., Schall, A., Selic, S., Rühl, S., Kim, Z., et al. (2016). What is emotional about emotional robotics? In S. Y. Tettegah (Ed.), Emotions, technology, and health (pp. 85–103). Elsevier. https://doi.org/10.1016/B978-0-12-801737-1.00005-6
Kovic, M. (2020). Risks of space colonization. SocArXiv preprint. https://doi.org/10.31235/osf.io/hj4f2
Krämer, C. (2020). Can robots have dignity? In B. P. Göcke & A. M. Rosenthal-von der Pütten (Eds.), Artificial intelligence: Reflections in philosophy, theology, and the social sciences (pp. 241–253). Mentis Verlag. https://doi.org/10.30965/9783957437488_016
Krebs, S. (2006). On the anticipation of ethical conflicts between humans and robots in Japanese Mangas. International Review of Information Ethics, 6, 63–68.
Kunnari, A. (2020). Lore’s moral patiency and agency in Star Trek: The Next Generation. Tampere University. Retrieved from https://trepo.tuni.fi/bitstream/handle/10024/119146/KunnariAnni.pdf
Kuran, E. K. (2020). The moral status of AI: What do we owe to intelligent machines? A review. NU Writing, (11). https://openjournals.neu.edu/nuwriting/home/article/view/177
Küster, D., & Świderska, A. (2016). Moral patients: What drives the perceptions of moral actions towards humans and robots? In What social robots can and should do: Proceedings of robophilosophy 2016/TRANSOR 2016. IOS Press. https://doi.org/10.3233/978-1-61499-708-5-340
Küster, D., & Swiderska, A. (2020). Seeing the mind of robots: Harm augments mind perception but benevolent intentions reduce dehumanisation of artificial entities in visual vignettes. International Journal of Psychology. https://doi.org/10.1002/ijop.12715
Küster, D., Swiderska, A., & Gunkel, D. (2020). I saw it on YouTube! How online videos shape perceptions of mind, morality, and fears about robots. New Media & Society. https://doi.org/10.1177/1461444820954199
Laukyte, M. (2017). Artificial agents among us: Should we recognize them as agents proper? Ethics and Information Technology, 19(1), 1–17. https://doi.org/10.1007/s10676-016-9411-3
Laukyte, M. (2019). Against human exceptionalism: environmental ethics and the machine question. In D. Berkich & M. V. d’Alfonso (Eds.), On the cognitive, ethical, and scientific dimensions of artificial intelligence (Vol. 134, pp. 325–339). Springer. https://doi.org/10.1007/978-3-030-01800-9_18
Laukyte, M. (2020). Robots: Regulation, rights, and remedies. In M. Jackson & M. Shelly (Eds.), Legal regulations, implications, and issues surrounding digital data. IGI Global.
Laulhe-Shaelou, S. (2019). SIS and rights, including robot rights. In Current human rights frameworks. http://clok.uclan.ac.uk/29816/1/29816%20D1.5%20Current%20human%20rights%20frameworks.pdf
Lavi, L. (2019). Stretching personhood beyond humans: What recent discussions on animal rights can teach us on the ethical and political treatment of robots. In S. S. Gouveia & M. Curado (Eds.), Automata’s inner movie: Science and philosophy of mind (pp. 297–312). Vernon Press.
Lee, M., Lucas, G., Mell, J., Johnson, E., & Gratch, J. (2019). What’s on your virtual mind?: Mind perception in human-agent negotiations. In Proceedings of the 19th ACM international conference on intelligent virtual agents (pp. 38–45). Presented at the IVA ’19: ACM international conference on intelligent virtual agents. ACM. https://doi.org/10.1145/3308532.3329465
Leenes, R., & Lucivero, F. (2014). Laws on robots, laws by robots, laws in robots: Regulating robot behaviour by design. Law, Innovation and Technology, 6(2), 193–220. https://doi.org/10.5235/17579961.6.2.193
Lehman-Wilzig, S. N. (1981). Frankenstein unbound: Towards a legal definition of artificial intelligence. Futures, 13(6), 442–457. https://doi.org/10.1016/0016-3287(81)90100-2
Lender, L. (2016). Weighing the moral interests of AI.
Levy, D. (2009). The ethical treatment of artificially conscious robots. International Journal of Social Robotics, 1(3), 209–216. https://doi.org/10.1007/s12369-009-0022-6
Levy, D. (2012). The ethics of robot prostitutes. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 223–232). MIT Press.
Levy, D. (2016). Why not marry a robot? In A. D. Cheok, K. Devlin, & D. Levy (Eds.), Love and sex with robots (Vol. 10237, pp. 3–13). Springer. https://doi.org/10.1007/978-3-319-57738-8_1
Lima, G., Park, S., & Cha, M. (2019). Robots for class president: Children’s positions toward AI robot. https://thegcamilo.github.io/assets/KCC_AIRights_20190605_Submission.pdf
Lima, G., Kim, C., Ryu, S., Jeon, C., & Cha, M. (2020). Collecting the public perception of AI and robot rights. arXiv preprint. http://arxiv.org/abs/2008.01339
Lin, P., Abney, K., & Bekey, G. (2011). Robot ethics: Mapping the issues for a mechanized world. Artificial Intelligence, 175(5–6), 942–949. https://doi.org/10.1016/j.artint.2010.11.026
Loh, J. (2019). Responsibility and robot ethics: A critical overview. Philosophies, 4(4), 58. https://doi.org/10.3390/philosophies4040058
Lopez-Mobilia, G. (2011). Development of anthropomorphism and moral concern for nonhuman entities. The University of Texas at Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2011-12-4911
Lupetti, M. L., Bendor, R., & Giaccardi, E. (2019). Robot citizenship: A design perspective. In DeSForM19 proceedings (1st ed.). PubPub. https://doi.org/10.21428/5395bc37.595d1e58
MacDorman, K. F., & Cowley, S. J. (2006). Long-term relationships as a benchmark for robot personhood. In ROMAN 2006—The 15th IEEE international symposium on robot and human interactive communication (pp. 378–383). Presented at the ROMAN 2006—The 15th IEEE international symposium on robot and human interactive communication. https://doi.org/10.1109/ROMAN.2006.314463
Mackenzie, R. (2014). Sexbots: Replacements for sex workers? Ethical constraints on the design of sentient beings for utilitarian purposes. In Proceedings of the 2014 workshops on advances in computer entertainment conference—ACE ’14 workshops (pp. 1–8). Presented at the 2014 workshops. ACM Press. https://doi.org/10.1145/2693787.2693789
Mackenzie, R. (2016). Sexbots: Avoiding seduction danger and exploitation. Iride, 2, 331–340. https://doi.org/10.1414/84255
Mackenzie, R. (2018). Sexbots: Customizing them to suit us versus an ethical duty to created sentient beings to minimize suffering. Robotics, 7(4), 70. https://doi.org/10.3390/robotics7040070
Mackenzie, R. (2020a). Sexbots: Drawing on Tibetan Buddhism and the tantric tradition. Journal of Future Robot Life, 1(1), 65–89. https://doi.org/10.3233/FRL-200003
Mackenzie, R. (2020b). Sexbots: Sex slaves, vulnerable others or perfect partners? In Information Resources Management Association (Ed.), Robotic systems: Concepts, methodologies, tools, and applications. IGI Global.
Magnani, L. (2005). Technological artifacts as moral carriers and mediators. In Machine ethics, papers from AAAI fall symposium technical report FS-05-06 (pp. 62–69). https://www.aaai.org/Papers/Symposia/Fall/2005/FS-05-06/FS05-06-009.pdf
Magnani, L. (2007). Moral mediators: how artifacts make us moral. i-lex Scienze Giuridiche, Scienze Cognitive e Intelligenza Artificiale, 7. http://www.i-lex.it/articles/volume3/issue7/magnani.pdf
Malle, B. F. (2016). Integrating robot ethics and machine morality: The study and design of moral competence in robots. Ethics and Information Technology, 18(4), 243–256. https://doi.org/10.1007/s10676-015-9367-8
Martín-Martín, A., Orduna-Malea, E., Thelwall, M., & Delgado-López-Cózar, E. (2019). Google scholar, web of science, and scopus: Which is best for me? https://blogs.lse.ac.uk/impactofsocialsciences/2019/12/03/google-scholar-web-of-science-and-scopus-which-is-best-for-me/
Massaro, T. M. (2018). Artificial intelligence and the first amendment. In W. Barfield & U. Pagallo (Eds.), Research handbook on the law of artificial intelligence (pp. 353–374). Edward Elgar Publishing. https://doi.org/10.4337/9781786439055.00024
Massaro, T. M., & Norton, H. (2015). Siri-ously? Free speech rights and artificial intelligence. Northwestern University Law Review, 110(5), 1169–1194.
Maza, C. (2017). Saudi Arabia gives citizenship to a non-Muslim, English-speaking robot. Newsweek. https://www.newsweek.com/saudi-arabia-robot-sophia-muslim-694152
Mazarian, A. R. (2019). Critical analysis of the “no relevant difference” argument in defense of the rights of artificial intelligences. Journal of Philosophical Theological Research, 21(79), 165–190. https://doi.org/10.22091/jptr-pfk.2019.3925.2023
McDermott, D. (2007). Artificial intelligence and consciousness. In P. D. Zelazo, M. Moscovitch, & E. Thompson (Eds.), The Cambridge handbook of consciousness (pp. 117–150). Cambridge University Press.
McLaughlin, B. P., & Rose, D. (2018). On the matter of robot minds. https://doi.org/10.1093/oso/9780198815259.003.0012
McNally, P., & Inayatullah, S. (1988). The rights of robots: Technology, culture and law in the 21st century. Futures, 20(2), 119–136. https://doi.org/10.1016/0016-3287(88)90019-5
Mehlman, M., Berg, J. W., & Ray, S. (2017). Robot law. Case research paper series in legal studies. https://papers.ssrn.com/abstract=2908488
Merriam-Webster. (2008). Robot. https://www.merriam-webster.com/dictionary/robot.
Michalski, R. (2018). How to sue a robot. Utah Law Review, 5, 1021–1071.
Microsoft Asia News Center. (2017). AI in Japan: Boy bot’s big honor. https://news.microsoft.com/apac/2017/11/20/ai-japan-boy-bots-big-honor/
Miles, I. (1994). Body of glass. Futures, 26(5), 549–552. https://doi.org/10.1016/0016-3287(94)90137-6
Miller, K., Wolf, M. J., & Grodzinsky, F. (2015). Behind the mask: Machine morality. Journal of Experimental & Theoretical Artificial Intelligence, 27(1), 99–107. https://doi.org/10.1080/0952813X.2014.948315
Miller, L. F. (2015). Granting automata human rights: Challenge to a basis of full-rights privilege. Human Rights Review, 16(4), 369–391. https://doi.org/10.1007/s12142-015-0387-x
Mittelstadt, B. (2017). Ethics of the health-related internet of things: A narrative review. Ethics and Information Technology, 19(3), 157–175. https://doi.org/10.1007/s10676-017-9426-4
Mohorčich, J., & Reese, J. (2019). Cell-cultured meat: Lessons from GMO adoption and resistance. Appetite, 143, 104408. https://doi.org/10.1016/j.appet.2019.104408
Mosakas, K. (2020). On the moral status of social robots: Considering the consciousness criterion. AI & Society. https://doi.org/10.1007/s00146-020-01002-1
Nakada, M. (2011). Japanese Seken-views on privacy and robots: Before and after March 11, 2011. In J. Mauger (Ed.), CEPE 2011: Crossing boundaries (pp. 208–221). International Society for Ethics and Information Technology.
Nakada, M. (2012). Robots and privacy in Japanese, Thai and Chinese Cultures. In M. Strano, H. Hrachovec, F. Sudweeks, & C. Ess (Eds.), Proceedings cultural attitudes towards technology and communication (pp. 478–492). Murdoch University. http://sammelpunkt.philo.at/2180/1/478-492_Session%25207%2520-%2520Nakada_f.pdf
Navajas, J., Álvarez Heduan, F., Garrido, J. M., Gonzalez, P. A., Garbulsky, G., Ariely, D., & Sigman, M. (2019). Reaching consensus in polarized moral debates. Current Biology, 29(23), 4124-4129.e6. https://doi.org/10.1016/j.cub.2019.10.018
Neely, E. L. (2014). Machines and the moral community. Philosophy & Technology, 27(1), 97–111. https://doi.org/10.1007/s13347-013-0114-y
Nijssen, S. R. R., Müller, B. C. N., van Baaren, R. B., & Paulus, M. (2019). Saving the robot or the human? Robots who feel deserve moral care. Social Cognition, 37(1), 41–56. https://doi.org/10.1521/soco.2019.37.1.41
Nill, A., & Schibrowsky, J. A. (2007). Research on marketing ethics: A systematic review of the literature. Journal of Macromarketing, 27(3), 256–273. https://doi.org/10.1177/0276146707304733
Nomura, T., Otsubo, K., & Kanda, T. (2018). Preliminary investigation of moral expansiveness for robots. In 2018 IEEE workshop on advanced robotics and its social impacts (ARSO) (pp. 91–96). Presented at the 2018 IEEE workshop on advanced robotics and its social impacts (ARSO). IEEE. https://doi.org/10.1109/ARSO.2018.8625717
Nomura, T., Kanda, T., & Yamada, S. (2019). Measurement of moral concern for robots. In 2019 14th ACM/IEEE international conference on human-robot interaction (HRI) (pp. 540–541). Presented at the 2019 14th ACM/IEEE international conference on human-robot interaction (HRI). IEEE. https://doi.org/10.1109/HRI.2019.8673095
Nyholm, S. (2019). Other minds, other intelligences: The problem of attributing agency to machines. Cambridge Quarterly of Healthcare Ethics, 28(4), 592–598. https://doi.org/10.1017/S0963180119000537
Obodiac, E. (2012). Transgenics of the Citizen (I). Postmodern Culture. https://doi.org/10.1353/pmc.2012.0011
Olivera-La Rosa, A. (2018). Wrong outside, wrong inside: A social functionalist approach to the uncanny feeling. New Ideas in Psychology, 50, 38–47. https://doi.org/10.1016/j.newideapsych.2018.03.004
Open Letter to the European Commission Artificial Intelligence and Robotics. (2018). https://g8fip1kplyr33r3krz5b97d1-wpengine.netdna-ssl.com/wp-content/uploads/2018/04/RoboticsOpenLetter.pdf
Pagallo, U. (2010). The human master with a modern slave? Some remarks on robotics, ethics, and the law. In M. Arias-Oliva, T. Torres-Coronas, S. Rogerson, & T. W. Bynum (Eds.), The “backwards, forwards and sideways” changes of ICT: Ethicomp 2010 (pp. 397–404). Universitat Rovira i Virgili. https://www.researchgate.net/publication/296976124_Proceedings_of_ETHICOMP_2010_The_backwards_forwards_and_sideways_changes_of_ICT
Pagallo, U. (2011). Killers, fridges, and slaves: A legal journey in robotics. AI & Society, 26(4), 347–354. https://doi.org/10.1007/s00146-010-0316-0
People for the Ethical Treatment of Reinforcement Learners. (2015). Mission. http://www.petrl.org/.
Petersen, S. (2007). The ethics of robot servitude. Journal of Experimental & Theoretical Artificial Intelligence, 19(1), 43–54. https://doi.org/10.1080/09528130601116139
Petersen, S. (2012). Designing people to serve. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 283–298). MIT Press.
Piazza, J., Landy, J. F., & Goodwin, G. P. (2014). Cruel nature: Harmfulness as an important, overlooked dimension in judgments of moral standing. Cognition, 131(1), 108–124. https://doi.org/10.1016/j.cognition.2013.12.013
Powers, T. M. (2013). On the moral agency of computers. Topoi, 32(2), 227–236. https://doi.org/10.1007/s11245-012-9149-4
Prescott, T. J. (2017). Robots are not just tools. Connection Science, 29(2), 142–149. https://doi.org/10.1080/09540091.2017.1279125
Puaschunder, J. M. (2019). Artificial intelligence evolution: On the virtue of killing in the artificial age. Scientia Moralitas - International Journal of Multidisciplinary Research, 4(1), 51–72. https://doi.org/10.2139/ssrn.3247401
Putnam, H. (1964). Robots: Machines or artificially created life? The Journal of Philosophy, 61(21), 668–691. https://doi.org/10.2307/2023045
Rademeyer, L. B. (2017). Legal rights for robots by 2060? Knowledge Futures: Interdisciplinary Journal of Futures Studies, 1(1). https://research.usc.edu.au/discovery/fulldisplay/alma99451189902621/61USC_INST:ResearchRepository
Rainey, S. (2016). Friends, robots, citizens? ACM SIGCAS Computers and Society, 45(3), 225–233. https://doi.org/10.1145/2874239.2874271
Randerson, J. (2007). Forget robot rights, experts say, use them for public safety. The Guardian. https://www.theguardian.com/science/2007/apr/24/frontpagenews.uknews
Redan, B. (2014). Rights for robots! Ethics Quarterly, 98. https://search.informit.com.au/documentSummary;dn=897765004331538;res=IELAPA
Reese, J. (2018). The end of animal farming. Beacon Press.
Reiss, M. J. (2020). Robots as persons? Implications for moral education. Journal of Moral Education. https://doi.org/10.1080/03057240.2020.1763933
Reynolds, E. (2018). The agony of Sophia, the world’s first robot citizen condemned to a lifeless career in marketing. Wired. https://www.wired.co.uk/article/sophia-robot-citizen-womens-rights-detriot-become-human-hanson-robotics
Richardson, K. (2016). Sex robot matters: Slavery, the prostituted, and the rights of machines. IEEE Technology and Society Magazine, 35(2), 46–53. https://doi.org/10.1109/MTS.2016.2554421
Richardson, K. (2019). The human relationship in the ethics of robotics: A call to Martin Buber’s I and Thou. AI & Society, 34(1), 75–82. https://doi.org/10.1007/s00146-017-0699-2
Risse, M. (2019). Human rights, artificial intelligence and heideggerian technoskepticism: The long (Worrisome?) view. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3339548
Robertson, J. (2014). Human rights versus robot rights: Forecasts from Japan. Critical Asian Studies, 46(4), 571–598. https://doi.org/10.1080/14672715.2014.960707
Rodogno, R. (2017). Social robots: Boundaries, potential, challenges. In M. Nørskov (Ed.), Social robots: Boundaries, potential, challenges (1st ed., pp. 39–56). Routledge. https://doi.org/10.4324/9781315563084
Rosenthal-von der Pütten, A. M., Krämer, N. C., Hoffmann, L., Sobieraj, S., & Eimler, S. C. (2013). An experimental study on emotional reactions towards a robot. International Journal of Social Robotics, 5(1), 17–34. https://doi.org/10.1007/s12369-012-0173-8
Russell, A. C. B. (2009). Blurring the love lines: The legal implications of intimacy with machines. Computer Law & Security Review, 25(5), 455–463. https://doi.org/10.1016/j.clsr.2009.07.003
Sætra, H. S. (2019). Man and his fellow machines: An exploration of the elusive boundary between man and other beings. In F. Orban & E. StrandLarsen (Eds.), Discussing borders, escaping traps: Transdisciplinary and transspatial approaches (pp. 215–228). Waxmann Verlag GmbH. https://doi.org/10.31244/9783830990451
Saltz, J. S., & Dewar, N. (2019). Data science ethical considerations: A systematic literature review and proposed project framework. Ethics and Information Technology, 21(3), 197–208. https://doi.org/10.1007/s10676-019-09502-5
San José, D. G., Chung, D. C., Olsen, J. R., Lindhardtsen, J. Z. K., Bro, J. A., & Marckwardt, N. C. (2016). A philosophical approach to the control problem of artificial intelligence. https://core.ac.uk/reader/43033958
Sarathy, V., Arnold, T., & Scheutz, M. (2019). When exceptions are the norm: Exploring the role of consent in HRI. ACM Transactions on Human-Robot Interaction, 8(3), 1–21. https://doi.org/10.1145/3341166
Schafer, B. (2016). Closing Pandora’s box? The EU proposal on the regulation of robots. Pandora’s Box—The Journal of the Justice and the Law Society of the University of Queensland, 19, 55–68.
Scheessele, M. R. (2018). A framework for grounding the moral status of intelligent machines. In Proceedings of the 2018 AAAI/ACM conference on AI, ethics, and society (pp. 251–256). Presented at the AIES ’18: AAAI/ACM conference on AI, ethics, and society. ACM. https://doi.org/10.1145/3278721.3278743
Schmetkamp, S. (2020). Understanding A.I.—Can and should we empathize with robots? Review of Philosophy and Psychology, 11(4), 881–897. https://doi.org/10.1007/s13164-020-00473-x
Schwitzgebel, E., & Garza, M. (2015). A defense of the rights of artificial intelligences. Midwest Studies in Philosophy, 39(1), 98–119. https://doi.org/10.1111/misp.12032
Sentience Institute. (2020). FAQ. https://www.sentienceinstitute.org/faq#what-is-effective-altruism?
Seth, A. (2009). The strength of weak artificial consciousness. International Journal of Machine Consciousness, 1(1), 71–82. https://doi.org/10.1142/S1793843009000086
Sheliazhenko, Y. (2019). Computer modeling of personal autonomy and legal equilibrium. In R. Silhavy (Ed.), Cybernetics and algorithms in intelligent systems (Vol. 765, pp. 74–81). Springer.
Shneier, M., & Bostelman, R. (2015). Literature review of mobile robots for manufacturing (No. NIST IR 8022). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.IR.8022
Sijie, M. (2020). Intelligent robot functions and personality rights under ant colony optimization algorithm in the background of anti-discrimination. The Frontiers of Society, Science and Technology, 2(12), 52–59. https://doi.org/10.25236/FSST.2020.021209
Siponen, M. (2004). A pragmatic evaluation of the theory of information ethics. Ethics and Information Technology, 6(4), 279–290. https://doi.org/10.1007/s10676-005-6710-5
Sittler, T. M. (2018). The expected value of the long-term future. https://thomas-sittler.github.io/ltf-paper/longtermfuture.pdf
Slater, M., Antley, A., Davison, A., Swapp, D., Guger, C., Barker, C., et al. (2006). A virtual reprise of the Stanley Milgram obedience experiments. PLoS ONE, 1(1), e39. https://doi.org/10.1371/journal.pone.0000039
Smids, J. (2020). Danaher’s ethical behaviourism: An adequate guide to assessing the moral status of a robot? Science and Engineering Ethics, 26(5), 2849–2866. https://doi.org/10.1007/s11948-020-00230-4
Sommer, K., Nielsen, M., Draheim, M., Redshaw, J., Vanman, E. J., & Wilks, M. (2019). Children’s perceptions of the moral worth of live agents, robots, and inanimate objects. Journal of Experimental Child Psychology, 187, 104656. https://doi.org/10.1016/j.jecp.2019.06.009
Sotala, K., & Gloor, L. (2017). Superintelligence as a cause or cure for risks of astronomical suffering. Informatica, 41, 389–400.
Sparrow, R. (2004). The Turing triage test. Ethics and Information Technology, 6(4), 203–213. https://doi.org/10.1007/s10676-004-6491-2
Sparrow, R. (2012). Can machines be people? Reflections on the Turing triage test. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 301–316). MIT Press.
Sparrow, R. (2020). Virtue and vice in our relationships with robots: Is there an asymmetry and how might it be explained? International Journal of Social Robotics. https://doi.org/10.1007/s12369-020-00631-2
Spence, P. R., Edwards, A., & Edwards, C. (2018). Attitudes, prior interaction, and petitioner credibility predict support for considering the rights of robots. In Companion of the 2018 ACM/IEEE international conference on human-robot interaction (pp. 243–244). Presented at the HRI ’18: ACM/IEEE international conference on human-robot interaction. ACM. https://doi.org/10.1145/3173386.3177071
Spence, E. (2012). Luciano Floridi’s metaphysical theory of information ethics: A critical appraisal and an alternative neo-Gewirthian information ethics. In A. Mesquita (Ed.), Human interaction with technology for working, communicating, and learning: Advancements (pp. 134–148). IGI Global. https://doi.org/10.4018/978-1-61350-465-9
Spennemann, D. H. R. (2007). Of great apes and robots: Considering the future(s) of cultural heritage. Futures, 39(7), 861–877. https://doi.org/10.1016/j.futures.2006.12.008
Stapleton, L. (2018). Animals, machines, and moral responsibility in a built environment. Macalester College. Retrieved from https://digitalcommons.macalester.edu/cgi/viewcontent.cgi?article=1012&context=phil_honors
Starmans, C., & Friedman, O. (2016). If i am free, you can’t own me: Autonomy makes entities less ownable. Cognition, 148, 145–153. https://doi.org/10.1016/j.cognition.2015.11.001
Stone, C. D. (1974). Should trees have standing? Toward legal rights for natural objects. William Kaufmann.
Sullins, J. P. (2005). Ethics and artificial life: From modeling to moral agents. Ethics and Information Technology, 7(3), 139–148. https://doi.org/10.1007/s10676-006-0003-5
Sumantri, V. K. (2019). Legal responsibility on errors of the artificial intelligence-based robots. Lentera Hukum, 6(2), 331. https://doi.org/10.19184/ejlh.v6i2.10154
Summers, C. (2016). Can ‘Samantha’ vote? On the question of singularity, citizenship and the franchise. Presented at the humanities and technology association conference.
Suzuki, Y., Galli, L., Ikeda, A., Itakura, S., & Kitazaki, M. (2015). Measuring empathy for human and robot hand pain using electroencephalography. Scientific Reports, 5(1), 15924. https://doi.org/10.1038/srep15924
Swiderska, A., & Küster, D. (2018). Avatars in pain: Visible harm enhances mind perception in humans and robots. Perception, 47(12), 1139–1152. https://doi.org/10.1177/0301006618809919
Swiderska, A., & Küster, D. (2020). Robots as malevolent moral agents: Harmful behavior results in dehumanization, not anthropomorphism. Cognitive Science. https://doi.org/10.1111/cogs.12872
Taraban, R. (2020). Limits of neural computation in humans and machines. Science and Engineering Ethics, 26(5), 2547–2553. https://doi.org/10.1007/s11948-020-00249-7
Tavani, H. (2008). Floridi’s ontological theory of informational privacy: Some implications and challenges. Ethics and Information Technology, 10(2–3), 155–166. https://doi.org/10.1007/s10676-008-9154-x
Tavani, H. (2018). Can social robots qualify for moral consideration? Reframing the question about robot rights. Information, 9(4), 73. https://doi.org/10.3390/info9040073
Terstappen, G. C., & Reggiani, A. (2001). In silico research in drug discovery. Trends in Pharmacological Sciences, 22(1), 23–26. https://doi.org/10.1016/S0165-6147(00)01584-4
Theodorou, A. (2020). Why artificial intelligence is a matter of design. In B. P. Göcke & A. M. Rosenthal-von der Pütten (Eds.), Artificial intelligence: Reflections in philosophy, theology, and the social sciences (pp. 105–131). Mentis Verlag. https://doi.org/10.30965/9783957437488_009
Thompson, D. (1965). Can a machine be conscious? The British Journal for the Philosophy of Science, 16(61), 33–43.
Toivakainen, N. (2016). Machines and the face of ethics. Ethics and Information Technology, 18(4), 269–282. https://doi.org/10.1007/s10676-015-9372-y
Toivakainen, N. (2018). Capitalism, labor and the totalising drive of technology. In M. Coeckelbergh, J. Loh, M. Funk, J. Seibt, & M. Nørskov (Eds.), Envisioning robots in society: Power, politics, and public space: Proceedings of robophilosophy 2018/TRANSOR 2018, February 14–17, 2018, University of Vienna, Austria. IOS Press.
Tollon, F. (2019). Moral encounters of the artificial kind: Towards a non-anthropocentric account of machine moral agency. Stellenbosch University. Retrieved from https://core.ac.uk/download/pdf/268883075.pdf
Tollon, F. (2020). The artificial view: Toward a non-anthropocentric account of moral patiency. Ethics and Information Technology. https://doi.org/10.1007/s10676-020-09540-4
Tomasik, B. (2011). Risks of astronomical future suffering. Center on Long-Term Risk. https://longtermrisk.org/files/risks-of-astronomical-future-suffering.pdf
Tomasik, B. (2013). Differential intellectual progress as a positive-sum project. Center on Long-Term Risk. https://longtermrisk.org/files/Differential_Intellectual_Progress_as_a_Positive_Sum_Project.pdf
Tomasik, B. (2014). Do artificial reinforcement-learning agents matter morally? Center on Long-Term Risk. https://longtermrisk.org/do-artificial-reinforcement-learning-agents-matter-morally/
Tonkens, R. (2012). Out of character: On the creation of virtuous machines. Ethics and Information Technology, 14(2), 137–149. https://doi.org/10.1007/s10676-012-9290-1
Torrance, S. (2005). A robust view of machine ethics. Presented at the AAAI fall symposium: Computing machinery and intelligence. https://www.aaai.org/Papers/Symposia/Fall/2005/FS-05-06/FS05-06-014.pdf
Torrance, S. (2006). The ethical status of artificial agents—With and without consciousness. In G. Tamburrini & E. Datteri (Eds.), Ethics of human interaction with robotic, bionic and AI systems: Concepts and policies (pp. 60–66). Italian Institute for Philosophical Studies.
Torrance, S. (2008). Ethics and consciousness in artificial agents. AI & Society, 22(4), 495–521. https://doi.org/10.1007/s00146-007-0091-8
Torrance, S. (2011). Machine ethics and the idea of a more-than-human moral world. In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 115–137). Cambridge University Press. https://doi.org/10.1017/CBO9780511978036.011
Torrance, S. (2013). Artificial agents and the expanding ethical circle. AI & Society, 28(4), 399–414. https://doi.org/10.1007/s00146-012-0422-2
Torrance, S. (2014). Artificial consciousness and artificial ethics: Between realism and social relationism. Philosophy & Technology, 27(1), 9–29. https://doi.org/10.1007/s13347-013-0136-5
Torres, P. (2018). Space colonization and suffering risks: Reassessing the “Maxipok Rule.” Futures, 100, 74–85. https://doi.org/10.1016/j.futures.2018.04.008
Torres, P. (2020). Can anti-natalists oppose human extinction? The harm-benefit asymmetry, person-uploading, and human enhancement. South African Journal of Philosophy, 39(3), 229–245. https://doi.org/10.1080/02580136.2020.1730051
Turchin, A. (2019). You only live twice: A computer simulation of the past could be used for technological resurrection. https://philpapers.org/rec/TURYOL
Turchin, A., Batin, M., Denkenberger, D., & Yampolskiy, R. (2019). Simulation typology and termination risks. arXiv preprint. http://arxiv.org/abs/1905.05792
Turner, J. (2019). Rights for AI. In Robot Rules (pp. 133–171). Springer. https://doi.org/10.1007/978-3-319-96235-1_4
Tzafestas, S. G. (2016). Roboethics: A branch of applied ethics. In S. G. Tzafestas (Ed.), Roboethics: A navigating overview (pp. 65–79). Springer. https://doi.org/10.1007/978-3-319-21714-7_5
Umbrello, S., & Sorgner, S. L. (2019). Nonconscious cognitive suffering: Considering suffering risks of embodied artificial intelligence. Philosophies, 4(2), 24. https://doi.org/10.3390/philosophies4020024
Vadymovych, S. Y. (2017). Artificial personal autonomy and concept of robot rights. European Journal of Law and Political Sciences. https://doi.org/10.20534/EJLPS-17-1-17-21
Vakkuri, V., & Abrahamsson, P. (2018). The key concepts of ethics of artificial intelligence. In 2018 IEEE international conference on engineering, technology and innovation (ICE/ITMC) (pp. 1–6). Presented at the 2018 IEEE international conference on engineering, technology and innovation (ICE/ITMC). IEEE. https://doi.org/10.1109/ICE.2018.8436265
van den Berg, B. (2011). Robots as tools for techno-regulation. Law, Innovation and Technology, 3(2), 319–334. https://doi.org/10.5235/175799611798204905
van den Hoven van Genderen, R. (2018). Legal personhood in the age of artificially intelligent robots. In W. Barfield & U. Pagallo (Eds.), Research handbook on the law of artificial intelligence (pp. 213–250). Edward Elgar Publishing. https://doi.org/10.4337/9781786439055.00019
van Wynsberghe, A. (2013). Designing robots for care: care centered value-sensitive design. Science and Engineering Ethics, 19(2), 407–433. https://doi.org/10.1007/s11948-011-9343-6
Vanman, E. J., & Kappas, A. (2019). “Danger, will Robinson!” The challenges of social robots for intergroup relations. Social and Personality Psychology Compass. https://doi.org/10.1111/spc3.12489
Veruggio, G., & Abney, K. (2012). Roboethics: The applied ethics for a new science. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 347–364). MIT Press.
Vize, B. (2011). Do androids dream of electric shocks? Utilitarian machine ethics. Victoria University of Wellington. Retrieved from http://researcharchive.vuw.ac.nz/xmlui/bitstream/handle/10063/1686/thesis.pdf?sequence=2
Voiculescu, N. (2020). I, Robot! The lawfulness of a dichotomy: Human rights v. robots’ rights. Conferința Internațională de Drept, Studii Europene și Relații Internaționale [International Conference on Law, European Studies and International Relations], VIII(VIII), 3–14.
Wallach, W., Allen, C., & Smit, I. (2008). Machine morality: Bottom-up and top-down approaches for modelling human moral faculties. AI & Society, 22(4), 565–582. https://doi.org/10.1007/s00146-007-0099-0
Wallkötter, S., Stower, R., Kappas, A., & Castellano, G. (2020). A robot by any other frame: Framing and behaviour influence mind perception in virtual but not real-world environments. In Proceedings of the 2020 ACM/IEEE international conference on human-robot interaction (pp. 609–618). Presented at the HRI ’20: ACM/IEEE international conference on human-robot interaction. ACM. https://doi.org/10.1145/3319502.3374800
Wang, X., & Krumhuber, E. G. (2018). Mind perception of robots varies with their economic versus social function. Frontiers in Psychology, 9, 1230. https://doi.org/10.3389/fpsyg.2018.01230
Ward, A. F., Olsen, A. S., & Wegner, D. M. (2013). The harm-made mind: Observing victimization augments attribution of minds to vegetative patients, robots, and the dead. Psychological Science, 24(8), 1437–1445. https://doi.org/10.1177/0956797612472343
Wareham, C. (2013). On the moral equality of artificial agents. In R. Luppicini (Ed.), Moral, ethical, and social dilemmas in the age of technology: Theories and practice (pp. 70–78). IGI Global. https://doi.org/10.4018/978-1-4666-2931-8
Warwick, K. (2010). Implications and consequences of robots with biological brains. Ethics and Information Technology, 12(3), 223–234. https://doi.org/10.1007/s10676-010-9218-6
Warwick, K. (2012). Robots with biological brains. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 317–332). MIT Press.
Waser, M. R. (2012). Safety and morality require the recognition of self-improving machines as moral/justice patients and agents. In D. Gunkel, J. Bryson, & S. Torrance (Eds.), The machine question: AI, ethics, and moral responsibility. Presented at the AISB/IACAP World Congress 2012. The Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB). http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.446.9723&rep=rep1&type=pdf#page=93
Wegloop, A., & Vach, P. (2020). Ambiguous encryption implies that consciousness cannot be simulated. https://philarchive.org/rec/WEGAEI
Weller, C. (2020). Meet the first-ever robot citizen—A humanoid named Sophia that once said it would ‘destroy humans’. Business Insider. https://www.businessinsider.com/meet-the-first-robot-citizen-sophia-animatronic-humanoid-2017-10
Weng, Y.-H., Chen, C.-H., & Sun, C.-T. (2009). Toward the human-robot co-existence society: On safety intelligence for next generation robots. International Journal of Social Robotics, 1(4), 267–282. https://doi.org/10.1007/s12369-009-0019-1
Winsby, M. (2013). Suffering subroutines: On the humanity of making a computer that feels pain. In Proceedings of the international association for computing and philosophy (pp. 15–17). University of Maryland. https://www.semanticscholar.org/paper/Suffering-Subroutines%3A-On-the-Humanity-of-Making-a-Winsby/94124997fc2b7b24c719bb57d8ca3ba4f8d4c9aa
Wortham, R. H. (2018). Using other minds: Transparency as a fundamental design consideration for artificial intelligent systems. University of Bath. Retrieved from https://researchportal.bath.ac.uk/files/187920352/rhw_phd_dissertation.pdf
Wright, R. G. (2019). The constitutional rights of advanced robots (and of human beings). Arkansas Law Review, 71(3), 613–646.
Wu, T. (2012). Machine speech. University of Pennsylvania Law Review, 161, 1495–1533.
Wurah, A. (2017). We hold these truths to be self-evident, that all robots are created equal. Journal of Futures Studies. https://doi.org/10.6531/JFS.2017.22(2).A61
Yampolskiy, R. V. (2013). Artificial intelligence safety engineering: Why machine ethics is a wrong approach. In V. C. Müller (Ed.), Philosophy and theory of artificial intelligence (Vol. 5, pp. 389–396). Springer. https://doi.org/10.1007/978-3-642-31674-6_29
Yampolskiy, R. V. (2017). Detecting qualia in natural and artificial agents. arXiv preprint. https://arxiv.org/ftp/arxiv/papers/1712/1712.04020.pdf
Yanke, G. (2020). Tying the knot with a robot: Legal and philosophical foundations for human-artificial intelligence matrimony. AI & Society. https://doi.org/10.1007/s00146-020-00973-5
Yi, N., Nemery, B., & Dierickx, K. (2019). Integrity in biomedical research: A systematic review of studies in China. Science and Engineering Ethics, 25(4), 1271–1301. https://doi.org/10.1007/s11948-018-0057-x
Yoon-mi, K. (2010). Korea drafts ‘Robot Ethics Charter’. The Korea Herald. http://www.koreaherald.com/view.php?ud=20070428000021
Young, J. E., Hawkins, R., Sharlin, E., & Igarashi, T. (2009). Toward acceptable domestic robots: Applying insights from social psychology. International Journal of Social Robotics, 1(1), 95–108. https://doi.org/10.1007/s12369-008-0006-y
Zenor, J. (2018). Endowed by their creator with certain unalienable rights: The future rise of civil rights for artificial intelligence. Savannah Law Review, 5(1), 115.
Zhang, B., & Dafoe, A. (2019). Artificial intelligence: American attitudes and trends. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3312874
Ziesche, S., & Yampolskiy, R. (2018). Towards AI welfare science and policies. Big Data and Cognitive Computing, 3(1), 2. https://doi.org/10.3390/bdcc3010002
Ziesche, S., & Yampolskiy, R. V. (2019). Do no harm policy for minds in other substrates. Journal of Evolution and Technology, 29(2), 1–11.
Acknowledgements
Many thanks to Tobias Baumann, Andrea Owe, Brian Tomasik, Roman Yampolskiy, Nick Bostrom, Sean Richardson, Kaj Sotala, and the anonymous reviewers at Science and Engineering Ethics for providing feedback on earlier drafts of this article.
Funding
The authors have no relevant financial or non-financial interests to disclose.
Author information
Contributions
Both authors contributed to the study conception and design. Data collection and analysis were performed by JH. The first draft of the manuscript was written by JH, and JRA commented on previous versions of the manuscript. Both authors read and approved the final manuscript.
Ethics declarations
Conflict of interest
The authors have no conflicts of interest to declare that are relevant to the content of this article.
Ethics approval
Not applicable.
Consent to participate
Not applicable.
Consent for publication
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The original online version of this article was revised: one reference in the originally published version was incorrect, which prevented the article from being indexed. The correct reference reads: Anthis, J. R., & Paez, E. (2021). Moral circle expansion: A promising strategy to impact the far future. Futures, 130, 102756. https://doi.org/10.1016/j.futures.2021.102756.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Harris, J., Anthis, J.R. The Moral Consideration of Artificial Entities: A Literature Review. Sci Eng Ethics 27, 53 (2021). https://doi.org/10.1007/s11948-021-00331-8
Keywords
- Artificial intelligence
- Robots
- Rights
- Moral consideration
- Ethics
- Philosophy of technology