Abstract
Whether copyrights should exist in content generated by an artificial intelligence is a frequently discussed issue in the legal literature. Most of the discussion focuses on economic rights, whereas the relationship of artificial intelligence and moral rights remains relatively obscure. However, as moral rights traditionally aim at protecting the author’s “personal sphere”, the question whether the law should recognize such protection in content produced by machines is pressing; this is especially true considering that artificial intelligence is continuously being further developed and is increasingly hard for human beings to comprehend. This paper first provides the background on the protection of moral rights under existing international, U.S. and European copyright laws. On this basis, the paper then proceeds to highlight special issues in connection with moral rights and content produced by artificial intelligence, in particular whether an artificial intelligence itself, the creators or the users of an artificial intelligence should be considered as owners of moral rights. Finally, the present research discusses possible future solutions, in particular alternative forms of attribution rights or the introduction of related rights.
1 Introduction
The idea for this paper stems from a conference paper submitted to IRIS 2019 (Miernicki and Ng 2019).
Artificial Intelligence (AI)Footnote 2 is often considered a disruptive technology. The implications of this technology are not confined to the industrial sector, but extend to numerous artistic fields, such as the creation of music or works of art (Bridy 2016; Niebla Zatarain 2018; Schönberger 2018). As the technology promises great advances in a wide variety of contexts and areas of research, massive initiatives and investments are being undertaken; this also applies at the political level (European Commission 2018b). However, the use of AI can have far-reaching consequences, and many problems are still not fully explored. This relates, for instance, to the technology’s philosophical and economic implications, but also to the legal framework that governs its use. Widely discussed legal questions include liability issues, especially in the context of autonomous driving (Collingwood 2017), data protection (Kuner et al. 2018), as well as the protection of AI and its products (Abbott 2016; Vertinsky and Rice 2002) under copyright law (Grimmelmann 2016a, b). This also relates to machine learning (Surden 2014). The copyright-related literature has, as far as can be seen, focused on the question of whether economic rights exist or should exist in AI-generated content. The relationship between AI and moral rights, however, has not been studied to a comparable extent (Miernicki and Ng 2019; Yanisky-Ravid 2017). Against this background, this research analyzes the relationship between AI and moral copyrights. For the sake of completeness, we will also allude to economic rights in AI-generated content where appropriate.
The relation between AI and copyright can be studied in two major respects. Commonly, the legal literature distinguishes issues that arise “upstream” from those that arise “downstream”. Upstream refers to legal issues that arise from making a machine capable of generating content; this includes the implications of machine learning and the question whether existing exceptions and limitations to copyright apply (or new ones should be enacted) (Grimmelmann 2016a, b; Stewart 2014). Moreover, this relates to the questions of whether and how AI could be “taught” to avoid copyright infringement or to “interpret” legal provisions (Schafer et al. 2015). However, for the purposes of this contribution, we will focus on the downstream issues of AI and copyright. This refers to the content generated by the AI and explores the question whether or to what extent copyright law grants protection to this content; this naturally encompasses the question to whom (or what) the law should allocate (exclusive) copyrights (Schafer et al. 2015; Schönberger 2018).
2 Legal background
Moral rights acknowledge that authors have personal interests in their creations and the corresponding use that is made of them. These interests are conceptually different from the economic or commercial interests protected by the author’s economic rights, which are typically understood to enable the author to derive financial gain from her creation (Rigamonti 2007). Moral rights thus aim to protect the non-economic interests of the author; this is often justified with reference to a “presumed intimate” bond of the creator with his or her creations (Rigamonti 2006). In this light, moral rights protect the personality of the author in the work (Biron 2014). Not surprisingly, different jurisdictions have varied takes on this school of thought and do not interpret this ideology to the same extent. In this regard, it is generally said that common law jurisdictions are more hesitant to grant moral rights than civil law jurisdictions (Rigamonti 2007; Schére 2018). This might explain why—even though moral rights can be found in various jurisdictions throughout the world—the degree of international harmonization with regard to moral rights is rather low (Rigamonti 2006). The general principles are set forth by the Berne Convention (WIPO 1979); its Article 6bis states that “the author shall have the right to claim authorship of the work and to object to any distortion, mutilation or other modification of, or other derogatory action in relation to, the said work, which would be prejudicial to his honor or reputation.” As can be seen, the Berne Convention provides for two distinct moral rights: the right of attribution and the right to the integrity of the work (Rigamonti 2006; cf. U.S. Court of Appeals 1st Circuit 2010).
The right of attribution generally includes the right to be recognized as the author of a work, so that users of the work as well as the public in general will associate the work with its creator (Ciolino 1995); this is also linked to a certain kind of social recognition and appreciation, and recognition for one’s work is sometimes deemed a “basic human desire” (U.S. Copyright Office 2019). The right to integrity, in turn, refers to the author’s interest in not having his or her work altered drastically or used in a prejudicial way. Whether the right to integrity has been infringed depends heavily on the individual case and, importantly, on the context of the use. Under this rule, moral rights could be infringed if, for instance, a song is rearranged or used for a purpose completely different from the author’s intentions (Ricketson and Ginsburg 2006).
Moral rights came into special focus in the U.S. legal regime with the accession of the United States to the Berne Convention (Ciolino 1995). At that time, legislative changes were not made because the moral rights contained in the convention were, in Congress’s opinion, already provided for to a sufficient extent under U.S. law (U.S. Copyright Office 2019). Later, moral rights were explicitly recognized in the Visual Artists Rights Act (17 U.S.C. § 106A); however, the scope of the act is relatively narrow (U.S. Copyright Office 2019; cf. U.S. Court of Appeals 1st Circuit 2010). In fact, the transposition of the Berne Convention’s requirements into U.S. law as regards moral rights has long been a source of controversy (Ginsburg 2004; Rigamonti 2006). In any event, it is fair to look not only at the U.S. Copyright Act, but also at other laws on the federal level as well as common law claims that can arise under state law, for instance (Rigamonti 2007; U.S. Copyright Office 2019).
Given its aim of strengthening the internal market, European copyright law has focused on the harmonization of economic rights, while moral rights have never been harmonized to a comparable extent (Sirvinskaite 2010). To this end, a number of directives explicitly clarify that they are not meant to apply to moral rights, which may or may not be granted under the member states’ laws (Council of the European Communities 1993; European Parliament and Council of the European Union 1996, 2006; European Parliament and the Council 2001, 2009, 2012, 2019). The member states thus retain a great amount of freedom in respect of their moral rights legislation (von Lewinski and Walter 2010), and it appears that this area of copyright law will not be dealt with on the European level in the near future (cf. Commission of the European Communities 2004). At the same time, however, the lack of harmonization has been called an “intrinsic obstacle” to the development of a common European copyright legislation (Sirvinskaite 2010).
As can be seen, there is—both on the international and the European level—little harmonization across borders in the context of moral rights (Pettenati 2000), and even the degree of harmonization in the Berne Convention is not uncontroversial. Strictly speaking, there is not even a common concept of moral rights in countries that follow the civil law tradition. In general, one distinguishes the “Monist Theory” and the “Dualist Theory” (Chisolm 2018; Rigamonti 2007). While moral rights are genuinely based on copyright and are closely integrated with economic rights under the Monist Theory, the Dualist Theory derives moral rights from a general personality right (Rigamonti 2007). Hence, a clear distinction between moral rights and economic rights is not always possible. In fact, an author could derive financial value from her right to be identified as the author (a “traditional” moral right) (Hansmann and Santilli 1997; Tang 2012) or use her right to copy (a “traditional” economic right) to prohibit the reproduction of her work for various non-economic reasons. Moreover, national laws can grant additional moral rights within the framework of the Berne Convention (e.g., withdrawal rights) (Ricketson and Ginsburg 2006). Hereinafter, however, when referring to moral rights, we refer to moral rights as traditionally understood (attribution and integrity) in order to highlight the special issues that occur in the context of AI-generated content.
3 Protection of AI-generated content
The key question of much of the copyright-related debate on AI is whether or to what extent copyrightable works require or should require human action. Under the Berne Convention, there is no clear definition of the concept of “authorship”. However, it is strongly suggested that the convention only refers to human creators (Ginsburg 2018; Ricketson 1991), thereby excluding AI-generated content from its scope. This means that the minimum standard set forth by the Berne Convention only applies to works made by humans.
U.S. copyright law affords protection to “original works of authorship” (17 U.S.C. § 102(a)). This language is understood as referring to creations of human beings only (Clifford 1997; U.S. Copyright Office 2017); accordingly, the law denies copyright protection for the “creations” of animals, the so-called “monkey selfie” case (U.S. District Court Northern District of California 2016)Footnote 3 being a notable example of the application of this principle in the courts. This equally applies to content produced by machines (U.S. Copyright Office 2017) or, in the present context, AI-generated content (Abbott 2016; Yanisky-Ravid 2017). In consequence, such content is not copyrightable, unless a human author is found to have contributed creative input.
Since there is no general EU copyright code but rather several directives, each with its respective scope, it is not easy to distill the concept of authorship under EU law. However, it is possible to infer some guidance from the language of the different directives. On the one hand, the “own intellectual creation” standard is set forth in respect of databases, computer programs and photographs (European Parliament and Council of the European Union 1996, art. 3(1); European Parliament and Council of the European Union 2006, art. 6; European Parliament and the Council 2009, art. 1(3); see also European Parliament and the Council 2019, art. 14). On the other hand, the “Copyright Directive” (European Parliament and the Council 2001) refers to “authors” and “works”. With regard to the first standard, the ECJ establishes a connection to the author’s personality (European Court of Justice 2011a; European Court of Justice 2012), a line of argumentation that can also be found in connection with the “Copyright Directive” (European Parliament and the Council 2001) (European Court of Justice 2008; European Court of Justice 2011b). Accordingly, many commentators conclude that human authorship is required under European copyright law (Handig 2009; Ihalainen 2018; Miernicki and Ng 2019; Niebla Zatarain 2018).
Conversely, member states may decide to expand copyright protection to content produced by machines and AI (Miernicki and Ng 2019). In this connection, the UK Copyright, Designs and Patents Act (United Kingdom 1988) should be mentioned. Under this act, the author of a “computer-generated” work (such a work “is generated by computer in circumstances such that there is no human author of the work”, see s. 178) is considered to be “the person by whom the arrangements necessary for the creation of the work are undertaken” (s. 9) (MacCutcheon 2013). What is meant by “arrangements”, however, remains a source of debate (Davies 2011; Perry and Margoni 2010; Lambert 2017). Most importantly, however, the CDPA exempts computer-generated works from the moral rights framework, so that no attribution or integrity rights are conferred upon the authors of such works (ss. 79, 81).
4 Ownership in AI-generated content
The copyrightability of AI-generated content is inherently linked to the ownership of the corresponding copyrights. Much has been written about which persons (if any) should be considered when assigning those rights (see, e.g., Bridy 2012; Davies 2011; Galajdová 2018; Glasser 2001; Hristov 2017; Perry and Margoni 2010; Wu 1997; Yu 2017). In the following, we discuss the AI itself, the creators of the AI and the users of the AI as possible holders of moral rights.
4.1 AI as the owner of moral rights
The concept of AI as a possible owner rests on the fact that AI is not simply an automatic system but could be an autonomous one. While the development of an AI system, including data mining, machine learning and training processes, is normally supervised by humans, recent advancements in AI technology have enabled AI to learn from other AIs, in a process called “kickstarting”.Footnote 4 The “black box” of AI (Knight 2017), where developers of AI networks are unable to explain why their AI programs have produced a certain result, further reinforces the perception that AI is autonomous and capable of having its own persona and decision-making abilities.
However, it should be noted that any discussion centering on AI as copyright owners is essentially a discussion de lege ferenda. This is because, for machines to be granted rights, some form of legal personality would be required (Bridy 2012; Yu 2017). It is our impression that this is not the case in many jurisdictions in Europe, America and Asia. In turn, whether machines should be granted some form of legal personality is in fact discussed on different levels. This relates, of course, to academic literature (Günther et al. 2012; Paulius et al. 2017; Solaiman 2017), but also to legislative initiatives (Committee on Legal Affairs 2015).Footnote 5 However, the discussion in this context often centers on liability issues, which are a different question from the grant of moral copyrights (Miernicki and Ng 2019); the attribution of liability is not the same as the allocation of exclusive rights, although the former must certainly be considered when acknowledging AI as legal persons. This is especially true if legislation granting AI the status of a legal person (similar to a company or a partnership) would apply across other jurisdictions unless specifically precluded there.
As aforementioned, moral rights, in the most basic scenario, aim at protecting interests such as being recognized as the author of a work or objecting to certain modifications to it. Hence, one must ask (1) whether an AI can have such interests and (2) whether the law deems such interests worthy of protection against the background of the theoretical basis of moral rights. The first aspect should be distinguished from the related question whether a machine can be “creative” in the way humans are. This is, in fact, highly debated (Bridy 2012; Bridy 2016; McCormack et al. 2019; Schafer et al. 2015; Yanisky-Ravid 2017). However, we think it is fair to assume that machines already have, from today’s perspective, the potential to produce works which would be indistinguishable from human creations (Holder et al. 2016) and thus would, in principle, meet the originality threshold for copyright protection. As can be seen, we consider the concept of creativity in strong proximity to the originality threshold. This threshold is—generally speaking—rather low in the EU and the United States, for instance.Footnote 6 Thus, from this perspective, copyright protection should be considered.
However, in our opinion, it is questionable whether the ability to produce copyrightable content is equivalent to having a personality sphere that moral rights aim to protect. It seems that “creativity” or, as the law of many countries calls it, originality, embodied in a work is a basic prerequisite for enjoying moral rights, as otherwise there would simply be no protectable subject matter (i.e., a work) (Rigamonti 2007).Footnote 7 Yet moral rights are based on the additional concept of a “personality” that specifically addresses the author’s non-economic interests (Perry and Margoni 2010).Footnote 8
Since the right of personality theory, much like the moral rights orthodoxy, understands the work to be an expression of the author's personhood, it is easy to see how authors complaining about unauthorized modifications of their works, for instance, could argue that the modifications in question violated their personhood expressed in their works (Rigamonti 2007).
This is, as stated above, a concept different from legal personality in the sense of the attribution of liability. Thus, we consider the concept “personality” in strong proximity to the non-economic interests protected by moral rights and distinguish these concepts from the originality—creativity dichotomy. From a conceptual perspective, the requirements of both dichotomies must be fulfilled for moral rights to be justified. This is illustrated by Table 1.
In light of the conceptual framework set forth above, the question remains whether an AI can have a personality sphere and hence related non-economic interests. This issue is, at first glance, related to technological developments and contingent upon the definition of said terms.Footnote 9 Against the background of moral rights theory, however, it seems that this determination is not only based on strict logical reasoning. While economic rights can first and foremost be explained by the objective aim to incentivize the production of creative works, moral rights appear to be highly based on ideological values, the legal recognition of which is the task of the legislator and not lawyers or economists. As already noted above, this is not to say that moral rights cannot be of economic value: e.g., the right to be named as the author can be used to build up the author’s reputation, leading to increasing sales, or the author may (depending on the jurisdiction) choose not to execute this right against a monetary payment when she serves as a ghostwriter (Davies 2011). Furthermore, moral rights can also be seen as a way to incentivize authors (cf. U.S. Court of Appeals 2nd Circuit 1995; U.S. Court of Appeals 1st Circuit 2010). However, these examples appear to be rather ancillary effects; the protection of the author’s personality remains the primary objective and justification for the introduction of moral rights (cf. U.S. Court of Appeals 2nd Circuit 1995; U.S. Court of Appeals 1st Circuit 2010). The “author’s personality” is, of course, an elusive concept; indeed, not all jurisdictions recognize the protection of non-economic interests to the same extent. This can best be illustrated by the small degree of international harmonization with regard to moral rights (see above) and the inverse situation in respect of economic rights. Especially with regard to a personality sphere for an AI, the question can be complicated by ethical and religious concepts (Davies 2011).
Against this background, we can only give our opinion whether moral rights for an AI would be consistent with the development and rationales of moral rights as well as their foundations de lege lata. It appears to us that the moral rights ideology is closely connected to the creations made by human beings (Miernicki and Ng 2019); thus, applying moral rights to an AI would constitute an extension of the underlying rationales of moral rights and would therefore require additional justification.
4.2 Creators or users of an AI as the owners of moral rights
Should humans who interact with the AI—i.e., the creators or the users—have moral rights in the content produced by the AI? Where software is a tool used by humans, the general rules apply: where the programmer contributes creative effort, she is awarded a copyright, the extent of which is determined by national law. However, the situation is different where the AI produces the content without original input contributed by a human being (Miernicki and Ng 2019).
In order to analyze this question, it is helpful to conceptualize the different roles of the AI and the involved humans as follows: The programmer creates the AI and holds, generally speaking, a copyright in a literary work (cf. WTO 1994, Art 9 et seq.). Thus, under ordinary circumstances, the AI constitutes copyrightable subject-matter and is referred to hereinafter as the “first generation work” because it directly stems from the programmer’s creative effort (cf. Yanisky-Ravid and Velez-Hernandez 2018). Going a step further, if this work (the software) generates new content, we could refer to this content as a “second generation work” since—provided that the programmer did not intervene at all—it only indirectly stems from the programmer’s creative work.Footnote 10 Now the question arises whether the creator has a special non-economic connection to the “second generation work” that could be protected by moral rights. To answer this question, it is useful to recall the fundamental rationales of moral rights: these rights are granted because the work represents “an extension of the author’s personhood” (Rigamonti 2006); the author’s personal traits are, so to say, “embodied” in the work (cf. Rosenthal Kwall 2010). Conversely, in the absence of this creative endeavor, there is a lack of the intimate connection between the creator of the AI and the produced content that moral rights have traditionally sought to protect (Miernicki and Ng 2019). In this light, the relationship between the programmer and the AI’s output is “radically mediated” (Bridy 2012; cf. Perry and Margoni 2010). Thus, from a conceptual perspective, the “second generation work” does not carry the programmer’s personal traits to an extent comparable to the “first generation work” (Miernicki and Ng 2019).
As can be seen, this proposition constitutes the reverse of the situation observed in connection with the AI as a holder of moral rights: this time, humans who can have non-economic interests in the moral rights sense are involved as potential right holders; however, the degree of creativity in the production of the “second generation work” is questionable. This lack of creativity also seems to be the reason why moral rights are expressly excluded from the protection of computer-generated works in the UK (Miernicki and Ng 2019). The legislative materials contain a passage stating that:
[m]oral rights are closely concerned with the personal nature of creative effort, and the person by whom the arrangements necessary for the creation of a computer-generated work are undertaken will not himself have made any personal, creative effort (U.K. House of Lords 1988).
In a similar fashion (however, with a different justification) the WIPO Committee of Experts argued:
Since computer-produced works have no identifiable authors, it is necessary to include specific provisions in the possible Protocol concerning original ownership and the term of protection of copyright in such works; for the same reasons, moral rights would not be applicable in the case of such works (WIPO 1991).
Against this background, granting moral rights would amount to protecting not only the author’s “spiritual child” (cf. Ciolino 1995; Yanisky-Ravid 2017) but also its “spiritual grandchild”. One could of course carry the idea a bit further: What if the AI generates another AI that itself produces “creative content”? Are there non-economic interests in such a “third generation work” that could be harmed, e.g., by its modification? In our view, the connection to the human author becomes genuinely elusive and an “intimate bond” is increasingly hard to observe. In this regard, we suggest that the principle of the “first generation work” and the “second generation work” should apply similarly—i.e., this would be an “x generation work” that is traceable to the original creator of the first AI that spawned the subsequent AI-progeny. This is in fact sometimes analogized with the relation between a parent and her children’s creations (Yanisky-Ravid 2017). This is even more apparent with users/operators (Schafer et al. 2015) of AI, because in many situations they will not have contributed any creative input (not even in the process of creating the “first generation work”), but merely triggered or instigated the AI’s productive process.
One could also look at this issue from the perspective of the idea-expression dichotomy. This dichotomy describes the well-known principle that expression is generally protected by copyright, whereas ideas are generally not. In the case of content autonomously produced by an AI, while one could consider the content as “expression”, there is a conspicuous absence of an idea from which the expression emanates (Glasser 2001; Schafer et al. 2015; McCormack et al. 2019). Thus, the fact that the programmer—by creating the AI—“laid the foundations” for the generation of new content is not enough to justify moral rights protection. In other words: it is not enough to have the abstract intention to create “something”; the intention to create is by itself not an expression of an individual personality protected by moral rights, and neither is the intention to create an “artwork” or a “text”.
As mentioned before, the situation is different where the AI serves as a tool that helps the author to materialize her creative vision (Glasser 2001). Only in this situation is the recognition and appreciation connected to the grant of moral rights well founded. Clearly, it must be determined in the individual case whether the programmer or user of an AI contributed creative input; this analysis is not meant to limit copyrightability to a minimum and can in fact extend further than one might think. Consider the “monkey selfie” case referred to above. One might argue that the owner of the camera that was used by the monkey should be awarded copyright protection in pictures shot by the animal because he undertook arrangements and a creative selection process (Guadamuz 2016).
5 Attribution, delineation and evidence
Some authors highlight delineation issues, stating that it can be very hard to determine whether (or to what extent) a human being contributed original content (Butler 1982; Denicola 2016; Guadamuz 2017; Hernandez 2018; Yanisky-Ravid 2017). While this is undoubtedly true (even with full knowledge of the facts of the case), it does not mean that copyright (and, specifically, moral rights) protection in AI-generated content should be granted just for the sake of circumventing or “streamlining” this issue. Very similar issues arise in connection with co-authorship, where it can easily be unclear whether a person, based on her contributions to the final work, should be considered a co-author. While this determination can require a cumbersome analysis, one should not treat every person involved in even a minimal way in the process of the creation of the work as a co-author just to resolve the issue of authorship more efficiently. The law requires this analysis not to save costs, but rather to serve the greater goal of copyright law as well as a fair allocation of moral rights.Footnote 11 Copyright law will always involve difficult delineation issues, and many questions are—also in connection with AI—not new (Grimmelmann 2016a, b). Alternative solutions have been proposed: one could, for instance, introduce a new category of attribution right that distinguishes proper “authorship” from mere “contributorship” (Bently and Biron 2014).Footnote 12 Applied to AI, this would mean that the programmers or users of the AI would be considered as “contributors” rather than “authors”. This could be coupled with an indication that the work was produced by an AI. One could understand the latter not as an attribution right for the AI, but rather as a form of “labelling duty” when exercising the attribution right with regard to “contributorship”. This solution could also address concerns that human creativity would be endangered by AI-generated content (cf. Schönberger 2018).
The possible framework is outlined in Table 2 below.Footnote 13
A practical problem that cannot be easily resolved—neither in connection with economic nor with moral rights—relates to establishing whether a work is actually the product of human creative endeavors. After all, why should the creator or user of an AI disclose that content was AI-generated? In many cases, there will be a strong incentive to conceal this fact in order to receive full protection under copyright law (Abbott 2016; McCormack et al. 2019; Samuelson 1986), at least with regard to those jurisdictions that require human authorship. Admittedly, evidentiary issues also occur in other situations, for example, if more than one person is involved in the creation of a work. However, if one of these persons claims to be the sole author, there are other persons to contest this assertion (Butler 1982). In the case of AI, it is, for the time being, hardly conceivable that the software would be able to claim and enforce any rights against its creator or user, even if the law granted the AI certain rights.
A straightforward solution would be to give the copyrights to “someone”, e.g., the users or creators of the AI (Samuelson 1986). However, concerns similar to those observed with respect to substantive delineation problems arise. At least, unlike in the co-authorship situation, allocating the rights in AI-generated content would not deprive any other persons of rights, because the alternative would be that the output falls into the public domain (Grimmelmann 2016a, b). However, the potentially enormous quantities of AI-produced content can generate a different kind of cost, relating to the enforcement of the rights that would exist in this content. Consider, for instance, the “art experiment” centering around Qentis, a (fictive) company that claimed to have developed an AI that had generated virtually all possible texts of between ten and four hundred words; the company intended to collect royalties from anyone who would use its content (Niebla Zatarain 2017). Even though it would be practically infeasible to implement and maintain such a system (Komuves et al. 2015), similar “business models” are not unlikely to arise if AI-generated content were copyrightable (Denicola 2016; Koboldt 1995).Footnote 14 The opposing party would then have to prove that the work was independently created and not copied or modified. The costs of such potential litigation should not be underestimated (Perry and Margoni 2010; Wu 1997).
Conversely, there could be situations where the creator or user of an AI does not want to be associated with the AI that generated the content. This could involve situations where the AI has “run amok” and produces distasteful or illegal content, such that attribution of this content would result in reputational damage or potential criminal charges. The “black box” nature of AI makes such a situation even more likely, as developers of AI networks are often unable to explain or control the results of their creations; in fact, Microsoft’s chatbot Tay, which was trained on the social media platform Twitter, is an example of such an AI system that went rogue. Tay was intended to learn how to understand conversations by mimicking the language of Twitter users; however, less than 24 h after the chatbot was put online, Microsoft stopped posts from that account due to it posting “obscene statements” (Victor 2016). Should a criminal case be lodged against an AI developer for creating an AI system that produces illegal content, the questions of whether an AI can hold moral copyrights and of the test for determining when an AI developer would be liable for his creation become relevant. However, it should be highlighted that the attribution of liability and the attribution of moral rights are separate and distinct legal concepts, and different laws would be administered under both the common law and civil law systems.
A radical solution to the evidentiary problems with respect to the contribution of a human being would be to shift the burden of proof to the person who claims to be the author, requiring her, in case of doubt or upon the other party’s request, to establish that the content in question stems from a process of human creation. Clearly, this appears to be a burdensome task. However, it might just be the case that lawyers and courts currently lack practical experience with such proceedings, and such experience can be acquired over time. What is more, if the content was actually created by a human being, it should not be too hard to establish this fact. In any event, while the shift of the burden of proof might not appear to be the most pressing issue today, it could become more relevant should machine-made content proliferate; in fact, already from today’s perspective, there is no safe way to tell how much AI-generated content has already been disseminated (Grimmelmann 2016a, b).
A further issue relates to unjustified claims of authorship; in this light, absent a copyright-based attribution right, it is worth considering whether such claims can be addressed by employing instruments offered by other legal fields, e.g., the laws against unfair competition or potentially trademark law (Tang 2012).
6 A related rights approach?
In light of the foregoing, we believe that granting moral rights in AI-generated content is, in principle, not compatible with the traditional rationale of these rights. Apart from the alternative models to the attribution right that should be considered, it remains to discuss whether or to what extent there should be economic rights and how such rights would fit into the copyright system. This is in many respects a question of whether such rights can produce beneficial incentives (either for investing in the development of AI or for publishing its results), a question which has been discussed at length (Davies 2011; Glasser 2001; Grimmelmann 2016a, b; McCutheon 2013; Perry and Margoni 2010; Samuelson 1986; Yu 2017); we have already made some arguments above that would also apply to economic rights and do not delve further into this debate in this research. For the present purposes, and from the moral rights perspective, it would be a conceivable regulatory option to grant—similar to the solution found in the UK—certain economic rights but no or only limited moral rights; this resembles the legal situation with regard to related rights (Miernicki and Ng 2019).Footnote 15 Since such a “middle-ground solution” might be more easily compatible with the different views that exist on moral rights, e.g., with regard to their scope, transfer or ownership by legal persons (Miernicki and Ng 2019; cf. Denicola 2016; Ory and Sorge 2019), it might be more likely to find a consensus for the international harmonization of this matter. However, a related rights solution also runs the risk of triggering a proliferation of protected content.
7 Conclusion
As a general principle, we believe that no moral rights should be granted in AI-generated content under existing laws and principles. While there is extensive debate on the possibility of AI having its own legal personality and thus being capable of holding moral rights, much of this discussion centers on AI liability—which involves different considerations vis-à-vis moral rights. Even in light of arguments for AI to be granted moral rights, there is nuance in the types of moral rights that should or should not be granted to AI. First, it might be necessary to distinguish between the different forms of moral rights: While the right to integrity is perhaps fundamentally rooted in the personal sphere of the author, the attribution right can also be explained on the basis of other foundations. In this connection, intermediate solutions are conceivable, such as the introduction of a “contributorship” right, although practical difficulties remain, such as establishing whether a work is indeed the product of human creative endeavors and the associated evidentiary problems. One possible strand of thoughtFootnote 16 in favor of acknowledging moral rights is that these rights serve a higher public function—attributing moral rights to the original creators (who arguably have the greatest interest in protecting their own works) could be understood as serving not only the authors’ own interests but also the public interest in the integrity and societal status of creative works in general.Footnote 17 If such a public function exists, there may be an argument for protecting AI works for the greater good, and thus a responsible person should be designated under the law to protect AI-generated works. While national laws that provide for perpetual moral rights might point in this direction, we would like to reserve this discussion for future research.
Clearly, it is not easy to see where the technological development will take us; however, in connection with moral rights, a fundamental question becomes apparent: When will AI systems have a “personality” that the law recognizes? In fact, why should anyone program an AI with interests of a “personal nature” that could be protected by moral rights? Perhaps it is indeed as some philosophers have argued on the topic of AI as artist—“creativity is, and always will be, a human endeavor” (Kelly 2019). And, even if this should become the case at some point, we still have to decide, from an ideological standpoint, whether we should recognize AI, in this respect, as equal to us; in this light, granting moral rights to AI would truly start the “post-human era” (Stewart 2014). However, if this era starts, “copyright will be the least of our concerns” (Grimmelmann 2016a, b; see also Clifford 1997).
Data availability
Not applicable.
Code availability
Not applicable.
Notes
There is no universally accepted definition of AI; in fact, many related concepts, like “robots”, “machine learning” and “AI” overlap (Lambert 2017). The European Commission, for instance, defines AI as referring to “systems that display intelligent behavior by analyzing their environment and taking actions – with some degree of autonomy – to achieve specific goals” (European Commission 2018a). For the purposes of this paper, we understand AI as the ability of computer software to produce content that meets the requirements of copyright protection or that would be copyrightable if created by a human being.
(“[T]here is no mention of animals anywhere in the [Copyright Act]. The Supreme Court and Ninth Circuit have repeatedly referred to "persons" or "human beings" when analyzing authorship under the Act […] Naruto is not an "author" within the meaning of the Copyright Act”).
Kickstarting here refers to the method of using a ‘teacher’ AI agent to “kickstart the training of a new ‘student’ (AI) agent” (Schmitt et al. 2018).
There, the Committee on Legal Affairs referred to a specific status of “electronic persons”. It is noteworthy that the proposal received severe criticism, see, e.g., Open Letter to the European Commission, Artificial Intelligence and Robots, www.robotics-openletter.eu (accessed 14 January 2020).
“That being so, given the requirement of a broad interpretation of the scope of the protection conferred by Article 2 of Directive 2001/29, the possibility may not be ruled out that certain isolated sentences, or even certain parts of sentences in the text in question, may be suitable for conveying to the reader the originality of a publication such as a newspaper article, by communicating to that reader an element which is, in itself, the expression of the intellectual creation of the author of that article. Such sentences or parts of sentences are, therefore, liable to come within the scope of the protection provided for in Article 2(a) of that directive” (European Court of Justice 2008); “These choices as to selection and arrangement, so long as they are made independently by the compiler and entail a minimal degree of creativity, are sufficiently original that Congress may protect such compilations through the copyright laws” (U.S. Supreme Court 1991).
One could argue that, where moral rights are derived from a general personality right, copyrightable subject-matter is not a necessary requirement.
To the extent that one considers economic rights to serve the protection of the author’s personal interests, this argument would in principle apply also to those rights.
The problems of definition appear to be especially challenging; a comparison with human behavior (e.g., responses to prejudices to the “moral sphere”) is difficult because every human is affected differently and infringements of personality rights are far harder to quantify than economic damages.
There are examples, however, where the law takes a pragmatic rather than an ideological approach to rights ownership; a prominent example would be the copyright in cinematographic works (cf. WIPO 1979, art. 14bis).
In different forms, this is already practiced in several contexts; cf., e.g., “All contributors who do not meet the criteria for authorship should be listed in an Acknowledgements section” (BMJ Author Hub 2018).
Note that we do not consider under which circumstances two persons can be considered as coowners of a work. This is essentially a question of national law; see, e.g., for U.S. law 17 U.S.C. § 201(a); “A “joint work” is a work prepared by two or more authors with the intention that their contributions be merged into inseparable or interdependent parts of a unitary whole” (17 U.S.C. § 101).
There are several other examples that suggest the ability of AI systems to produce ungraspable amounts of content, most likely too great to ever be consumed or used by human beings. Clearly, one could ask whether copyright is simply meant to foster larger and larger quantities of protected works or, rather, to find the optimal number of protected works. For the optimal level of copyright protection, see, e.g., Koboldt 1995 (“[…] the intensity of copyright protection should not induce the production of the maximum number of works […]”); see also Lunney 2014.
However, moral rights can also be found in this field. Yet, especially in the case of performers’ rights, one could argue that a performance is also something very individual which is connected to the performer’s personality.
We thank an anonymous reviewer for contributing this idea during the review process.
Cf. Rushton 1998: “If the point of moral rights is to preserve those aspects of a culture that bind a nation together, then a case could be made for a perpetual moral right.”
References
Abbott R (2016) I Think, Therefore I invent: creative computers and the future of patent law. Boston College L Rev 57:1079–1126
Bently L, Biron L (2014) Discontinuities between legal conceptions of authorship and social practices: what, if anything, is to be done. In: van Eechoud M (ed) The work of authorship: creativity that counts. Amsterdam University Press/OAPEN library, Amsterdam, pp 237–276
Biron L (2014) Creative Work and communicative norms. In: van Eechoud M (ed) The work of authorship: creativity that counts. Amsterdam University Press/OAPEN library, Amsterdam, pp 20–44
BMJ Author Hub (2018) BMJ policy on authorship. https://authors.bmj.com/policies/bmj-policy-on-authorship/. Accessed 4 Nov 2019
Bridy A (2012) Coding creativity: copyright and the artificially intelligent author. Stan Tech L Rev 5:1–28
Bridy A (2016) The evolution of authorship: work made by code. Colum J L Arts 39:395–401
Butler TL (1982) Can a computer be an author—copyright aspects of artificial intelligence. Hastings Comm Ent L J 4:707–747
Čerka P, Grigienė J, Sirbikytė G (2017) Is it possible to grant legal personality to artificial intelligence software systems? Comp L Secur Rev 33:685–699
Chisolm TE (2018) In Lieu of Moral Rights for IP-Wronged music vocalists: personhood theory, moral rights, and the WPPT revisited. St John’s L Rev 92:453–507
Ciolino DS (1995) Moral Rights and real obligations: a property-law framework for the protection of authors’ moral rights. Tulane L Rev 69:935–995
Clifford RD (1997) Intellectual property in the era of the creative computer program. Tulane L Rev 71:1675–1703
Collingwood L (2017) Privacy implications and liability issues of autonomous vehicles. Info Comm Tech L 26:32–45
Commission of the European Communities (2004) Commission Staff Working Paper on the review of the EC legal framework in the field of copyright and related rights. https://data.consilium.europa.eu/doc/document/ST-11634-2004-INIT/en/pdf. Accessed 14 Jan 2020
Committee on Legal Affairs (2015) Report with recommendations to the Commission on Civil Law Rules on Robotics 2015/2103(INL), P8_TA(2017)0051. https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.pdf. Accessed 14 Jan 2020
Council of the European Communities (1993) Council Directive 93/83/EEC of 27 September 1993 on the coordination of certain rules concerning copyright and rights related to copyright applicable to satellite broadcasting and cable retransmission. O.J. L 1993/248, 15–21
Davies CR (2011) An evolutionary step in intellectual property rights—artificial intelligence and intellectual property. Comp L Secur Rev 27:601–619
Denicola RC (2016) Ex Machina: copyright protection for computer-generated works. Rutgers U L Rev 69:251–287
Durham AR (2002) The random muse: authorship and indeterminacy. Wm Mary L Rev 44(2):569–642
European Commission (2018a) Communication from the Commission to the European Parliament, The European Council, The Council, The European Economic and Social Committee and the Committee of the Regions, Artificial Intelligence for Europe. COM(2018) 237 final
European Commission (2018b) Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions, Coordinated Plan on Artificial Intelligence. COM(2018) 795 final
European Court of Justice (2008) Infopaq Int’l A/S v. Danske Dagblades Forening C-5/08
European Court of Justice (2011a) Painer v. Standard VerlagsGmbH C-145/10
European Court of Justice (2011b) Football Association Premier League Ltd and Others v QC Leisure C-403/08.
European Court of Justice (2012) Football Dataco Ltd v. Yahoo! UK Ltd C-604/10
European Court of Justice (2013) Deckmyn v. Vandersteen C-201/13
European Parliament and the Council (2001) Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society. O.J. L 2001/167, 10–19
European Parliament and the Council (2009) Directive 2009/24/EC of the European Parliament and of the Council of 23 April 2009 on the legal protection of computer programs. O.J. L 2009/111, 16–22
European Parliament and the Council (2012) Directive 2012/28/EU of the European Parliament and of the Council of 25 October 2012 on certain permitted uses of orphan works. O.J. L 2012/299, 5–12
European Parliament and the Council (2019) Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC. O.J. L 2019/790, 92–125
European Parliament and Council of the European Union (1996) Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases. O.J. L 1996/77, 20–28
European Parliament and Council of the European Union (2006) Directive 2006/116/EC of the European Parliament and of the Council of 12 December 2006 on the term of protection of copyright and certain related rights. O.J. L 2006/372, 12–18
Galajdová D (2018) Deadlock in protection of software developed by AI. In: Schweighofer E, Kummer F, Saarenpää A, Schafer B (eds) Data protection/legal Tech. Editions Weblaw, Bern, pp 601–608
Ginsburg JC (2004) The right to claim authorship in U.S. copyright and trademark law. Hous L Rev 41:263–308
Ginsburg JC (2018) People not machines: authorship and what it means in the Berne convention. IIC 49:131–135
Glasser D (2001) Copyrights in computer-generated works: whom, if anyone, do we reward? Duke L Tech Rev 1:24
Grimmelmann J (2016a) Copyright for literate robots. Iowa L Rev 101:657–681
Grimmelmann J (2016) There’s no such thing as a computer-authored work—and it’s a good thing, too. Colum J L Arts 39:403–416
Guadamuz A (2016) The monkey selfie: copyright lessons for originality in photographs and internet jurisdiction. Internet Policy Rev 5:1–12
Guadamuz A (2017) Do androids dream of electric copyright? Comparative analysis of originality in artificial intelligence generated work. Intellectual Property Quarterly 2:169–186
Günther J, Münch F, Beck S, Löffler S, Leroux C, Labruto R (2012) Issues of privacy and electronic personhood in robotics. In: IEEE (ed.) The 21st IEEE International Symposium on Robot and Human Interactive Communication
Handig C (2009) The Copyright Term “Work”—European harmonisation at an unknown level. IIC 40:665–685
Hansmann H, Santilli M (1997) Authors’ and artists‘ moral rights: a comparative legal and economic analysis. J Legal Stud 26:95–143
Holder C, Khurana V, Hook J, Bacon G (2016) Robotics and law: key legal and regulatory implications of the robotics age (part II of II). Comp L Secur Rev 32:557–576
Hristov K (2017) Artificial Intelligence and the Copyright Dilemma. IDEA 57:431–454
Ihalainen J (2018) Computer creativity: artificial intelligence and copyright. J Intellectual Property L Practice 13:724–728
Kelly S (2019) A philosopher argues that an AI can’t be an artist. MIT Tech Rev, 21 February 2019. https://www.technologyreview.com/s/612913/a-philosopher-argues-that-an-ai-can-never-be-an-artist/. Accessed 14 Jan 2020
Knight W (2017) The dark secret at the heart of AI, MIT Technology Review, 11 April 2017, https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/. Accessed 14 Jan 2020
Koboldt C (1995) Intellectual property and optimal copyright protection. J Cultural Econ 19:131–155
Komuves D, Niebla Zatarain J, Schafer B, Diver L (2015) Monkeying around with copyright—animals, AIs and authorship in law. CREATe Working Paper 2015/02. https://www.create.ac.uk/publications/monkeying-around-with-copyright-animals-ais-and-authorship-in-law/. Accessed 14 Jan 2020
Kuner C, Cate FH, Lynskey O, Millard C, Ni Loideain N, Svantesson DJB (2018) Expanding the artificial intelligence-data protection debate. Int Data Privacy L 8:289–292
Lambert P (2017) Computer-generated works and copyright: selfies, traps, robots, AI and machine learning. EIPR 39:12–20
Lunney GS (2014) Copyright’s Mercantilist Turn. Fla St U L Rev 42:95–150
McCormack J, Gifford T, Hutchings P (2019) Autonomy, authenticity, authorship and intention in computer generated art. In: Ekárt A, Liapis A, Luz Castro Pena M (eds) Computational intelligence in music, sound, art and design. Springer, Berlin, pp 35–50
McCutheon J (2013) The vanishing author in computer-generated works: a critical analysis of recent Australian case law. Melb U L Rev 36:915–969
Miernicki M, Ng I (2019) Machines, attribution and integrity: artificial intelligence and moral rights. Jusletter IT, 21 February 2019. https://jusletter-it.weblaw.ch/en/issues/2019/IRIS/machines_-attributio_a400b1d060.html__ONCE&login=false
Niebla Zatarain JM (2018) A similarity assessment in copyright works: the insertion of intelligent technology to provide certainty to rights holders and the public sector. Eur J L Tech 9:1–28
Ory S, Sorge C (2019) Schöpfung durch Künstliche Intelligenz? NJW 72:710–713
Perry M, Margoni T (2010) From music tracks to Google maps: who owns computer-generated works? Comp L Secur Rev 26:621–629
Pettenati LA (2000) Moral rights of artists in an international marketplace. Pace Int L Rev 12:425–450
Ricketson S (1991) People or machines? The Berne convention and the changing concept of authorship. Colum-VLA J L & Arts 16:21–22
Ricketson S, Ginsburg J (2006) International copyright and neighbouring rights, 2nd edn, vol I. Oxford Univ. Press, Oxford
Rigamonti CP (2006) Deconstructing moral rights. Harv Int L J 47:353–412
Rigamonti CP (2007) The conceptual transformation of moral rights. Am J Comp L 55:67–122
Rosati E (2015) Just a laughing matter? Why the decision in Deckmyn is broader than parody. Common Market L Rev 52:511–529
Rosenthal Kwall R (2010) The soul of creativity. Stanford University Press, Stanford
Rushton M (1998) The moral rights of artists: Droit Moral ou Droit Pécuniaire? J Cultural Econ 22:15–32
Samuelson P (1986) Allocating ownership rights in computer-generated works. U Pittsburgh L Rev 47:1185–1228
Schafer B, Komuves D, Niebla Zatarain JM, Diver L (2015) A fourth law of robotics? Copyright and the law and ethics of machine co-production. Artif Intell Law 23:217–240
Schére E (2018) Where is the morality? Moral rights in international intellectual property and trade law. Fordham Int L J 41:773–784
Schmitt S, Hudson JJ et al. (2018) Kickstarting deep reinforcement learning. https://arxiv.org/pdf/1803.03835.pdf. Accessed 14 Jan 2020.
Schönberger D (2018) Deep copyright: up- and downstream questions related to artificial intelligence (AI) and machine learning (ML). Intellectual Property J 24:35–58
Sirvinskaite I (2010) Toward copyright “Europeanification”: European Union moral rights. J Int’l Media Ent L 3:263–288
Solaiman SM (2017) Legal personality of robots, corporations, idols and chimpanzees: a quest for legitimacy. Artif Intell Law 25:155–179
Stewart D (2014) Do androids dream of electric free speech? Visions of the future of copyright, privacy and the first amendment in science fiction. Comm L Pol’y 19:433–461
Surden H (2014) Machine learning and law. Washington L Rev 89:87–115
Tang X (2012) The artist as brand: toward a trademark conception of moral rights. Yale L J 122:218–257
Teubner G (2018) Digitale Rechtssubjekte? Zum privatrechtlichen Status autonomer Softwareagenten. AcP 218:155–205
U.K. House of Lords (1988) Lords Sitting of 25 February 1988 HL Deb vol 493 col 1305. https://api.parliament.uk/historic-hansard/lords/1988/feb/25/copyright-designs-and-patents-bill-hl#column_1305. Accessed 14 Jan 2020.
United Kingdom (1988) Copyright, designs and patents Act 1988, c. 48. https://www.legislation.gov.uk/ukpga/1988/48/contents. Accessed 14 Jan 2020
U.S. Copyright Office (2017) Compendium of U.S. Copyright Office Practices, 3rd edn. https://www.copyright.gov/comp3/docs/compendium.pdf. Accessed 14 Jan 2020
U.S. Copyright Office (2019) Authors, attribution, and integrity: examining moral rights in the United States. https://www.copyright.gov/policy/moralrights/full-report.pdf. Accessed 14 Jan 2020
U.S. Court of Appeals 2nd Circuit (1995) Carter v. Helmsley-Spear Inc., 71 F.3d 77.
U.S. Court of Appeals 1st Circuit (2010) Mass. Museum of Contemporary Art Found., Inc. v. Buchel 593 F.3d 38
U.S. District Court Northern District of California (2016) Naruto v. Slater Case No. 15-cv-04324-WHO, U.S. Dist. LEXIS 11041
U.S. Supreme Court (1991) Feist Publ’n, Inc. v. Rural Telephone Service Comp., Inc. 499 U.S. 340
Vertinsky L, Rice TM (2002) Thinking about thinking machines: implications of machine inventors for patent law. BUJ Sci Tech L 8:574–613
Victor D (2016) Microsoft created a Twitter bot to learn from users. It quickly became a racist jerk. The New York Times, March 24, 2016. https://www.nytimes.com/2016/03/25/technology/microsoft-created-a-twitter-bot-to-learn-from-users-it-quickly-became-a-racist-jerk.html. Accessed 15 Jan 2020
Von Lewinski S, Walter M (2010) Rights of authors. In: Von Lewinski S, Walter M (eds) European copyright law. Oxford University Press, Oxford
Walter MM (2008) Österreichisches Urheberrecht, vol I. Medien und Recht, Vienna
WIPO (1979), Berne convention for the protection of literary and artistic works, September 9, 1886, revised at Paris on July 24, 1971 and amended in 1979, S. Treaty Doc. No. 99–27, (1986). https://wipolex.wipo.int/en/text/283698. Accessed 14 Jan 2020
WIPO (1991) Committee of experts on a possible protocol to the Berne convention for the protection of literary and artistic works, questions concerning a possible protocol to the Berne convention part I, Doc. No. BCP/CE/I/2.
WTO (1994) Agreement on trade-related aspects of intellectual property rights, Apr. 15, 1994, Marrakesh Agreement Establishing the World Trade Organization, Annex 1C, 1869 U.N.T.S. 299, 33 I.L.M. 1197.
Wu AJ (1997) From video games to artificial intelligence: assigning copyright ownership to works generated by increasingly sophisticated computer programs. AIPLA QJ 25:131
Yanisky-Ravid S (2017) Generating Rembrandt: artificial intelligence, copyright, and accountability in the 3A era—the human-like authors are already here—a new model. Mich St L Rev 659–726
Yanisky-Ravid S, Velez- Hernandez LA (2018) Copyrightability of artworks produced by creative robots and originality: the formality-objective model. Minn J L Sci Tech 19:1–53
Yu R (2017) The machine author: What level of copyright protection is appropriate for fully independent computer-generated works? U Pa L Rev 165:1245–1270
Acknowledgements
Open access funding provided by University of Vienna.
Funding
Martin Miernicki: Not applicable; Irene Ng (Huang Ying): This research is supported by the National Research Foundation, Singapore under its Emerging Areas Research Projects (EARP) Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.
Author information
Contributions
This paper was written in collaboration of the two authors indicated above.
Ethics declarations
Conflict of interest
The author(s) declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Miernicki, M., Ng (Huang Ying), I. Artificial intelligence and moral rights. AI & Soc 36, 319–329 (2021). https://doi.org/10.1007/s00146-020-01027-6