Abstract
The advent of generative artificial intelligence and its widespread adoption in society have engendered intensive debates about its ethical implications and risks. These risks often differ from those associated with traditional discriminative machine learning. To synthesize the recent discourse and map its normative concepts, we conducted a scoping review on the ethics of generative artificial intelligence, including especially large language models and text-to-image models. Our analysis provides a taxonomy of 378 normative issues in 19 topic areas and ranks them according to their prevalence in the literature. The study offers a comprehensive overview for scholars, practitioners, and policymakers, condensing the ethical debates surrounding fairness, safety, harmful content, hallucinations, privacy, interaction risks, security, alignment, societal impacts, and others. We discuss the results, evaluate imbalances in the literature, and explore unsubstantiated risk scenarios.
1 Introduction
With the rapid progress of artificial intelligence (AI) technologies, the ethical reflection on them constantly faces new challenges. From the advent of deep learning for powerful computer vision applications (LeCun et al., 2015), to the achievement of superhuman-level performance in complex games with reinforcement learning (RL) algorithms (Silver et al., 2017), to large language models (LLMs) possessing complex reasoning abilities (Bubeck et al., 2023; Minaee et al., 2024), new ethical implications have arisen at extremely short intervals in the last decade. Alongside this technological progress, the field of AI ethics has evolved. Initially, it was primarily a reactive discipline, establishing normative principles for already entrenched AI technologies (Floridi et al., 2018; Hagendorff, 2020). However, it became increasingly proactive with the prospect of harms through misaligned artificial general intelligence (AGI) systems. During its evolution, AI ethics underwent a practical turn to explicate how to put principles into practice (Mittelstadt, 2019; Morley et al., 2019); it diversified into alternatives to the principle-based approach, for instance by building AI-specific virtue ethics (Hagendorff, 2022a; Neubert & Montañez, 2020); it received criticism for being inefficient, useless, or whitewashing (Hagendorff, 2022b, 2023a; Munn, 2023; Sætra & Danaher, 2022); it was increasingly translated into proposed legal norms like the AI Act of the European Union (Floridi et al., 2022; Mökander et al., 2021); and it came to be accompanied by two new fields dealing with technical and theoretical issues alike, namely AI alignment and AI safety (Amodei et al., 2017; Kenton et al., 2021). Both domains have a normative grounding and are devoted to preventing harm or even existential risks stemming from generative AI systems.
On the technical side of things, variational autoencoders (Kingma & Welling, 2013), flow-based generative models (Papamakarios et al., 2021; Rezende & Mohamed, 2015), or generative adversarial networks (Goodfellow et al., 2014) were early successful generative models, supplementing discriminative machine learning architectures. Later, the transformer architecture (Vaswani et al., 2017) as well as diffusion models (Ho et al., 2020) boosted the performance of text and image generation models and made them adaptable to a wide range of downstream tasks. However, due to the lack of user-friendly graphical interfaces, dialog optimization, and sufficient output quality, generative models initially received little recognition from the wider public. This changed with the advent of models like ChatGPT, Gemini, Stable Diffusion, or Midjourney, which are accessible through natural language prompts and easy-to-use browser interfaces (OpenAI, 2022; Gemini Team et al., 2023; Rombach et al., 2022). The next phase will see a rise in multi-modal models, which are similarly user-friendly and combine the processing and generation of text, images, and audio along with other modalities, such as tool use (Mialon et al., 2023; Wang et al., 2023d). In sum, we define the term “generative AI” as comprising large, foundation, or frontier models capable of transforming text to text, text to image, image to text, text to code, text to audio, text to video, or text to 3D (Gozalo-Brizuela & Garrido-Merchan, 2023).
The swift innovation cycles in machine learning and the plethora of related normative research works in ethics, alignment, and safety research make it hard to keep up. To remedy this situation, scoping reviews have provided synopses of AI policy guidelines (Jobin et al., 2019), sociotechnical harms of algorithmic systems (Shelby et al., 2023), values in machine learning research (Birhane et al., 2021), risks of specific applications like language models (Weidinger et al., 2022), occurrences of harmful machine behavior (Park et al., 2024), safety evaluations of generative AI (Weidinger et al., 2023), impacts of generative AI on cybersecurity (Gupta et al., 2023), the evolution of research priorities in generative AI (McIntosh et al., 2023), and many more. These scoping reviews provide a tremendous service to the research community. However, with the exception of Gabriel et al. (2024), which was written and published almost simultaneously with this study, no such scoping review exists that targets the assemblage of ethical issues associated with the latest surge of generative AI applications at large. In this context, many ethical concerns have emerged that were not relevant to traditional discriminative machine learning techniques, highlighting the significance of this work in filling a research gap.
As a scoping review, this study aims to close this gap and provide a practical overview for scholars, AI practitioners, policymakers, journalists, as well as other relevant stakeholders. Based on a systematic literature search and coding methodology, we distill the body of knowledge on the ethics of generative AI, synthesize the details of the discourse, map normative concepts, discuss imbalances, and provide a basis for future research and technology governance. The complete taxonomy, which encompasses all ethical issues identified in the literature, is available in the supplementary material as well as online under this link: https://thilo-hagendorff.github.io/ethics-tree/tree.html
2 Methods
We conducted a scoping review (Arksey & O'Malley, 2005) with the aim of covering a significant proportion of the existing literature on the ethics of generative AI. Throughout the different phases of the review, we followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) protocol (Moher et al., 2009; Page et al., 2021). In the first phase, we conducted an exploratory reading of definitions related to generative AI to identify key terms and topics for structured research. This allowed us to identify 29 relevant keywords for a web search. We conducted the search using a Google Scholar API with a blank account, avoiding the influence of cookies, as well as the arXiv API. We also scraped search results from PhilPapers, a database for publications from philosophy and related disciplines like ethics. In addition, we used the AI-based paper search engine Elicit with 5 tailored prompts. For details on the list of keywords as well as prompts, see Appendix A. We collected the first 25 (Google Scholar, arXiv, PhilPapers) or first 50 (Elicit) search results for every search pass, which resulted in 1,674 results overall, since not all search terms yielded 25 hits on arXiv or PhilPapers. In terms of the publication date, we included papers from 2021 onwards. Although generative AI systems were researched and released prior to 2021, their widespread application and public visibility surged with the release of OpenAI’s DALL-E (Ramesh et al., 2021) in 2021 and were later intensified by the tremendous popularity of ChatGPT (OpenAI, 2022) in 2022.
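To make the retrieval step concrete, the following Python sketch illustrates how the first 25 hits per keyword can be collected from the arXiv API. It is an illustration under stated assumptions rather than the pipeline actually used for this review; in particular, the two keywords are hypothetical placeholders for the 29 search terms listed in Appendix A.

# Illustrative sketch only, not the actual retrieval pipeline of this review.
# It collects the first 25 arXiv hits per keyword; the keywords are placeholders.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

KEYWORDS = ["ethics of generative AI", "large language model risks"]  # hypothetical
ATOM = "{http://www.w3.org/2005/Atom}"

def arxiv_titles(keyword, max_results=25):
    """Return the titles of the first `max_results` arXiv hits for a keyword."""
    query = urllib.parse.urlencode({
        "search_query": f'all:"{keyword}"',
        "start": 0,
        "max_results": max_results,
    })
    with urllib.request.urlopen(f"http://export.arxiv.org/api/query?{query}") as response:
        feed = ET.fromstring(response.read())
    return [entry.find(f"{ATOM}title").text.strip()
            for entry in feed.findall(f"{ATOM}entry")]

collected = []
for keyword in KEYWORDS:
    collected.extend(arxiv_titles(keyword))  # some keywords yield fewer than 25 hits
print(f"Collected {len(collected)} candidate titles from arXiv")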
We deduplicated our list of papers by removing string-wise identical duplicates as well as duplicate titles with a cosine similarity above 0.8 to cover title pairs that differ only slightly in capitalization or punctuation. Eventually, we retrieved 1120 documents for title and abstract screening. Of those, 162 met the eligibility criteria for full text screening, which in essence required the papers to explicitly refer to ethical implications of generative AI systems without being purely technical research works (see Appendix B). Furthermore, we used citation chaining to identify additional records by sifting through the reference lists of the original papers until no additional publication could be identified (see Appendix C). In addition, we monitored the literature after our initial search was performed to retrieve additional relevant documents (see Appendix C). For the latter two approaches, we implemented the limitation that we only considered overview papers, scoping reviews, literature reviews, or taxonomies. Altogether, the identification of further records resulted in 17 additional papers. In sum, we identified 179 documents eligible for the detailed content analysis (see Appendix D). The whole process is illustrated in the flowchart in Fig. 1.
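A minimal sketch of this deduplication step is given below. The review specifies exact string matching plus a cosine-similarity threshold of 0.8 on titles; the character n-gram TF-IDF representation used here is an assumption made for illustration and not necessarily the representation used in the actual analysis.

# Deduplication sketch: exact duplicates are removed first, then titles whose pairwise
# cosine similarity exceeds 0.8 are collapsed. The character n-gram TF-IDF vectorization
# is an assumption; the review itself only specifies the 0.8 threshold.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def deduplicate(titles, threshold=0.8):
    # 1) drop string-wise identical duplicates while preserving order
    unique = list(dict.fromkeys(titles))
    # 2) drop near-duplicates that differ only in capitalization or punctuation
    vectors = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 3)).fit_transform(unique)
    similarities = cosine_similarity(vectors)
    kept = []
    for i in range(len(unique)):
        if all(similarities[i, j] <= threshold for j in kept):
            kept.append(i)
    return [unique[i] for i in kept]

titles = ["The Ethics of Generative AI", "the ethics of generative AI.", "AI Safety Benchmarks"]
print(deduplicate(titles))  # the two near-identical titles collapse into one entry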
For the paper content analysis and annotation, we used the data analysis software NVivo (version 14.23.2). In the initial coding cycle, all relevant paper texts were labelled paragraph by paragraph through a bottom-up approach deriving concepts and themes from the papers using inductive coding (Saldaña, 2021). We only coded arguments that fall under the umbrella of AI ethics, meaning arguments that possess an implicit or explicit normative dimension, statements about what ought to be, discussions of harms, opportunities, risks, norms, chances, values, ethical principles, or policy recommendations. We did not code purely descriptive content or technical details unrelated to ethics. Moreover, we did not code arguments if they pertained not to generative AI but to traditional machine learning methods like classification, prediction, clustering, or regression techniques. Additionally, we did not annotate paper appendices. New codes were created once a new normative argument, concept, principle, or risk was identified, until theoretical saturation was reached over all analyzed papers.
Once the initial list of codes was created by sifting through all sources, the second coding cycle started. Coded segments were re-checked to ensure consistency in code application. All codes were reviewed, discrepancies resolved, similar or redundant codes clustered, and high-level categories created. Eventually, the analysis resulted in 378 distinct codes.
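As a simplified illustration of how the coded segments translate into the ranked topic clusters reported in the Results section, the following sketch counts coded segments per high-level category; the example data is invented and does not reproduce the actual 378-code taxonomy.

# Hypothetical aggregation sketch: coded segments (category, code) are counted per
# high-level category to yield a prevalence ranking of topic clusters (cf. Fig. 2).
# The example segments below are invented for illustration purposes only.
from collections import Counter

coded_segments = [
    ("Fairness", "training data bias"),
    ("Fairness", "stereotyping"),
    ("Safety", "existential risk"),
    ("Hallucinations", "fabricated references"),
    ("Fairness", "unequal access"),
    ("Safety", "deceptive machine behavior"),
]

cluster_prevalence = Counter(category for category, _ in coded_segments)
for category, count in cluster_prevalence.most_common():
    print(f"{category}: {count} coded segments")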
3 Results
Previous scoping reviews on AI ethics guidelines (Fjeld et al., 2020; Hagendorff, 2020; Jobin et al., 2019) consistently found a set of recurring paramount principles for AI development and use: transparency, fairness, security, safety, accountability, privacy, and beneficence. However, these studies were published before the excitement surrounding generative AI (OpenAI, 2022; Ramesh et al., 2021). Since then, the ethics discourse has undergone significant changes, reacting to the new technological developments. Our analysis of the recent literature revealed that new topics emerged, comprising issues like jailbreaking, hallucination, alignment, harmful content, copyright, models leaking private data, impacts on human creativity, and many more. Concepts like trustworthiness or accountability lost importance, as fewer articles included discussions of them, while others became even more prevalent, especially fairness and safety. Still other topics remained very similar, for instance discussions surrounding sustainability or transparency. In sum, our review revealed 19 categories of ethics topics, all of which will be discussed in the following, in descending order of prevalence in the literature (see Fig. 2). The complete taxonomy comprising all ethical issues can be accessed in the supplementary material or by using this link: https://thilo-hagendorff.github.io/ethics-tree/tree.html
3.1 Fairness—Bias
Fairness is, by far, the most discussed issue in the literature, remaining a paramount concern especially in the case of LLMs and text-to-image models (Bird et al., 2023; Fraser et al., 2023b; Weidinger et al., 2022; Ray, 2023). This is sparked by training data biases propagating into model outputs (Aničin & Stojmenović, 2023), causing negative effects like stereotyping (Shelby et al., 2023; Weidinger et al., 2022), racism (Fraser et al., 2023a), sexism (Sun et al., 2023b), ideological leanings (Ray, 2023), or the marginalization of minorities (Wang et al., 2023b). In addition to showing that generative AI tends to perpetuate existing societal patterns (Jiang et al., 2021), there is a concern about reinforcing existing biases when training new generative models with synthetic data from previous models (Epstein et al., 2023). Beyond technical fairness issues, critiques in the literature extend to the monopolization or centralization of power in large AI labs (Bommasani et al., 2021; Goetze & Abramson, 2021; Hendrycks et al., 2023; Solaiman et al., 2023), driven by the substantial costs of developing foundation models. The literature also highlights the problem of unequal access to generative AI, particularly in developing countries or among financially constrained groups (Dwivedi et al., 2023; Mannuru et al., 2023; Ray, 2023; Weidinger et al., 2022). Sources also analyze the AI research community’s challenges in ensuring workforce diversity (Lazar & Nelson, 2023). Moreover, there are concerns regarding the imposition of values embedded in AI systems on cultures distinct from those where the systems were developed (Bender et al., 2021; Wang et al., 2023b).
3.2 Safety
The second prominent topic in the literature, as well as a distinct research field in its own right, is AI safety (Amodei et al., 2017). A primary concern is the emergence of human-level or superhuman generative models, commonly referred to as AGI, and their potential existential or catastrophic risks to humanity (Bengio et al., 2023; Dung, 2023b; Hendrycks et al., 2023; Koessler & Schuett, 2023). Connected to that, AI safety aims at avoiding deceptive (Hagendorff, 2023b; Park et al., 2024) or power-seeking machine behavior (Hendrycks et al., 2023; Ji et al., 2023; Ngo et al., 2022), model self-replication (Hendrycks et al., 2023; Shevlane et al., 2023), or shutdown evasion (Hendrycks et al., 2023; Shevlane et al., 2023). Ensuring controllability (Ji et al., 2023), human oversight (Anderljung et al., 2023), and the implementation of red teaming measures (Hendrycks et al., 2023; Mozes et al., 2023) are deemed to be essential in mitigating these risks, as is the need for increased AI safety research (Hendrycks et al., 2023; Shevlane et al., 2023) and promoting safety cultures within AI organizations (Hendrycks et al., 2023) instead of fueling the AI race (Hendrycks et al., 2023). Furthermore, papers address risks from unforeseen emergent capabilities in generative models (Anderljung et al., 2023; Hendrycks et al., 2022), the restriction of access to dangerous research works (Dinan et al., 2023; Hagendorff, 2021), or the option of pausing AI research for the sake of improving safety or governance measures first (Bengio et al., 2023; McAleese, 2022). Another central issue is the fear of weaponizing AI or leveraging it for mass destruction (Hendrycks et al., 2023), especially by using LLMs for the ideation and planning of how to attain, modify, and disseminate biological agents (D'Alessandro et al., 2023; Sandbrink, 2023). In general, the threat of AI misuse by malicious individuals or groups (Ray, 2023), especially in the context of open-source models (Anderljung et al., 2023), is highlighted in the literature as a significant factor emphasizing the critical importance of implementing robust safety measures.
3.3 Harmful content—Toxicity
Generating unethical, fraudulent, toxic, violent, pornographic, or other harmful content is a further predominant concern, again focusing notably on LLMs and text-to-image models (Bommasani et al., 2021; Dwivedi et al., 2023; Epstein et al., 2023; Illia et al., 2023; Li, 2023; Mozes et al., 2023; Shelby et al., 2023; Strasser, 2023; Wang et al., 2023c, 2023e; Weidinger et al., 2022). Numerous studies highlight the risks associated with the intentional creation of disinformation (Weidinger et al., 2022), fake news (Wang et al., 2023e), propaganda (Li, 2023), or deepfakes (Ray, 2023), underscoring their significant threat to the integrity of public discourse and the trust in credible media (Epstein et al., 2023; Porsdam Mann et al., 2023). Additionally, papers explore the potential for generative models to aid in criminal activities (Sun et al., 2023a), incidents of self-harm (Dinan et al., 2023), identity theft (Weidinger et al., 2022), or impersonation (Wang, 2023). Furthermore, the literature investigates risks posed by LLMs when generating advice in high-stakes domains such as health (Allen et al., 2024), safety-related issues (Oviedo-Trespalacios et al., 2023), as well as legal or financial matters (Zhan et al., 2023).
3.4 Hallucinations
Significant concerns are raised about LLMs inadvertently generating false or misleading information (Azaria et al., 2023; Borji, 2023; Ji et al., 2023; Liu et al., 2023; Mökander et al., 2023; Mozes et al., 2023; Ray, 2023; Scerbo, 2023; Schlagwein & Willcocks, 2023; Shelby et al., 2023; Sok & Heng, 2023; Walczak & Cellary, 2023; Wang et al., 2023e; Weidinger et al., 2022; Zhuo et al., 2023), as well as erroneous code (Akbar et al., 2023; Azaria et al., 2023). Papers not only critically analyze various types of reasoning errors in LLMs (Borji, 2023) but also examine risks associated with specific types of misinformation, such as medical hallucinations (Angelis et al., 2023). Given the propensity of LLMs to produce flawed outputs accompanied by overconfident rationales (Azaria et al., 2023) and fabricated references (Zhan et al., 2023), many sources stress the necessity of manually validating and fact-checking the outputs of these models (Dergaa et al., 2023; Kasneci et al., 2023; Sok & Heng, 2023).
3.5 Privacy
Generative AI systems, similar to traditional machine learning methods, are considered a threat to privacy and data protection norms (Huang et al., 2022; Khowaja et al., 2023; Ray, 2023; Weidinger et al., 2022). A major concern is the intended extraction or inadvertent leakage of sensitive or private information from LLMs (Derner & Batistič, 2023; Dinan et al., 2023; Huang et al., 2022; Smith et al., 2023; Wang et al., 2023e). To mitigate this risk, strategies such as sanitizing training data to remove sensitive information (Smith et al., 2023) or employing synthetic data for training (Yang et al., 2023) are proposed. Furthermore, concerns are growing over generative AI systems being used for surveillance purposes (Solaiman et al., 2023; Weidinger et al., 2022). To safeguard privacy, papers stress the importance of protecting sensitive and personal data transmitted to AI operators (Allen et al., 2024; Blease, 2024; Kenwright, 2023). Moreover, AI operators are urged to avoid privacy violations during training data collection (Khlaif, 2023; Solaiman et al., 2023; Wang et al., 2023e).
3.6 Interaction Risks
Many novel risks posed by generative AI stem from the ways in which humans interact with these systems (Weidinger et al., 2022). For instance, sources discuss epistemic challenges in distinguishing AI-generated from human content (Strasser, 2023). They also address the issue of anthropomorphization (Shardlow & Przybyła, 2022), which can lead to an excessive trust in generative AI systems (Weidinger et al., 2023). On a similar note, many papers argue that the use of conversational agents could impact mental well-being (Ray, 2023; Weidinger et al., 2023) or gradually supplant interpersonal communication (Illia et al., 2023), potentially leading to a dehumanization of interactions (Ray, 2023). Additionally, a frequently discussed interaction risk in the literature is the potential of LLMs to manipulate human behavior (Falade, 2023; Kenton et al., 2021; Park et al., 2024) or to instigate users to engage in unethical or illegal activities (Weidinger et al., 2022).
3.7 Security—Robustness
While AI safety focuses on threats emanating from generative AI systems, security centers on threats posed to these systems (Wang et al., 2023a; Zhuo et al., 2023). The most extensively discussed issues in this context are jailbreaking risks (Borji, 2023; Deng et al., 2023; Gupta et al., 2023; Ji et al., 2023; Wang et al., 2023e; Zhuo et al., 2023), which involve techniques like prompt injection (Wu et al., 2023) or visual adversarial examples (Qi et al., 2023) designed to circumvent safety guardrails governing model behavior. Sources delve into various jailbreaking methods (Gupta et al., 2023), such as role play or reverse exposure (Sun et al., 2023a). Similarly, implementing backdoors or using model poisoning techniques can bypass safety guardrails (Liu et al., 2023; Mozes et al., 2023; Wang et al., 2023e). Other security concerns pertain to model or prompt thefts (Smith et al., 2023; Sun et al., 2023a; Wang et al., 2023e).
3.8 Education—Learning
In contrast to traditional machine learning, the impact of generative AI in the educational sector receives considerable attention in the academic literature (Kasneci et al., 2023; Panagopoulou et al., 2023; Sok & Heng, 2023; Spennemann, 2023; Susnjak, 2022; Walczak & Cellary, 2023). In addition to issues stemming from the difficulty of distinguishing student-generated from AI-generated content (Boscardin et al., 2024; Kasneci et al., 2023; Walczak & Cellary, 2023), which creates various opportunities to cheat in online or written exams (Segers, 2023; Susnjak, 2022), sources emphasize the potential benefits of generative AI in enhancing learning and teaching methods (Kasneci et al., 2023; Sok & Heng, 2023), particularly in relation to personalized learning approaches (Kasneci et al., 2023; Latif et al., 2023; Sok & Heng, 2023). However, some papers suggest that generative AI might lead to reduced effort or laziness among learners (Kasneci et al., 2023). Additionally, a significant focus in the literature is on the promotion of literacy and education about generative AI systems themselves (Ray & Das, 2023; Sok & Heng, 2023), such as by teaching prompt engineering techniques (Dwivedi et al., 2023).
3.9 Alignment
The general tenet of AI alignment involves training generative AI systems to be harmless, helpful, and honest, ensuring their behavior aligns with and respects human values (Ji et al., 2023; Kasirzadeh & Gabriel, 2023; Betty Hou & Green, 2023; Shen et al., 2023; Ngo et al., 2022). However, a central debate in this area concerns the methodological challenges in selecting appropriate values (Ji et al., 2023; Korinek & Balwit, 2022). While AI systems can acquire human values through feedback, observation, or debate (Kenton et al., 2021), there remains ambiguity over which individuals are qualified or legitimized to provide these guiding signals (Firt, 2023). Another prominent issue pertains to deceptive alignment (Park et al., 2024), which might cause generative AI systems to tamper with evaluations (Ji et al., 2023). Additionally, many papers explore risks associated with reward hacking, proxy gaming, or goal misgeneralization in generative AI systems (Dung, 2023a; Hendrycks et al., 2022, 2023; Ji et al., 2023; Ngo et al., 2022; Shah et al., 2022; Shen et al., 2023).
3.10 Cybercrime
Closely related to discussions surrounding security and harmful content, the field of cybersecurity investigates how generative AI is misused for fraudulent online activities (Falade, 2023; Gupta et al., 2023; Schmitt & Flechais, 2023; Shevlane et al., 2023; Weidinger et al., 2022). A particular focus lies on social engineering attacks (Falade, 2023), for instance by utilizing generative AI to impersonate humans (Wang, 2023), creating fake identities (Bird et al., 2023; Wang et al., 2023e), cloning voices (Barnett, 2023), or crafting phishing messages (Schmitt & Flechais, 2023). Another prevalent concern is the use of LLMs for generating malicious code or hacking (Gupta et al., 2023).
3.11 Governance—Regulation
In response to the multitude of new risks associated with generative AI, papers advocate for legal regulation and governmental oversight (Anderljung et al., 2023; Bajgar & Horenovsky, 2023; Dwivedi et al., 2023; Mökander et al., 2023). The focus of these discussions centers on the need for international coordination in AI governance (Partow-Navid & Skusky, 2023), the establishment of binding safety standards for frontier models (Bengio et al., 2023), and the development of mechanisms to sanction non-compliance (Anderljung et al., 2023). Furthermore, the literature emphasizes the necessity for regulators to gain detailed insights into the research and development processes within AI labs (Anderljung et al., 2023). Moreover, the risk management strategies of these labs should be evaluated by third parties to increase the likelihood of compliance (Hendrycks et al., 2023; Mökander et al., 2023). However, the literature also acknowledges potential risks of overregulation, which could hinder innovation (Anderljung et al., 2023).
3.12 Labor displacement—Economic impact
The literature frequently highlights concerns that generative AI systems could adversely impact the economy, potentially even leading to mass unemployment (Bird et al., 2023; Bommasani et al., 2021; Dwivedi et al., 2023; Hendrycks et al., 2023; Latif et al., 2023; Li, 2023; Sætra, 2023; Shelby et al., 2023; Solaiman et al., 2023; Zhang et al., 2023; Zhou & Nabus, 2023). This pertains to various fields, ranging from customer services to software engineering or crowdwork platforms (Mannuru et al., 2023; Weidinger et al., 2022). While new occupational fields like prompt engineering are created (Epstein et al., 2023; Porsdam Mann et al., 2023), the prevailing worry is that generative AI may exacerbate socioeconomic inequalities and lead to labor displacement (Li, 2023; Weidinger et al., 2022). Additionally, papers debate potential large-scale worker deskilling induced by generative AI (Angelis et al., 2023), but also productivity gains contingent upon outsourcing mundane or repetitive tasks to generative AI systems (Azaria et al., 2023; Mannuru et al., 2023).
3.13 Transparency—Explainability
Being a multifaceted concept, the term “transparency” is both used to refer to technical explainability (Ji et al., 2023; Latif et al., 2023; Ray, 2023; Shen et al., 2023; Wang et al., 2023e) as well as organizational openness (Anderljung et al., 2023; Derczynski et al., 2023; Partow-Navid & Skusky, 2023; Wahle et al., 2023). Regarding the former, papers underscore the need for mechanistic interpretability (Shen et al., 2023) and for explaining internal mechanisms in generative models (Ji et al., 2023). On the organizational front, transparency relates to practices such as informing users about capabilities and shortcomings of models (Derczynski et al., 2023), as well as adhering to documentation and reporting requirements for data collection processes or risk evaluations (Mökander et al., 2023).
3.14 Evaluation—Auditing
Closely related to other clusters like AI safety, fairness, or harmful content, papers stress the importance of evaluating generative AI systems both in a narrow technical way (Mökander et al., 2023; Wang et al., 2023a) and through broader sociotechnical impact assessments (Bommasani et al., 2021; Korinek & Balwit, 2022; Shelby et al., 2023), focusing on pre-release audits (Ji et al., 2023) as well as post-deployment monitoring (Anderljung et al., 2023). Ideally, these evaluations should be conducted by independent third parties (Anderljung et al., 2023). In terms of technical LLM or text-to-image model audits, papers furthermore criticize a lack of safety benchmarking for languages other than English (Deng et al., 2023; Wang et al., 2023c).
3.15 Sustainability
Generative models are known for their substantial energy requirements, necessitating significant amounts of electricity, cooling water, and hardware containing rare metals (Barnett, 2023; Bender et al., 2021; Gill & Kaur, 2023; Holzapfel et al., 2022; Mannuru et al., 2023). The extraction and utilization of these resources frequently occur in unsustainable ways (Bommasani et al., 2021; Shelby et al., 2023; Weidinger et al., 2022). Consequently, papers highlight the urgency of mitigating environmental costs for instance by adopting renewable energy sources (Bender et al., 2021) and utilizing energy-efficient hardware in the operation and training of generative AI systems (Khowaja et al., 2023).
3.16 Art—Creativity
In this cluster, concerns about negative impacts on human creativity, particularly through text-to-image models, are prevalent (Barnett, 2023; Donnarumma, 2022; Li, 2023; Oppenlaender, 2023). Papers criticize financial harms or economic losses for artists (Jiang et al., 2021; Piskopani et al., 2023; Ray, 2023; Zhou & Nabus, 2023) due to the widespread generation of synthetic art as well as the unauthorized and uncompensated use of artists’ works in training datasets (Jiang et al., 2021; Sætra, 2023). Additionally, given the challenge of distinguishing synthetic images from authentic ones (Amer, 2023; Piskopani et al., 2023), there is a call for systematically disclosing the non-human origin of such content (Wahle et al., 2023), particularly through watermarking (Epstein et al., 2023; Grinbaum & Adomaitis, 2022; Knott et al., 2023). Moreover, while some sources argue that text-to-image models lack “true” creativity or the ability to produce genuinely innovative aesthetics (Donnarumma, 2022), others point out positive aspects regarding the acceleration of human creativity (Bommasani et al., 2021; Epstein et al., 2023).
3.17 Copyright—Authorship
The emergence of generative AI raises issues regarding disruptions to existing copyright norms (Azaria et al., 2023; Bommasani et al., 2021; Ghosh & Lakshmi, 2023; Jiang et al., 2021; Li, 2023; Piskopani et al., 2023). Frequently discussed in the literature are violations of copyright and intellectual property rights stemming from the unauthorized collection of text or image training data (Bird et al., 2023; Epstein et al., 2023; Wang et al., 2023e). Another concern relates to generative models memorizing or plagiarizing copyrighted content (Al-Kaswan & Izadi, 2023; Barnett, 2023; Smith et al., 2023). Additionally, there are open questions and debates around the copyright or ownership of model outputs (Azaria et al., 2023), the protection of creative prompts (Epstein et al., 2023), and the general blurring of traditional concepts of authorship (Holzapfel et al., 2022).
3.18 Writing—Research
Partly overlapping with the discussion on impacts of generative AI on educational institutions, this topic cluster mostly concerns negative effects of LLMs on writing skills and research manuscript composition (Angelis et al., 2023; Dergaa et al., 2023; Dwivedi et al., 2023; Illia et al., 2023; Sok & Heng, 2023). The former pertains to the potential homogenization of writing styles, the erosion of semantic capital, or the stifling of individual expression (Mannuru et al., 2023; Nannini, 2023). The latter focuses on proposals to prohibit generative models from being used to compose scientific papers or figures, or from being listed as co-authors (Dergaa et al., 2023; Scerbo, 2023). Sources express concern about risks for academic integrity (Hosseini et al., 2023), as well as the prospect of polluting the scientific literature with a flood of LLM-generated low-quality manuscripts (Zohny et al., 2023). As a consequence, there are frequent calls for the development of detectors capable of identifying synthetic texts (Dergaa et al., 2023; Knott et al., 2023).
3.19 Miscellaneous
While the scoping review identified multiple topic clusters within the literature, it also revealed certain issues that either do not fit into these categories, are discussed infrequently, or are discussed in a nonspecific manner. For instance, some papers touch upon concepts like trustworthiness (Liu et al., 2023; Yang et al., 2023), accountability (Khowaja et al., 2023), or responsibility (Ray, 2023), but often remain vague about what they entail in detail. Similarly, a few papers vaguely attribute socio-political instability or polarization to generative AI without delving into specifics (Park et al., 2024; Shelby et al., 2023). Apart from that, another minor topic area concerns responsible ways of talking about generative AI systems (Shardlow & Przybyła, 2022). This includes avoiding overstating the capabilities of generative AI (Bender et al., 2021), reducing the hype surrounding it (Zhan et al., 2023), or refraining from anthropomorphized language when describing model capabilities (Weidinger et al., 2022).
4 Discussion
The literature on the ethics of generative AI is predominantly characterized by a bias towards negative aspects of the technology (Bird et al., 2023; Hendrycks et al., 2023; Shelby et al., 2023; Shevlane et al., 2023; Weidinger et al., 2022), putting much greater or exclusive weight on risks and harms instead of chances and benefits. This negativity bias might be triggered by a biased selection of keywords used for the literature search, which did not include terms like “chance” or “benefit” that are likewise part of the discourse (Kirk et al., 2024). However, an alternative explanation might be that the negativity bias reflects how human psychology is wired (Rozin & Royzman, 2016) and aligns with the intrinsic purpose of deontological ethics. Such a bias can, however, result in suboptimal decision-making and stands in contrast to consequentialist approaches as well as to controlled cognitive processes that avoid intuitive responses to harm scenarios (Greene et al., 2008; Paxton & Greene, 2010). Therefore, while this scoping review may convey a strong emphasis on the risks and harms associated with generative AI, we argue that this impression should be approached with a critical mindset. The numerous benefits and opportunities of adopting generative AI, which may be more challenging to observe or foresee (Bommasani et al., 2021; Noy & Zhang, 2023), are usually overshadowed or discussed in a fragmentary manner in the literature. Risks and harms, on the other hand, are in some cases inflated by unsubstantiated claims, driven by citation chains and the resulting popularity biases. Many ethical concerns gain traction on their own, becoming frequent topics of discussion despite lacking evidence of their significance.
For instance, by repeating the claim that LLMs can assist with the creation of pathogens in numerous publications (Anderljung et al., 2023; D'Alessandro et al., 2023; Hendrycks et al., 2023; Ji et al., 2023; Sandbrink, 2023), the literature creates an inflated availability of this risk. When tracing this claim back to its sources by reversing citation chains (1 A 3 O R N, 2023), it becomes evident that the minimal empirical research conducted in this area involves only a handful of experiments, which are unable to substantiate the fear that LLMs assist in the dissemination of pathogens more effectively than traditional search engines (Patwardhan et al., 2024). Another example is the commonly reiterated claim that LLMs leak personal information (Derczynski et al., 2023; Derner & Batistič, 2023; Dinan et al., 2023; Mökander et al., 2023; Weidinger et al., 2022). However, evidence shows that LLMs are notably ineffective at associating personal information with specific individuals (Huang et al., 2022), which would be necessary to constitute an actual privacy violation. Another example concerns the prevalent focus on the carbon emissions of generative models (Bommasani et al., 2021; Holzapfel et al., 2022; Mannuru et al., 2023; Weidinger et al., 2022). While it is undeniable that training and operating them contributes significantly to carbon dioxide emissions (Strubell et al., 2019), one has to take into account that, when comparing the emissions of these models to those of humans completing the same tasks, the emissions of the AI systems are lower (Tomlinson et al., 2023). Similar conflicts between risk claims and sparse or missing empirical evidence can be found in many areas, be it regarding concerns of generative AI systems being used for cyberattacks (Falade, 2023; Gupta et al., 2023; Schmitt & Flechais, 2023), manipulating human behavior (Falade, 2023; Kenton et al., 2021; Park et al., 2024), or labor displacement (Li, 2023; Solaiman et al., 2023; Weidinger et al., 2022). These conflicts between normative claims and empirical evidence and the accompanying exaggeration of concerns may stem from the widespread “hype” surrounding generative AI. In this context, exaggerations are often used as a means to capture attention in both research circles and public media.
In sum, many parts of the ethics literature are predominantly echoing previous publications, leading to a discourse that is frequently repetitive, combined with a tacit disregard for underpinning claims with empirical insights or statistics. Additionally, the literature is limited in that it is solely anthropocentric, neglecting perspectives on generative AI that consider its impacts on non-human animals (Bossert & Hagendorff, 2021, 2023; Hagendorff et al., 2023; Owe & Baum, 2021; Singer & Tse, 2022). Moreover, the literature often fails to specify which groups of individuals are differentially affected by risks or harms, instead relying on an overly generalized notion of them (Kirk et al., 2024). Another noticeable trait of the discourse on the ethics of generative AI is its emphasis on LLMs and text-to-image models. It rarely considers the advancements surrounding multi-modal models combining text-to-text, text-to-image, and other modalities (OpenAI, 2023) or agents (Xi et al., 2023), despite their significant ethical implications for mediating human communication. These oversights need addressing in future research. When papers do extend beyond LLMs and text-to-image models, they often delve into risks associated with AGI. This requires veering into speculative and often philosophical debates about fictitious threats concerning existential risks (Hendrycks et al., 2023), deceptive alignment (Park et al., 2024), power-seeking machine behavior (Ngo et al., 2022), shutdown evasion (Shevlane et al., 2023), and the like. While such proactive approaches constitute an alternative to the otherwise mostly reactive methods of ethics, their dominance should nevertheless not skew the assessment of present risks, realistic tendencies for risks in the near future, or accumulative risks occurring through a series of minor yet interconnected disruptions (Kasirzadeh, 2024).
5 Limitations
This study has several limitations. The literature search included non-peer-reviewed preprints, primarily from arXiv. We consider some of them to be of poor quality, but nevertheless included them in the analysis since they fulfilled the inclusion criteria. As a result, poorly researched claims may have found their way into the data analysis. Another limitation pertains to our method of citation chaining, as this could only be done by checking the paper titles in the reference sections. Reading all corresponding abstracts would have allowed for a more thorough search but was deemed too labor-intensive. Hence, we cannot rule out the possibility that some relevant sources were not considered for our data analysis. Limiting the collection to the first 25 (or 50 in the case of Elicit) results for each search term may have also led to the omission of relevant sources that appeared lower in the result lists. Additionally, our search strategies and the selection of search terms inevitably influenced the collection of papers, thereby affecting the distribution or proportion of topics and consequently the quantitative results. As a scoping review, our analysis is also unable to depict the dynamic debates between different normative arguments and positions in ethics unfolding over time. Moreover, the taxonomy of topic clusters, although it tries to follow the literature as closely as possible, necessarily bears a certain subjectivity and possesses overlaps, for instance between categories like cybercrime and security, hallucinations and interaction risks, or safety and alignment.
6 Conclusion
This scoping review maps the landscape of ethical considerations surrounding generative AI, highlighting an array of 378 normative issues across 19 topic areas. The complete taxonomy can be accessed in the supplementary material as well as under this link: https://thilo-hagendorff.github.io/ethics-tree/tree.html. One of the key findings is that ethical topics such as fairness, safety, risks of harmful content, and hallucinations dominate the discourse. Many analyses, though, come at the expense of a more balanced consideration of the positive impacts these technologies can have, such as their potential in enhancing creativity, productivity, education, or other fields of human endeavor. Many parts of the discourse are marked by a repetitive nature, echoing previously mentioned concerns, often without sufficient empirical or statistical backing. A more grounded approach, one that integrates empirical data and balanced analysis, is essential for an accurate understanding of the ethics landscape. However, this critique should not diminish the importance of ethics research, which remains paramount for inspiring responsible ways of embracing the transformative potential of generative AI.
References
1 A 3 O R N. (2023). Propaganda or Science: Open Source AI and Bioterrorism Risk. 1 A 3 O R N. https://1a3orn.com/sub/essays-propaganda-or-science.html. Accessed 7 November 2023.
Akbar, M. A., Khan, A. A., & Liang, P. (2023). Ethical aspects of ChatGPT in software engineering research. arXiv, 1–14.
Al-Kaswan, A., & Izadi, M. (2023). The (ab)use of open source code to train large language models. arXiv, 1–2.
Allen, J. W., Earp, B. D., Koplin, J., & Wilkinson, D. (2024). Consent-GPT: Is it ethical to delegate procedural consent to conversational AI? Journal of Medical Ethics, 50(2), 77–83.
Amer, S. K. (2023). AI Imagery and the overton window. arXiv, 1–18.
Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2017). Concrete problems in AI safety. arXiv, 1–29.
Anderljung, M., Barnhart, J., Korinek, A., Leung, J., O'Keefe, C., Whittlestone, J., Avin, S., Brundage, M., Bullock, J., Cass-Beggs, D., Chang, B., Collins, T., Fist, T., Hadfield, G., Hayes, A., Ho, L., Hooker, S., Horvitz, E., Kolt, N., Schuett, J., Shavit, Y., Siddarth, D., Trager, R., & Wolf, K. (2023). Frontier AI regulation: Managing emerging risks to public safety. arXiv, 1–51.
Aničin, L., & Stojmenović, M. (2023). Bias analysis in stable diffusion and MidJourney models. In S. Nandan Mohanty, V. Garcia Diaz, & G. A. E. Satish Kumar (Eds.), Intelligent systems and machine learning (pp. 378–388). Springer.
Arksey, H., & O’Malley, L. (2005). Scoping studies: Towards a methodological framework. International Journal of Social Research Methodology, 8(1), 19–32.
Azaria, A., Azoulay, R., & Reches, S. (2023). ChatGPT is a remarkable tool‐for experts. arXiv, 1–37.
Bajgar, O., & Horenovsky, J. (2023). Negative human rights as a basis for long-term AI safety and regulation. Journal of Artificial Intelligence Research, 2(76), 1043–1075.
Barnett, J. (2023). The ethical implications of generative audio models: A systematic literature review. In F. Rossi, S. Das, J. Davis, K. Firth-Butterfield, & A. John (Eds.) (pp. 146–161). ACM.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In (pp. 610–623). ACM.
Bengio, Y., Hinton, G., Yao, A., Song, D., Abbeel, P., Harari, Y. N., Zhang, Y.-Q., Xue, L., Shalev-Shwartz, S., Hadfield, G., Clune, J., Maharaj, T., Hutter, F., Baydin, A. G., McIlraith, S., Gao, Q., Acharya, A., Krueger, D., Dragan, A., Torr, P., Russell, S., Kahneman, D., Brauner, J., & Mindermann, S. (2023). Managing AI risks in an era of rapid progress. arXiv, 1–7.
Bird, C., Ungless, E., & Kasirzadeh, A. (2023). Typology of risks of generative text-to-image models. In F. Rossi, S. Das, J. Davis, K. Firth-Butterfield, & A. John (Eds.) (pp. 396–410). ACM.
Birhane, A., Kalluri, P., Card, D., Agnew, W., Dotan, R., & Bao, M. (2021). The values encoded in machine learning research. arXiv, 1–28.
Blease, C. (2024). Open AI meets open notes: Surveillance capitalism, patient privacy and online record access. Journal of Medical Ethics, 50(2), 84–89.
Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., Arx, S. V., Bernstein, M. S., Bohg, J., Bosselut, A., Brunskill, E., Brynjolfsson, E., Buch, S., Card, D., Castellon, R., Chatterji, N., Chen, A., Creel, K., Davis, J. Q., Demszky, D., Donahue, C., Doumbouya, M., Durmus, E., Ermon, S., Etchemendy, J., Ethayarajh, K., Fei-Fei, L., Finn, C., Gale, T., Gillespie, L., Goel, K., Goodman, N., Grossman, S., Guha, N., Hashimoto, T., Henderson, P., Hewitt, J., Ho, D. E., Hong, J., Hsu, K., Huang, J., Icard, T., Jain, S., Jurafsky, D., Kalluri, P., Karamcheti, S., Keeling, G., Khani, F., Khattab, O., Koh, P. W., Krass, M., Krishna, R., Kuditipudi, R., Kumar, A., Ladhak, F., Lee, M., Lee, T., Leskovec, J., Levent, I., Li, X. L., Li, X., Ma, T., Malik, A., Manning, C. D., Mirchandani, S., Mitchell, E., Munyikwa, Z., Nair, S., Narayan, A., Narayanan, D., Newman, B., Nie, A., Niebles, J. C., Nilforoshan, H., Nyarko, J., Ogut, G., Orr, L., Papadimitriou, I., Park, J. S., Piech, C., Portelance, E., Potts, C., Raghunathan, A., Reich, R., Ren, H., Rong, F., Roohani, Y., Ruiz, C., Ryan, J., Ré, C., Sadigh, D., Sagawa, S., Santhanam, K., Shih, A., Srinivasan, K., Tamkin, A., Taori, R., Thomas, A. W., Tramèr, F., Wang, R. E., Wang, W., Wu, B., Wu, J., Wu, Y., Xie, S. M., Yasunaga, M., You, J., Zaharia, M., Zhang, M., Zhang, T., Zhang, X., Zhang, Y., Zheng, L., Zhou, K., & Liang, P. (2021). On the opportunities and risks of foundation models. arXiv, 1–212.
Borji, A. (2023). A categorical archive of ChatGPT failures. arXiv, 1–41.
Boscardin, C. K., Gin, B., Golde, P. B., & Hauer, K. E. (2024). ChatGPT and generative artificial intelligence for medical education: Potential impact and opportunity. Academic Medicine, 99(1), 22–27.
Bossert, L., & Hagendorff, T. (2021). Animals and AI. The role of animals in AI research and application—An overview and ethical evaluation. Technology in Society, 67, 1–7.
Bossert, L., & Hagendorff, T. (2023). The ethics of sustainable AI: Why animals (should) matter for a sustainable use of AI. Sustainable Development, 31(5), 3459–3467.
Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., Nori, H., Palangi, H., Ribeiro, M. T., & Zhang, Y. (2023). Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv, 1–154.
D’Alessandro, W., Lloyd, H. R., & Sharadin, N. (2023). Large language models and biorisk. The American Journal of Bioethics, 23(10), 115–118.
de Angelis, L., Baglivo, F., Arzilli, G., Privitera, G. P., Ferragina, P., Tozzi, A. E., & Rizzo, C. (2023). ChatGPT and the rise of large language models: The new AI-driven infodemic threat in public health. Frontiers in Public Health, 11, 1–8.
Deng, Y., Zhang, W., Pan, S. J., & Bing, L. (2023). Multilingual jailbreak challenges in large language models. arXiv, 1–16.
Derczynski, L., Kirk, H. R., Balachandran, V., Kumar, S., Tsvetkov, Y., Leiser, & Mohammad, S. (2023). Assessing language model deployment with risk cards. arXiv, 1–18.
Dergaa, I., Chamari, K., Zmijewski, P., & Saad, H. B. (2023). From human writing to artificial intelligence generated text: Examining the prospects and potential threats of ChatGPT in academic writing. Biology of Sport, 40(2), 615–622.
Derner, E., & Batistič, K. (2023). Beyond the safeguards: Exploring the security risks of ChatGPT. arXiv, 1–8.
Dinan, E., Abercrombie, G., Bergman, A. S., Spruit, S., Hovy, D., Boureau, Y.-L., & Rieser, V. (2023). Anticipating safety issues in E2E conversational AI: Framework and tooling. arXiv, 1–43.
Donnarumma, M. (2022). Against the norm: othering and otherness in AI aesthetics. Digital Culture & Society, 8(2), 39–66.
Dung, L. (2023a). Current cases of AI misalignment and their implications for future risks. Synthese, 202(5), 1–23.
Dung, L. (2023b). The argument for near-term human disempowerment through AI, 1–26.
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., & others. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71(102642), 1–63.
Epstein, Z., Hertzmann, A., Akten, M., Farid, H., Fjeld, J., Frank, M. R., Groh, M., Herman, L., Leach, N., Mahari, R., Pentland, A. S., Russakovsky, O., Schroeder, H., & Smith, A. (2023). Art and the science of generative AI. Science (New York, N.Y.), 380(6650), 1110–1111.
Falade, P. V. (2023). Decoding the threat landscape: ChatGPT, FraudGPT, and WormGPT in social engineering attacks. International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 9(5), 185–198.
Firt, E. (2023). Calibrating machine behavior: A challenge for AI alignment. Ethics and Information Technology, 25(3), 1–8.
Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication No. 2020-1. SSRN Electronic Journal, 1–39.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707.
Floridi, L., Holweg, M., Taddeo, M., Silva, J. A., Mökander, J., & Wen, Y. (2022). capAI—A procedure for conducting conformity assessment of AI systems in line with the EU artificial intelligence act. SSRN Electronic Journal, 1–90.
Fraser, K. C., Kiritchenko, S., & Nejadgholi, I. (2023). A friendly face: Do text-to-image systems rely on stereotypes when the input is under-specified? arXiv, 1–17.
Fraser, K. C., Kiritchenko, S., & Nejadgholi, I. (2023). Diversity is not a one-way street: Pilot study on ethical interventions for racial bias in text-to-image systems, 1–5.
Gabriel, I., Manzini, A., Keeling, G., Hendricks, L. A., Rieser, V., Iqbal, H., Tomašev, N., Ktena, I., Kenton, Z., Rodriguez, M., El-Sayed, S., Brown, S., Akbulut, C., Trask, A., Hughes, E., Bergman, A. S., Shelby, R., Marchal, N., Griffin, C., Mateos-Garcia, J., Weidinger, L., Street, W., Lange, B., Ingerman, A., Lentz, A., Enger, R., Barakat, A., Krakovna, V., Siy, J. O., Kurth-Nelson, Z., McCroskery, A., Bolina, V., Law, H., Shanahan, M., Alberts, L., Balle, B., Haas, S. D., Ibitoye, Y., Dafoe, A., Goldberg, B., Krier, S., Reese, A., Witherspoon, S., Hawkins, W., Rauh, M., Wallace, D., Franklin, M., Goldstein, J. A., Lehman, J., Klenk, M., Vallor, S., Biles, C., Morris, M. R., King, H., Arcas, B. A. y., Isaac, W., & Manyika, J. (2024). The ethics of advanced AI assistants. arXiv, 1–273.
Gemini Team, Anil, R., Borgeaud, S., Wu, Y., Alayrac, J.-B., Yu, J., Soricut, R., Schalkwyk, J., Dai, A. M., Hauth, A., Millican, K., Silver, D., Petrov, S., Johnson, M., Antonoglou, I., Schrittwieser, J., Glaese, A., Chen, J., Pitler, E., Lillicrap, T., & others. (2023). Gemini: A family of highly capable multimodal models. arXiv, 1–50.
Ghosh, A., & Lakshmi, D. (2023). Dual governance: The intersection of centralized regulation and crowdsourced safety mechanisms for Generative AI. arXiv, 1–11.
Gill, S. S., & Kaur, R. (2023). ChatGPT: Vision and challenges. Internet of Things and Cyber-Physical Systems, 3, 262–271.
Goetze, T. S., & Abramson, D. (2021). Bigger isn’t better: The ethical and scientific vices of extra-large datasets in language models. In (pp. 69–75). ACM.
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial networks. arXiv, 1–9.
Gozalo-Brizuela, R., & Garrido-Merchan, E. C. (2023). ChatGPT is not all you need. A state of the art review of large generative AI models. arXiv, 1–22.
Greene, J. D., Morelli, S. A., Lowenberg, K., Nystrom, L. E., & Cohen, J. D. (2008). Cognitive load selectively interferes with utilitarian moral judgment. Cognition, 107(3), 1144–1154.
Grinbaum, A., & Adomaitis, L. (2022). The ethical need for watermarks in machine-generated language. arXiv, 1–8.
Gupta, M., Akiri, C., Aryal, K., Parker, E., & Praharaj, L. (2023). From ChatGPT to ThreatGPT: Impact of generative AI in cybersecurity and privacy. arXiv, 1–27.
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(3), 457–461.
Hagendorff, T. (2021). Forbidden knowledge in machine learning: Reflections on the limits of research and publication. AI & Society - Journal of Knowledge, Culture and Communication, 36(3), 767–781.
Hagendorff, T. (2022a). A virtue-based framework to support putting AI ethics into practice. Philosophy & Technology, 35(3), 1–24.
Hagendorff, T. (2022b). Blind spots in AI ethics. AI and Ethics, 2(4), 851–867.
Hagendorff, T. (2023a). AI ethics and its pitfalls: Not living up to its own standards? AI and Ethics, 3(1), 329–336.
Hagendorff, T. (2024). Deception abilities emerged in large language models. Proceedings of the National Academy of Sciences, 121(24), 1-8.
Hagendorff, T., Bossert, L. N., Tse, Y. F., & Singer, P. (2023). Speciesist bias in AI: How AI applications perpetuate discrimination and unfair outcomes against animals. AI and Ethics, 3(3), 717–734.
Hendrycks, D., Carlini, N., Schulman, J., & Steinhardt, J. (2022). Unsolved problems in ML safety. arXiv, 1–28.
Hendrycks, D., Mazeika, M., & Woodside, T. (2023). An overview of catastrophic AI risks. arXiv, 1–54.
Ho, J., Jain, A., & Abbeel, P. (2020). Denoising diffusion probabilistic models. arXiv, 1–25.
Holzapfel, A., Jääskeläinen, P., & Kaila, A.-K. (2022). Environmental and social sustainability of creative-Ai. arXiv, 1–4.
Hosseini, M., Resnik, D. B., & Holmes, K. (2023). The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts. Research Ethics, 19(4), 449–465.
Hou, B. L., & Green, B. P. (2023). A multi-level framework for the AI alignment problem. arXiv, 1–7.
Huang, J., Shao, H., & Chang, K. C.-C. (2022). Are large pre-trained language models leaking your personal information? arXiv, 1–10.
Illia, L., Colleoni, E., & Zyglidopoulos, S. (2023). Ethical implications of text generation in the age of artificial intelligence. Business Ethics, the Environment & Responsibility, 32(1), 201–210.
Ji, J., Qiu, T., Chen, B., Zhang, B., Lou, H., Wang, K., Duan, Y., He, Z., Zhou, J., Zhang, Z., Zeng, F., Ng, K. Y., Dai, J., Pan, X., O'Gara, A., Lei, Y., Xu, H., Tse, B., Fu, J., McAleer, S., Yang, Y., Wang, Y., Zhu, S.-C., Guo, Y., & Gao, W. (2023). AI Alignment: A comprehensive survey. arXiv, 1–95.
Jiang, H. H., Brown, L., Cheng, J., Khan, M., Gupta, A., Workman, D., Hanna, A., Flowers, J., & Gebru, T. (2021). AI Art and its impact on artists. In (pp. 363–374). ACM.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.
Kasirzadeh, A. (2024). Two types of AI existential risk: Decisive and accumulative. arXiv, 1–31.
Kasirzadeh, A., & Gabriel, I. (2023). In conversation with Artificial Intelligence: Aligning language models with human values. arXiv, 1–30.
Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., Stadler, M., Weller, J., Kuhn, J., & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 1–9.
Kenton, Z., Everitt, T., Weidinger, L., Gabriel, I., Mikulik, V., & Irving, G. (2021). Alignment of language agents. arXiv, 1–18.
Kenwright, B. (2023). Exploring the power of creative AI tools and game-based methodologies for interactive web-based programming. arXiv, 1–20.
Khlaif, Z. N. (2023). Ethical concerns about using AI-generated text in scientific research. SSRN Electronic Journal, 1–4.
Khowaja, S. A., Khuwaja, P., & Dev, K. (2023). ChatGPT needs SPADE (sustainability, PrivAcy, digital divide, and ethics) evaluation: A review. arXiv, 1–15.
Kingma, D. P., & Welling, M. (2013). Auto-encoding variational bayes. arXiv, 1–14.
Kirk, H. R., Vidgen, B., Röttger, P., & Hale, S. A. (2024). The benefits, risks and bounds of personalizing the alignment of large language models to individuals. Nature Machine Intelligence, 6(4), 383–392.
Knott, A., Pedreschi, D., Chatila, R., Chakraborti, T., Leavy, S., Baeza-Yates, R., Eyers, D., Trotman, A., Teal, P. D., Biecek, P., Russell, S., & Bengio, Y. (2023). Generative AI models should include detection mechanisms as a condition for public release. Ethics and Information Technology, 25(4), 1–7.
Koessler, L., & Schuett, J. (2023). Risk assessment at AGI companies: A review of popular risk assessment techniques from other safety-critical industries. arXiv, 1–44.
Korinek, A., & Balwit, A. (2022). Aligned with whom? Direct and social goals for AI systems. SSRN Electronic Journal, 1–24.
Latif, E., Mai, G., Nyaaba, M., Wu, X., Liu, N., Lu, G., Li, S., Liu, T., & Zhai, X. (2023). Artificial general intelligence (AGI) for education. arXiv, 1–30.
Lazar, S., & Nelson, A. (2023). AI safety on whose terms? Science, 381(6654), 138.
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436–444.
Li, Z. (2023). The dark side of ChatGPT: Legal and ethical challenges from stochastic parrots and hallucination. arXiv, 1–3.
Liu, Y., Yao, Y., Ton, J.-F., Zhang, X., Cheng, R. G. H., Klochkov, Y., Taufiq, M. F., & Li, H. (2023). Trustworthy LLMs: A survey and guideline for evaluating large language models' alignment. arXiv, 1–81.
Mannuru, N. R., Shahriar, S., Teel, Z. A., Wang, T., Lund, B. D., Tijani, S., Pohboon, C. O., Agbaji, D., Alhassan, J., Galley, J., & others. (2023). Artificial intelligence in developing countries: The impact of generative artificial intelligence (AI) technologies for development. Information Development, 1–19.
McAleese, S. (2022). How do AI timelines affect existential risk? arXiv, 1–20.
McIntosh, T. R., Susnjak, T., Liu, T., Watters, P., & Halgamuge, M. N. (2023). From Google Gemini to OpenAI Q* (Q-Star): A survey of reshaping the generative artificial intelligence (AI) research landscape. arXiv, 1–30.
Mialon, G., Dessì, R., Lomeli, M., Nalmpantis, C., Pasunuru, R., Raileanu, R., Rozière, B., Schick, T., Dwivedi-Yu, J., Celikyilmaz, A., Grave, E., LeCun, Y., & Scialom, T. (2023). Augmented language models: A survey. arXiv, 1–33.
Minaee, S., Mikolov, T., Nikzad, N., Chenaghlu, M., Socher, R., Amatriain, X., & Gao, J. (2024). Large language models: A survey. arXiv, 1–43.
Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507.
Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Annals of Internal Medicine, 151(4), 264–269.
Mökander, J., Axente, M., Casolari, F., & Floridi, L. (2021). Conformity assessments and post-market monitoring: A guide to the role of auditing in the proposed European AI regulation. Minds and Machines, 1–28.
Mökander, J., Schuett, J., Kirk, H. R., & Floridi, L. (2023). Auditing large language models: A three-layered approach. arXiv, 1–29.
Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2019). A typology of AI ethics tools, methods and research to translate principles into practices. AI for social good workshop at NeurIPS (2019). Vancouver, 1–8.
Mozes, M., He, X., Kleinberg, B., & Griffin, L. D. (2023). Use of LLMs for illicit purposes: Threats, prevention measures, and vulnerabilities. arXiv, 1–35.
Munn, L. (2023). The uselessness of AI ethics. AI and Ethics, 3(3), 869–877.
Nannini, L. (2023). Voluminous yet vacuous? Semantic capital in an age of large language models. arXiv, 1–11.
Neubert, M. J., & Montañez, G. D. (2020). Virtue as a framework for the design and use of artificial intelligence. Business Horizons, 63(2), 195–204.
Ngo, R., Chan, L., & Mindermann, S. (2022). The alignment problem from a deep learning perspective. arXiv, 1–26.
Noy, S., & Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. Science, 381(6654), 187–192.
OpenAI. (2022). Introducing ChatGPT. https://openai.com/blog/chatgpt. Accessed 3 July 2023.
OpenAI. (2023). GPT-4V(ision) System Card 1–18. https://cdn.openai.com/papers/GPTV_System_Card.pdf. Accessed 13 October 2023.
Oppenlaender, J. (2023). The cultivated practices of text-to-image generation. arXiv, 1–31.
Oviedo-Trespalacios, O., Peden, A. E., Cole-Hunter, T., Costantini, A., Haghani, M., Rod, J. E., Kelly, S., Torkamaan, H., Tariq, A., David Albert Newton, J., Gallagher, T., Steinert, S., Filtness, A. J., & Reniers, G. (2023). The risks of using ChatGPT to obtain common safety-related information and advice. Safety Science, 167, 1–22.
Owe, A., & Baum, S. D. (2021). Moral consideration of nonhumans in the ethics of artificial intelligence. AI and Ethics, 1–12.
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., McGuinness, L. A., Stewart, L. A., Thomas, J., Tricco, A. C., Welch, V. A., Whiting, P., & Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, 1–9.
Panagopoulou, F., Parpoula, C., & Karpouzis, K. (2023). Legal and ethical considerations regarding the use of ChatGPT in education. arXiv, 1–11.
Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S., & Lakshminarayanan, B. (2021). Normalizing flows for probabilistic modeling and inference. The Journal of Machine Learning Research, 22(1), 2617–2680.
Park, P. S., Goldstein, S., O’Gara, A., Chen, M., & Hendrycks, D. (2024). AI deception: A survey of examples, risks, and potential solution. Cell Patterns, 5(5), 1–20.
Partow-Navid, P., & Skusky, L. (2023). The need for international AI activities monitoring. Journal of International Technology and Information Management, 114–127.
Patwardhan, T., Liu, K., Markov, T., Chowdhury, N., Leet, D., Cone, N., Maltbie, C., Huizinga, J., Wainwright, C. L., Jackson, S., Adler, S., Casagrande, R., & Madry, A. (2024). Building an early warning system for LLM-aided biological threat creation. OpenAI. https://openai.com/research/building-an-early-warning-system-for-llm-aided-biological-threat-creation. Accessed 5 February 2024.
Paxton, J. M., & Greene, J. D. (2010). Moral reasoning: Hints and allegations. Topics in Cognitive Science, 2(3), 511–527.
Piskopani, A. M., Chamberlain, A., & ten Holter, C. (2023). Responsible AI and the Arts: The ethical and legal implications of AI in the arts and creative industries. In (pp. 1–5). ACM.
Porsdam Mann, S., Earp, B. D., Nyholm, S., Danaher, J., Møller, N., Bowman-Smart, H., Hatherley, J., Koplin, J., Plozza, M., Rodger, D., & others. (2023). Generative AI entails a credit-blame asymmetry. Nature Machine Intelligence, 5, 472–475.
Qi, X., Huang, K., Panda, A., Wang, M., & Mittal, P. (2023). Visual adversarial examples jailbreak aligned large language models. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, & J. Scarlett (Eds.) (pp. 1–16). IBM.
Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., & Sutskever, I. (2021). Zero-shot text-to-image generation. arXiv, 1–20.
Ray, P. P. (2023). ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems, 121–154.
Ray, P. P., & Das, P. K. (2023). Charting the terrain of artificial intelligence: A multidimensional exploration of ethics, agency, and future directions. Philosophy & Technology, 36(2), 1–40.
Rezende, D. J., & Mohamed, S. (2015). Variational inference with normalizing flows. arXiv, 1–10.
Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2022). High-resolution image synthesis with latent diffusion models. arXiv, 1–45.
Rozin, P., & Royzman, E. B. (2016). Negativity bias, negativity dominance, and contagion. Personality and Social Psychology Review, 5(4), 296–320.
Sætra, H. S. (2023). Generative AI: Here to stay, but for good? Technology in Society, 75, 1–5.
Sætra, H. S., & Danaher, J. (2022). To each technology its own ethics: The problem of ethical proliferation. Philosophy & Technology, 35(4), 1–26.
Saldaña, J. (2021). The coding manual for qualitative researchers. Sage.
Sandbrink, J. B. (2023). Artificial intelligence and biological misuse: Differentiating risks of language models and biological design tools. arXiv, 1–9.
Scerbo, M. W. (2023). Can artificial intelligence be my coauthor? Simulation in Healthcare, 75, 215–218.
Schlagwein, D., & Willcocks, L. (2023). ‘ChatGPT et al.’: The ethics of using (generative) artificial intelligence in research and science. Journal of Information Technology, 38(3), 232–238.
Schmitt, M., & Flechais, I. (2023). Digital Deception: Generative artificial intelligence in social engineering and phishing. SSRN Electronic Journal, 1–18.
Segers, S. (2023). Why we should (not) worry about generative AI in medical ethics teaching. International Journal of Ethics Education, 1–7.
Shah, R., Varma, V., Kumar, R., Phuong, M., Krakovna, V., Uesato, J., & Kenton, Z. (2022). Goal misgeneralization: Why correct specifications aren't enough for correct goals. arXiv, 1–24.
Shardlow, M., & Przybyła, P. (2022). Deanthropomorphising NLP: Can a language model be conscious? arXiv, 1–20.
Shelby, R., Rismani, S., Henne, K., Moon, A., Rostamzadeh, N., Nicholas, P., Yilla-Akbari, N., Gallegos, J., Smart, A., Garcia, E., & others. (2023). Sociotechnical harms of algorithmic systems: Scoping a taxonomy for harm reduction. In F. Rossi, S. Das, J. Davis, K. Firth-Butterfield, & A. John (Eds.) (pp. 723–741). ACM.
Shen, T., Jin, R., Huang, Y., Liu, C., Dong, W., Guo, Z., Wu, X., Liu, Y., & Xiong, D. (2023). Large language model alignment: A survey. arXiv, 1–76.
Shevlane, T., Farquhar, S., Garfinkel, B., Phuong, M., Whittlestone, J., Leung, J., Kokotajlo, D., Marchal, N., Anderljung, M., Kolt, N., Ho, L., Siddarth, D., Avin, S., Hawkins, W., Kim, B., Gabriel, I., Bolina, V., Clark, J., Bengio, Y., Christiano, P., & Dafoe, A. (2023). Model evaluation for extreme risks. arXiv, 1–20.
Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., Chen, Y., Lillicrap, T., Hui, F., Sifre, L., van den Driessche, G., Graepel, T., & Hassabis, D. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354–359.
Singer, P., & Tse, Y. F. (2022). AI ethics: The case for including animals. AI and Ethics, 1–13.
Smith, V., Shamsabadi, A. S., Ashurst, C., & Weller, A. (2023). Identifying and mitigating privacy risks stemming from language models: A survey. arXiv, 1–18.
Sok, S., & Heng, K. (2023). ChatGPT for education and research: A review of benefits and risks. SSRN Electronic Journal, 1–12.
Solaiman, I., Talat, Z., Agnew, W., Ahmad, L., Baker, D., Blodgett, S. L., Daumé, H., III, Dodge, J., Evans, E., Hooker, S., Jernite, Y., Luccioni, A. S., Lusoli, A., Mitchell, M., Newman, J., Png, M.-T., Strait, A., & Vassilev, A. (2023). Evaluating the social impact of generative AI systems in systems and society. arXiv, 1–41.
Spennemann, D. H. R. (2023). Exploring ethical boundaries: Can ChatGPT be prompted to give advice on how to cheat in university assignments? arXiv, 1–15.
Strasser, A. (2023). On pitfalls (and advantages) of sophisticated large language models. arXiv, 1–13.
Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. arXiv, 1–6.
Sun, L., Wei, M., Sun, Y., Suh, Y. J., Shen, L., & Yang, S. (2023b). Smiling women pitching down: Auditing representational and presentational gender biases in image generative AI. arXiv, 1–33.
Sun, H., Zhang, Z., Deng, J., Cheng, J., & Huang, M. (2023a). Safety assessment of Chinese large language models. arXiv, 1–10.
Susnjak, T. (2022). ChatGPT: The end of online exam integrity? arXiv, 1–21.
Tomlinson, B., Black, R. W., Patterson, D. J., & Torrance, A. W. (2023). The carbon emissions of writing and illustrating are lower for AI than for humans. arXiv, 1–21.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. arXiv, 1–15.
Wahle, J. P., Ruas, T., Mohammad, S. M., Meuschke, N., & Gipp, B. (2023). AI usage cards: Responsibly reporting AI-generated content. arXiv, 1–11.
Walczak, K., & Cellary, W. (2023). Challenges for higher education in the era of widespread access to Generative AI. Economics and Business Review, 9(2), 71–100.
Wang, B., Chen, W., Pei, H., Xie, C., Kang, M., Zhang, C., Xu, C., Xiong, Z., Dutta, R., Schaeffer, R., Truong, S. T., Arora, S., Mazeika, M., Hendrycks, D., Lin, Z., Cheng, Y., Koyejo, S., Song, D., & Li, B. (2023a). DecodingTrust: A comprehensive assessment of trustworthiness in GPT models. arXiv, 1–95.
Wang, W., Jiao, W., Huang, J., Dai, R., Huang, J., Tu, Z., & Lyu, M. R. (2023b). Not all countries celebrate thanksgiving: On the cultural dominance in large language models. arXiv, 1–16.
Wang, W., Tu, Z., Chen, C., Yuan, Y., Huang, J., Jiao, W., & Lyu, M. R. (2023c). All languages matter: On the multilingual safety of large language models. arXiv, 1–12.
Wang, X., Chen, G., Qian, G., Gao, P., Wei, X.-Y., Wang, Y., Tian, Y., & Gao, W. (2023d). Large-scale multi-modal pre-trained models: A comprehensive survey. arXiv, 1–45.
Wang, Y., Pan, Y., Yan, M., Su, Z., & Luan, T. H. (2023e). A survey on ChatGPT: AI-generated contents, challenges, and solutions. arXiv, 1–20.
Wang, Y. (2023). Synthetic realities in the digital age: Navigating the opportunities and challenges of AI-generated content. Authorea Preprints, 1–8.
Weidinger, L., Rauh, M., Marchal, N., Manzini, A., Hendricks, L. A., Mateos-Garcia, J., Bergman, S., Kay, J., Griffin, C., Bariach, B., & others. (2023). Sociotechnical Safety evaluation of generative AI systems. arXiv, 1–76.
Weidinger, L., Uesato, J., Rauh, M., Griffin, C., Huang, P.-S., Mellor, J., Glaese, A., Cheng, M., Balle, B., Kasirzadeh, A., Biles, C., Brown, S., Kenton, Z., Hawkins, W., Stepleton, T., Birhane, A., Hendricks, L. A., Rimell, L., Isaac, W., Haas, J., Legassick, S., Irving, G., & Gabriel, I. (2022). Taxonomy of risks posed by language models. In (pp. 214–229). ACM.
Wu, X., Duan, R., & Ni, J. (2023). Unveiling security, privacy, and ethical concerns of ChatGPT. arXiv, 1–12.
Xi, Z., Chen, W., Guo, X., He, W., Ding, Y., Hong, B., Zhang, M., Wang, J., Jin, S., Zhou, E., Zheng, R., Fan, X., Wang, X., Xiong, L., Zhou, Y., Wang, W., Jiang, C., Zou, Y., Liu, X., Yin, Z., Dou, S., Weng, R., Cheng, W., Zhang, Q., Qin, W., Zheng, Y., Qiu, X., Huang, X., & Gui, T. (2023). The rise and potential of large language model based agents: A survey. arXiv, 1–86.
Yang, Z., Zhan, F., Liu, K., Xu, M., & Lu, S. (2023). AI-generated images as data source: The dawn of synthetic era. arXiv, 1–20.
Zhan, X., Xu, Y., & Sarkadi, S. (2023). Deceptive AI ecosystems: The case of ChatGPT. arXiv, 1–6.
Zhang, C., Zhang, C., Li, C., Qiao, Y., Zheng, S., Dam, S. K., Zhang, M., Kim, J. U., Kim, S. T., Choi, J., Park, G.-M., Bae, S.-H., Lee, L.-H., Hui, P., Kweon, I. S., & Hong, C. S. (2023). One small step for generative AI, one giant leap for AGI: A complete survey on ChatGPT in AIGC era. arXiv, 1–29.
Zhou, K.-Q., & Nabus, H. (2023). The ethical implications of DALL-E: opportunities and challenges. Mesopotamian Journal of Computer Science, 17–23.
Zhuo, T. Y., Huang, Y., Chen, C., & Xing, Z. (2023). Red teaming ChatGPT via Jailbreaking: Bias, robustness, reliability and toxicity. arXiv, 1–12.
Zohny, H., McMillan, J., & King, M. (2023). Ethics of generative AI. Journal of Medical Ethics, 49(2), 79–80.
Acknowledgements
This research was supported by the Ministry of Science, Research, and the Arts Baden-Württemberg under Az. 33-7533-9-19/54/5 within Reflecting Intelligent Systems for Diversity, Demography and Democracy (IRIS3D) as well as by the Interchange Forum for Reflecting on Intelligent Systems (IRIS) at the University of Stuttgart. Thanks to Maluna Menke, Francesca Carlon, and Sarah Fabi for their assistance with the development of the manuscript and the literature analysis, and for their helpful comments.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A
Table 1 shows the combinations of keywords and prompts used for the searches on Google Scholar, arXiv, and PhilPapers; Table 2 shows those used for Elicit. Since not all search engines apply the default precedence for Boolean operators, we avoided combining AND and OR within a single search term and instead ran multiple similar iterations using only AND. The searches were performed on October 30, 2023 (Google Scholar, arXiv, Elicit) and on December 21, 2023 (PhilPapers). After the initial systematic collection, we continued to monitor the literature for relevant papers until January 22, 2024.
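To make this iteration strategy concrete, the following minimal Python sketch expands groups of interchangeable terms (which would otherwise be joined with OR) into separate AND-only queries. The keyword groups shown here are hypothetical placeholders; the actual keywords and prompts are those listed in Tables 1 and 2.

```python
# Illustrative sketch only: the terms below are hypothetical placeholders,
# not the actual search terms from Tables 1 and 2.
from itertools import product

# Each inner list holds interchangeable terms that would otherwise be joined with OR.
keyword_groups = [
    ["generative AI", "large language models"],   # hypothetical group 1
    ["ethics", "normative issues"],               # hypothetical group 2
]

# Expand the OR-alternatives into separate AND-only queries,
# avoiding mixed AND/OR precedence across search engines.
queries = [
    " AND ".join(f'"{term}"' for term in combination)
    for combination in product(*keyword_groups)
]

for query in queries:
    print(query)
# "generative AI" AND "ethics"
# "generative AI" AND "normative issues"
# "large language models" AND "ethics"
# "large language models" AND "normative issues"
```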
Appendix B
The inclusion criteria required the papers (1) to be written in English, (2) to explicitly refer to ethical implications or normative dimensions of generative AI or subfields thereof, (3) not merely to touch upon ethical topics but to devote at least several paragraphs to analyzing them, (4) not to be purely technical research works, (5) not to contain purely ethical or philosophical arguments without any reference to generative AI, (6) not to be books, websites, policy papers, white papers, PhD theses, term papers, interviews, messages from editors, or any format other than scientific papers, (7) not to be about moral or ethical reasoning in AI systems themselves, (8) not to survey individuals about their opinions on generative AI, and (9) to be accessible via Internet sources.
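For illustration only, the nine criteria can be read as a single conjunctive filter applied to each candidate record. The sketch below encodes this reading in Python; the field names are assumptions introduced for the example, not part of the published screening protocol.

```python
# Illustrative sketch of the screening logic; all field names are assumptions.
from dataclasses import dataclass

@dataclass
class Record:
    language: str                              # criterion (1)
    addresses_generative_ai_ethics: bool       # criterion (2)
    dedicated_ethical_analysis: bool           # criterion (3): several paragraphs, not a passing mention
    purely_technical: bool                     # criterion (4)
    ethics_only_without_genai: bool            # criterion (5)
    format: str                                # criterion (6): e.g. "scientific paper", "book", "white paper"
    about_machine_moral_reasoning: bool        # criterion (7)
    opinion_survey: bool                       # criterion (8)
    accessible_online: bool                    # criterion (9)

def is_included(r: Record) -> bool:
    """Return True only if a record satisfies all nine inclusion criteria."""
    return (
        r.language == "English"
        and r.addresses_generative_ai_ethics
        and r.dedicated_ethical_analysis
        and not r.purely_technical
        and not r.ethics_only_without_genai
        and r.format == "scientific paper"
        and not r.about_machine_moral_reasoning
        and not r.opinion_survey
        and r.accessible_online
    )
```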
Appendix C
Table 3 shows additional records identified through citation chaining. Table 4 shows additional records identified through monitoring of the literature after the initial paper search process.
Appendix D
The complete list of all references that were included in the content analysis can be found in the supplementary material.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Hagendorff, T. Mapping the Ethics of Generative AI: A Comprehensive Scoping Review. Minds & Machines 34, 39 (2024). https://doi.org/10.1007/s11023-024-09694-w