1 Introduction

Generative AI (the term stands for “generative artificial intelligence”) has attracted a lot of attention in society, business, and science. The trend has been growing since 2018, and the big breakthrough came in 2022. In particular, AI-powered text and image generators are now widely used. A variety of ethical issues arise from this. The process carried out by image generators is called image synthesis; when images are produced from text-based prompts, the terms text-to-image synthesis or text-to-image generation are used (Kang et al. 2023). The term “image synthesis” is reminiscent of voice synthesis, which also raised quite a few ethical questions (Bendel 2017). “Image synthesis” and “image generation” are often used synonymously; however, the first term can also be understood more comprehensively. This paper deals exclusively with AI-based image synthesis.

The ethical issues relate, for example, to the origin of the data, the input of data and information, and the nature of the resulting texts and images. But image synthesis also raises quite different questions. Is a new form of digital, artificial beauty emerging? Do people come under pressure to measure themselves against this digital, artificial beauty? Are new ideals of beauty emerging? Or do image generators support laypeople and experts and open up new possibilities for them? Do they replace experts and benefit the IT and AI corporations? These are the kinds of questions that will be asked here and answered to some extent.

So far, there is hardly any scientific literature on this highly relevant topic. From the perspective of ethics, generative AI is primarily viewed in terms of text generators, with a focus on areas of application such as science (for instance medicine) or business (Zohny et al. 2023; Hoi 2023). Moral questions about image generators, by contrast, are mainly raised in the media. For example, a discussion flared up about photorealistic images of young, pretty, barely clothed women. It was asked whether this was associated with sexist ideas or whether—from a completely different perspective—the modelling industry, with its exploitative practices, would become obsolete (Lobe 2023).

This paper first gives an introduction to generative AI and then to applied ethics in this context. The author then discusses three image generators: DALL-E 2, Stable Diffusion, and Midjourney. He elaborates technical details and basic principles and compares their similarities and differences. This is followed by a structured, detailed ethical discussion in which not only risks but also opportunities are addressed. Each topic is placed in the context of applied ethics. A summary with an outlook rounds off the article.

2 Generative AI

Generative AI (“AI” stands for “artificial intelligence”) is a collective term for AI-based systems that can be used to produce all kinds of results in an apparently professional and creative manner, such as images, video, audio, text, code, 3D models, and simulations (Bendel 2023b). The basis is text- or image-based input from users—so-called prompts. Human skills can be matched or even surpassed. Generative AI can support pupils, students, teachers, office workers, politicians, artists, and scientists, and be part of more complex systems.

Generative AI uses machine learning, especially deep learning, drawing on different data sources and training methods (Bendel 2023b). Reinforcement Learning from Human Feedback (RLHF) can be used to incorporate classification and evaluation by human workers. Their feedback is used to train a reward model, which in turn can be used, to take one example, to train a chatbot. The 2020s saw a veritable explosion of applications. The fact that many tools could be tested by the general public fueled the hype around generative AI. A broad public, media, and scientific discussion flared up.
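
To make the principle of RLHF more tangible, the following minimal sketch shows how pairwise human preferences can be turned into a reward model; the class, function, and dimension values are purely illustrative and do not correspond to any particular production system.

    # Minimal sketch of reward-model training for RLHF (illustrative names and sizes).
    # Human annotators compare two outputs for the same prompt; the reward model
    # learns to score the preferred ("chosen") output higher than the rejected one.
    import torch
    import torch.nn as nn

    class RewardModel(nn.Module):
        def __init__(self, feature_dim: int = 768):
            super().__init__()
            # In practice, this head sits on top of a large pre-trained model.
            self.score = nn.Linear(feature_dim, 1)

        def forward(self, features: torch.Tensor) -> torch.Tensor:
            return self.score(features).squeeze(-1)  # one scalar reward per sample

    def preference_loss(reward_chosen, reward_rejected):
        # Pairwise (Bradley-Terry-style) loss: push the chosen reward above the rejected one.
        return -torch.nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()

    # Toy usage with random features standing in for encoded model outputs.
    model = RewardModel()
    chosen = model(torch.randn(8, 768))    # outputs the workers preferred
    rejected = model(torch.randn(8, 768))  # outputs the workers rejected
    loss = preference_loss(chosen, rejected)
    loss.backward()

The trained reward model then provides the reward signal with which, for example, a chatbot can be further optimized via reinforcement learning.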

ChatGPT can generate texts of all kinds based on text- and image-based prompts, including study papers, technical articles, advertising texts, poems, and recipes, and thus serves not only as a content generator but also as a chatbot (Bendel 2023b). It can be connected to a text-to-speech system and integrated into a social robot, which thus acquires far-reaching natural language capabilities, or into search engines, as Microsoft and Google have done (Lardinois 2023). Image generators such as DALL-E, Stable Diffusion, and Midjourney produce visual content. Music generators like AI Music Generator create sound sequences that can be used in music pads, as well as entire songs. Other systems can develop new drugs or new biological and chemical weapons.

3 AI-based image generators

There are many AI-based image generators available that cannot all be covered here, especially since new products or new versions are constantly emerging. Instead, we will focus on three well-known, popular, and powerful generative AI tools: DALL-E 2, Stable Diffusion, and Midjourney (Borji 2022). They are available for or executable on laptops, tablets, and smartphones. Apps such as Lensa, which convert selfies into avatars, are not discussed. Classic image editing programs are also excluded, although they can play a role in post-processing the results. In addition, image generators can be integrated into image editing programs (as well as image platforms).

DALL-E (released in January 2021) and its successor DALL-E 2 (released in April 2022) are image generators from OpenAI that can create images from text-based prompts (https://openai.com/product/dall-e-2). The current program—its name evokes the famous Spanish surrealist painter Salvador Dalí—is based on Generative Pre-trained Transformer 3 (GPT-3), a language model also developed by OpenAI that can compose texts of all types, multiple-choice questions, etc. DALL-E was developed in conjunction with CLIP (Contrastive Language-Image Pre-training). It creates series of four images arranged side by side, which can be downloaded. Photorealistic and cartoon-like representations are possible, among others. Simple forms of image editing are also offered. The program is very simple to use.
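
Since CLIP is only mentioned in passing, a minimal sketch may be helpful: CLIP scores how well textual descriptions match an image, which is one way such systems connect language and pictures. The example below uses the openly available Hugging Face transformers implementation; the model identifier, image URL, and captions are merely illustrative.

    # Minimal sketch: scoring text-image similarity with CLIP (Contrastive
    # Language-Image Pre-training) via the Hugging Face transformers library.
    import requests
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Illustrative image URL; any local image file would work as well.
    url = "http://images.cocodataset.org/val2017/000000039769.jpg"
    image = Image.open(requests.get(url, stream=True).raw)

    captions = ["a photo of two cats", "a photo of a dog", "a surrealist painting"]
    inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
    outputs = model(**inputs)

    # Higher probabilities mean the caption fits the image better.
    probs = outputs.logits_per_image.softmax(dim=1)
    for caption, p in zip(captions, probs[0].tolist()):
        print(f"{p:.2f}  {caption}")

In the original DALL-E, CLIP was used, among other things, to rank generated candidate images against the prompt.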

Stable Diffusion, released in August 2022, is another image generator that can create images from text-based prompts (https://stablediffusionweb.com). Inpainting (reconstructing damaged or missing parts of an image), outpainting (adding content beyond the original image borders), and image-to-image translation guided by a text prompt are also possible. Stable Diffusion uses a latent diffusion model, a variant of a deep generative neural network. Photorealistic and other representations are provided. A separate field for negative prompts allows unwanted elements to be excluded explicitly. By default, square tiles with four images are produced, each of which can be enlarged and modified. The user can share the images with the Hugging Face community. The program is very simple to use.
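
How a text prompt and a negative prompt are handled in practice can be sketched with the open-source diffusers library, which provides Stable Diffusion pipelines; the model identifier, prompts, and parameter values below are purely illustrative, and the hosted web interface works analogously.

    # Minimal sketch: text-to-image generation with Stable Diffusion, including a
    # negative prompt, using the open-source diffusers library (illustrative values).
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")  # a GPU is assumed; on CPU, drop torch_dtype and .to("cuda")

    images = pipe(
        prompt="a photorealistic mountain lake at sunrise, mist over the water",
        negative_prompt="people, text, watermark, blurry",  # elements to be avoided
        num_images_per_prompt=4,   # mirrors the tile of four images described above
        guidance_scale=7.5,        # how strictly the prompt is followed
        num_inference_steps=30,
    ).images

    for i, img in enumerate(images):
        img.save(f"result_{i}.png")

The negative prompt is simply a second text field whose contents the model is steered away from, which is the feature referred to above.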

Midjourney is an image generator available since July 2022, which can also create images from text-based prompts (https://www.midjourney.com). Version 5.2 was released in June 2023. The program was created by the research institute of the same name, which was founded by David Holz. What makes Midjourney special is that it is embedded in an online community, and interaction takes place via a Discord bot. The output is square tiles with four images, each of which can be enlarged and changed. One can see the results of other community members and can also use them, e.g., download or modify them. The quality of the results is exceptionally high. The program is very difficult to use and, as conversations with users have revealed, this discourages some interested parties from using it.

Most image generators have had problems with the correct representation of hands, feet, ears, and eyes. For example, fingers were misplaced, limbs were mutilated, or pupils were of different sizes. However, clear progress can be seen here, for example, with Midjourney from version 5 onwards. As the author’s own tests have shown, prompts are interpreted quite differently and with varying accuracy. Some specifications are ignored or incorrectly implemented; requested elements are repeatedly omitted, and certain representations and perspectives are refused. According to the author’s experience, this is especially true for DALL-E and Stable Diffusion. The quality also varies considerably within one and the same tool; with DALL-E in particular, this results in extreme fluctuations.

4 Areas of applied ethics

Ethics is a subdiscipline of philosophy and originated two and a half millennia ago, initially mainly based on the works of Aristotle (Bendel 2019b). The object of ethics is morality. It examines, justifies, and questions what is considered good and evil, or just and unjust. In doing so, it focuses primarily on acting rather than thinking. Applied ethics relates to delineated topical fields and forms certain fields or specialties of ethics. In the present context, information ethics, technology ethics, media ethics, business ethics, and machine ethics are particularly relevant.

The morality of and in the information society is the object of information ethics (Bendel 2019b). It researches how we behave, or should behave, in moral terms when offering or using information and communication technologies (ICT), information systems, digital media, AI systems, and robots. It is central to computer, net, and new media ethics, and all other fields of applied ethics have to deal with it, considering that all fields of application are dependent upon ICT and other computer-related technologies. AI ethics can be seen as part of information ethics or as an independent field.

Technology ethics refers to moral questions concerning the use of technique and technology (Bendel 2019b). It can deal with automotive technology or arms technology as well as nanotechnology or nuclear technology. In the information society, where more and more products contain computer technologies, technology ethics is particularly closely linked to information ethics, and partly merges with it. According to another definition, information ethics is a part of technology ethics. However, its subject area is now so large that there is much to be said for treating it as an independent ethical discipline.

The object of media ethics is the morality of the media and in the media (Bendel 2019b). Both the methods of mass media and social media and the behavior of the users of social and digital media are relevant here. Their role as prosumers (i.e., producers and consumers) is also of interest. Automatisms and manipulations by AI-based technologies and systems come into focus, linking media ethics closely to information ethics. There are also close ties to business ethics, especially as the media landscape is in transition and economic pressures are strong.

Business ethics is concerned with morality in the economy (Bendel 2019b). The focus is on the individual who produces, acts, leads, and executes as well as consumes, and the company that bears responsibility towards employees, customers, and the environment. In addition, the moral implications of economic processes and systems as well as of globalization and monopolization are of interest (“economic ethics” as a comprehensive term). In the information society, business ethics is closely intertwined with information ethics.

The subject matter of machine ethics is the morality of machines, mostly that of partially autonomous and autonomous systems such as chatbots, voice assistants, certain robots, certain drones, and self-driving cars (Wallach and Allen 2009; Anderson and Anderson 2011; Bendel 2012a, 2019a). The concept of morality is discussed quite controversially in this context. However, it can be noted that autonomous systems will increasingly have to make decisions of moral relevance as they become more prevalent in society. This moral decision-making can be made explicit and justified, for instance, in annotated decision trees. This is nothing other than artificial morality.

5 An ethical discussion of image synthesis

In the following section, ethical questions that arise in connection with image synthesis are discussed. This involves a systematization of considerations in the literature as well as the author’s own considerations, whereby earlier work—for example from machine ethics—is continued (Bendel 2019a). Furthermore, the author’s tests and observations in the communities form a basis, although these can hardly be representative, and the AI systems are characterized precisely by the fact that they generate different outputs each time. The topic areas were developed through a literature review and by transferring typical problem areas of applied ethics.

The focus is on an ethical, not a legal discussion, although legality does play a part. The specialized areas of applied ethics involved are technology ethics, information ethics (including AI ethics), media ethics, and business ethics. In addition, within applied ethics, machine ethics is of importance, especially with regard to the meta-rules and the restrictions of image generators. Art ethics could add an interesting element to this discussion; however, it has so far hardly been established within the context of applied ethics (Fenner 2013). Nevertheless, it is mentioned briefly.

The author lists the risks of using image generators as well as the opportunities. The discipline of ethics does not only describe and justify limitations on action; it also addresses the unfolding of the person and the personality. This is the case, for example, when talking about the good life, which can include a facilitation of work as well as the enjoyment of beautiful images. The order of the topics is largely arbitrary. However, there are groupings, such as those relating to the facilitation of work and the substitution of work, which are dealt with in succession. A list of this kind can never be complete. Thus, topics such as identity theft, the erosion of trust in photography and design, or human lack of independence are omitted.

5.1 Copyright infringement and third-party use

With Midjourney (up to version 4) and other image generators, it was repeatedly noticed that images bore watermarks and fonts (Borji 2022). This indicates that image platforms with protected and paid content are exploited, and that images are illegally copied and used. In all likelihood, considerable amounts of data are ingested from sources where copyrights and reproduction rights exist. By no means is all content on the Internet free: different licenses are issued, such as Creative Commons licenses, and much material is inherently protected by the mere act of creating a distinctive work.

The use of protected material by generative AI can be problematic from a legal as well as an ethical point of view (Smits and Borghuis 2022). For example, an author’s creations are used without the author (or the platform) being remunerated or compensated. In addition, the author has no influence on the results derived from his or her own works; they may violate his or her ideas and values, or fail to meet his or her tastes. When protected material of companies and organizations is used, this runs counter to their interests, and they likewise have no influence on the resulting works. In this way, competitors could unfairly enrich themselves from the achievements of an author.

Information ethics and media ethics are devoted to copyright issues and creative processes surrounding image generators. Business ethics examines the shifts that result from the infringement of rights, including the aspect of abuse through the gratuitous and unlawful use of data. In addition, legal ethics is in play, especially as it involves boundary issues in law.

5.2 Copyright protection

Whether works created with image generators have copyright protection is the subject of numerous expert opinions and the case law of several countries. A March 2023 U.S. guideline specifies when and how much AI may be used in a work for it to still enjoy copyright protection (Copyright Office 2023). According to the agency, users do not have ultimate creative control over how these systems interpret prompts and generate material (Grüner 2023). Instead, according to them, these prompts function more like instructions for a commissioned artist. Accordingly, copyright protection is not possible here.

Works, on the other hand, that have been created with sufficient human participation should receive copyright protection (Grüner 2023). This can include, for example, subsequent editing of AI works with the help of image editing programs such as Adobe Photoshop. However, the modifications must be far-reaching enough for human participation to be sufficient—in other words, something independent must emerge from editing. Curation of AI works can also enjoy copyright protection if the work thus created as a whole can be considered a creative work in its own right (Grüner 2023). This includes the collection, arrangement, and discussion of texts and images.

Information ethics and media ethics are devoted to the protection and autonomy or non-autonomy of works. Business ethics examines the opportunities and abuses that arise from the use of image generators. In addition, legal ethics is in play, especially as it involves boundary issues in law.

5.3 Privacy and informational autonomy in prompts

In generative AI, and thus in image generators, a prompt is an input from the user from which the system generates an output (Bendel 2023a). A text-based prompt can contain words, letters, special characters, numbers, and URLs. To get the desired result, the prompt must be as unambiguous and comprehensive as possible. If dialogs are provided, as in text generators such as ChatGPT and image generators such as Visual ChatGPT, input can be provided several times in succession to customize the result. An image-based prompt can be any kind of image. The prompts traded on free and commercial prompt platforms are usually text-based.
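
As a minimal sketch of an image-based prompt, the following example combines an existing picture with a text prompt using the open-source diffusers library; the file names, model identifier, and parameter values are purely illustrative.

    # Minimal sketch: an image-based prompt (image-to-image) with Stable Diffusion
    # via the open-source diffusers library (illustrative names and values).
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    init_image = Image.open("portrait.png").convert("RGB")  # any existing image

    result = pipe(
        prompt="a watercolor illustration of the same scene, soft pastel colors",
        image=init_image,
        strength=0.6,        # 0 keeps the input image, 1 ignores it almost entirely
        guidance_scale=7.5,
    ).images[0]
    result.save("variation.png")

Whatever the supplied image shows, including personal data such as a recognizable face, becomes part of the input processed by the system, which is precisely the concern discussed in this section.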

Prompts reveal a lot about the user’s mindset, attitude, interests, tastes, and, ultimately, the person. In this respect, they are not unlike queries entered into search engines or into image searches on search engines and image platforms. In addition, prompts can contain personal data. This risk may be smaller with image generators than with text generators, but some users have images of celebrities generated and enter corresponding data. Accordingly, informational autonomy may be violated. This problem also exists with image-based prompts.

Information ethics and media ethics raise questions about data privacy, image protection, and informational autonomy when prompts are entered. Business ethics examines the opportunities presented by trading prompts through platforms. In addition, legal ethics is involved when considering the violation of image rights.

5.4 Responsibility and liability

Most image generators are semi-autonomous systems: users have to enter prompts for something to be generated, i.e., for the input to be autonomously processed and the output automatically produced. In principle, fully autonomous systems can arise, for example, when the prompts are created and transmitted by one machine and processed by another—a virtual one, as in the case of text, image, and video generators, or a physical one, as in the case of a robot. As with all (partially) autonomous machines, questions arise about responsibility (in the moral and legal sense) and liability (in the legal sense).

For example, an AI-generated image that contains misinformation and is mistaken for a reflection of reality could cause harm in certain contexts. Discriminatory images also raise issues. From an ethical perspective, AI systems and robots cannot bear responsibility (Schwarz 2022). Responsibility is always borne by humans, although the developers, facilitators, and operators may be many different people. On the user side, it may be equally difficult to identify a single bearer of responsibility, for example, when employees or departments in a company have different roles and rights.

Information ethics and media ethics raise questions about responsibility and liability in the context of image generators as autonomous systems. In addition, legal ethics is in play when considering the liability of an electronic person.

5.5 Stereotypical, discriminatory, racist, and sexist depictions

Stereotypical representations stand out in most image generators (Bianchi et al. 2022). For example, women are often presented with long hair, big eyes, large breasts, and a childlike face. A low neckline and scanty clothing show a lot of skin. Most artificial women are young and attractive. Men often appear rather angular in face and body while being confident and combative in posture. Some depictions can be understood as discriminatory, racist, and sexist. For example, some generators tend to depict people of color with lightened skin, or women in suggestive poses or with provocative facial expressions.

Programs may also have a cultural bias, as they are primarily based on the English language and the images used for training are largely from Western culture, meaning that generated images may reflect stereotypical ideas (Breithut 2022). A majority of European- or North American-looking individuals are shown. When prompted for young, attractive women, images of white girls with brown or blond hair and Western clothing are often generated. However, these are only results from the author’s own tests, which are not representative. There are other findings as well; for example, DALL-E Mini’s obsession with women in saris has been reported (Leffer 2022).

Information ethics and media ethics ask about the emergence of stereotypes and discriminatory, racist, and sexist representations in image synthesis. In addition, legal ethics is involved considering the potential for discrimination. Machine ethics can provide methods for limiting queries and outputs, such as meta-rules and restrictions.

5.6 False representations of beings and things

Forgeries and false information are nothing new in the field of images. Camera-produced images have been manipulated since the beginning of photography. A famous example is the retouching of Leon Trotsky out of a photograph (Dreier and Andina 2022). Faking occurred even earlier, in the nineteenth century: for example, the head of the American president Abraham Lincoln was mounted on the body of another politician (Finnegan 2005). In the twentieth and twenty-first centuries, the post-processing of models and influencers has become the norm, performed by the media or by the subjects themselves (Nymoen and Schmitt 2021).

AI-based image generators open up new possibilities and allow manipulation in seconds. For example, images of Donald Trump’s alleged arrest and Putin’s alleged genuflection to Xi were disseminated (Ong 2023). They were quickly recognizable as fake and still generated lively discussions. Boris Eldagsen won a 2023 Sony World Photography Award with his AI-generated image of two women (who never existed). According to his own statement, he only wanted to test the application, and he declined the prize (Dent 2023). Not only can arbitrary content be manufactured, it can also be disseminated instantly. In communities like Midjourney’s, one sees the products of other members directly; since one is aware of the context, the risk of being deceived is reduced there.

Information ethics and media ethics are interested in falsification, manipulation, and misinformation through image synthesis. Political ethics examines the implications for the political sphere, for example, the weakening of democracy, while business ethics examines those in the economic sphere. Legal ethics is also involved in terms of character assassination and defamation.

5.7 New forms of beauty and new beauty contests

Beauty is associated with perfection in the discipline of aesthetics. It is what we strive for when we take care of our body and face, and what we would like to see and experience in others (Siegmund 2022). Several beauty contests have been decided by AI (Jacoba et al. 2023). Humans—especially women—were thus exposed to the judgment of machines and had to meet their requirements, ultimately based, of course, on human taste. Conversely, in other contexts, humans judged the beauty of artificial humans, such as digital models like Miquela Sousa (Lil Miquela), Noonoouri, Shudu Gram, and Lightning, as well as other avatars. They play an important role in shows and in online and print media. Overall, we live in an “aesthetic society” (Manovich 2020).

The perfection of some representations of people creates a new kind of beauty contest. Young people in particular now compete not only against influencers, porn stars, singers, and actors, but also against artificial humans with their smooth skin, full lips, and big eyes. This can put them under stress and foster new fears of failure. In addition, people chase after unattainable goals. This has been studied not only in the Western context, but also, for example, in Pakistan (Qureshi 2022).

Furthermore, comparisons and beauty contests between artificial humans are conceivable, initiated by media and agencies (Marain 2019). In the process, not only could human characteristics of beauty be emphasized in the images, but new, artificial ones could equally be introduced. The deviations could be rejected by end users as part of reinforcement learning from human feedback, but also endorsed and thereby reinforced. The spread of the new beauty could perhaps gradually lead to a new ideal of beauty.

Technology ethics asks about the relationship between technology and beauty, specifically the generation of beauty. Media ethics investigates the perfection of images and the role of the agencies and media involved. Business ethics examines the possibilities and consequences of including avatars and (pseudo or quasi) holograms.

5.8 Falling in love with artificial persons

Some artificial persons could be considered very beautiful, either because they conform to or deviate from the common ideal of beauty, or because they satisfy individual ideas of it. Youth and attractiveness, associated with health and strength, are essential factors from an evolutionary biological perspective (Jones et al. 2021). They ensure survival and represent an advantage in reproduction. With technical means, an abundance of youth and attractiveness can be generated at will and fed into the appropriate channels—for example, social media platforms such as TikTok or Instagram.

The fact that people can fall in love with artifacts has been proven many times. These include dolls, social robots, and (pseudo or quasi) holograms (Bendel 2020). Pictures can also arouse the desire to get to know the apparent person better and to look at him or her from all sides. With some image generators, one can vary the image. However, the results often look like a completely different person, which may disappoint expectations. Thus, if one becomes seriously attached to an image, it is difficult to obtain multiple consistent representations. Further progress could solve this problem, and one could get convincing images from all stages of the apparent person’s life. Again, this can be disconcerting, especially when looking into the future.

The ethics of technology asks about the relationship between technology and beauty, especially concerning the generation of passion. Media ethics also addresses the problem of the perfection of images and the role of agencies and media involved. Business ethics examines the possibilities and consequences that arise from the inclusion of avatars and holograms.

5.9 Acting out the imagination

Image generators make it possible to create representations that correspond to one’s own ideas and desires. With them, in other words, one can live out one’s fantasy and even let the fantasy become reality, even if only virtually. This refers to the appearance of people, but also to fantasy figures of all kinds, as well as landscapes, cities, etc. In this sense, the image generator is a generator of beauty and utopia. Many artists and photographers chase the same experience (Eco 2010).

Admittedly, the creation of people by the user according to his or her own wishes has a more sinister side. For example, it can involve problematic representations, either because the figure is problematic in itself, such as being nude, or because it is interwoven with problematic actions and contexts. Such phenomena also occurred in Second Life, where child avatars were “abused” (Reeves 2013), with mostly adult users behind them (Bendel 2012b). The majority of image generators try to prevent such outcomes by rejecting explicit prompts, although “hacking” is repeatedly used to circumvent these restrictions.

Information ethics and media ethics ask questions about virtual abuse. Business ethics examines the opportunities presented by virtual pornography. In addition, legal ethics considers whether use of child avatars should be punishable.

5.10 Rejections and restrictions

Image generators—like text generators—have restrictions regarding their prompts. If prompts contain certain keywords or statements, they will not be executed. The programs reject them, for example, citing that explicit or sexual depictions are not desired. It is unclear whether data on these rejections—beyond the user’s history—are collected and analyzed. Stable Diffusion attempts to prevent sexual content but allows other problematic content: “it aims to prevent sexual content, it ignores violence, gore, and other similarly disturbing content” (Rando et al. 2022).

This means that the moral rules of the manufacturers are applied without these necessarily being in the interests of the user. Some manufacturers point out (in the case of text generators, also in the outputs themselves) that different moral concepts exist in different cultures and among different individuals, and that these must be done justice to. However, one could then specify in one’s profile what one wants and tolerates. Another reason is that those responsible for image generators do not want certain content to be generated at all. For example, they do not want the image generators to become porn generators through the preferences of their users. This is rather understandable, because a company has the right to position a service or product in a specific area.

Information ethics and media ethics are interested in the issue of censorship of image generators in this context, while business ethics is specifically interested in the issue of denial of service. In addition, legal ethics is in play. Since moral rules are applied, this can also be considered in the context of machine ethics (Anderson and Anderson 2011; Bendel 2019a); manufacturers thus practice machine ethics implicitly or explicitly.

5.11 Production of kitsch

Kitsch “is something that appeals to popular or lowbrow taste and is often of poor quality” (Merriam-Webster 2023). The American dictionary Merriam-Webster says about the origin of the word: “Kitsch is an early 20th-century borrowing from German, and it refers to things in the realm of popular culture that are tacky, like car mirror dice, plastic flamingos, and dashboard hula dancers.” (Merriam-Webster 2023) At the same time, many people love kitsch and are strongly connected to it emotionally (Siegmund 2022).

Many results appear kitschy to the trained observer. This will be due in part to the data used, and in part to the evaluations and classifications in the field of reinforcement learning from human feedback. It is probably exactly this kitsch that appeals to many people and makes them use the resulting images privately and commercially. The photo-realistic images of Midjourney are partly reminiscent of Instagram and the filters used there.

The flood of kitschy images could create new viewing habits. The trained eye and informed taste are lost. Of course, this can also be an opportunity to question old viewing habits. Kitsch obviously has an effect that hardly anyone can escape, in the performing arts as well as in literature. The question arises as to whether classical classifications in art still apply at all in the case of AI.

Media ethics examines the moral aspects of the change in taste caused by a flood of new images. Business ethics is interested in the new markets that are being created as a result and that are displacing others. Art ethics is also involved when considering the question of classical classifications in art (Fenner 2013).

5.12 Dependence on corporations

AI-enabled tools and generative AI are undoubtedly driven by corporations, such as Microsoft, Alphabet, and Meta. One often needs enormous computing power and a corresponding budget for training and development. Still, there are smaller vendors that have shaken up the market. For example, the programs from the German company DeepL are very powerful, both for translating (DeepL) and for correcting and editing (DeepL Write). In addition, the German company Aleph Alpha has developed a powerful language model. Stable Diffusion uses developments from universities, among others.

AI-generated images are very likely to change the market. Image platforms could lose customers on both the creator and the user side. At the same time, they are integrating generative AI (Growcoot 2022). Jobs for illustrators and photographers could decline. Instead, a few IT and AI companies will dominate the market with their offerings—or savvy users who can produce the content they want, and who in turn depend on these producers and their business models.

Information ethics and media ethics ask about the moral implications of the changing use of images and the changes in the images themselves. Business ethics examines the dependencies that arise in the course of image synthesis and the possible formation of monopolies.

5.13 Change of business models

In the case of several generators, a certain number of images or a certain amount of generation time is free; for additional content, one must then pay, for example, as part of a subscription. This can be interpreted as attracting users and then making them dependent, which is a common model in the business world (Houde et al. 2020). As with other products and services, many users do not actually need the tool, but are persuaded by the hype, pressure from colleagues and friends, or other factors to use it excessively.

It is unclear how business models will evolve and how fees and costs for single use or subscriptions will develop. For freelancers and businesses that rely on these services and abandon existing contracts and partnerships, this could present challenges and even existential problems at some point. However, private individuals and companies—such as media houses—may also benefit from the advantage of scalability (Hetler 2023). They can create graphics as needed and do not have to enter into contracts with self-employed persons or create jobs.

Information ethics and media ethics are devoted to this problem area in relation to peer pressure in the use of image generators. Business ethics examines the methods used and the dependencies and shifts that arise.

5.14 Facilitating and changing work

Image generators make work easier and take it away. This applies to experts such as designers and draftsmen (Engenhart and Löwe 2022), but especially to amateurs, who can create high-quality, appealing works with the help of suitable prompts without any relevant skills. This opens up completely new possibilities for them. They can illustrate documents, articles, and books, with relatively good control over the output, even if the best prompts do not always produce the desired result. They can likewise create independent images and works of art.

The work of experts may change considerably in response to this (Davenport and Mittal 2022). They need appropriate prompts and thus linguistic skills, and they need to rework, rearrange, etc., the artifacts to fit the context. For some, all this will make the work more productive, for others, less productive. Engenhart and Löwe (2022) show extensive possibilities for renewing graphic design; it is clear from their comments that the results can be changed and improved in some respects. Presumably, the work no longer has the same completeness and value as before, and the illustrator is only responsible for part of the process. In the case of the photographer, the work changes completely, because he or she no longer photographs an object of reality when creating photorealistic images, but rather thinks up this object and lets the machine create it. Advertising and marketing departments have to judge images and elements according to new criteria and can also use the generators to create figurative marks, for example.

Information ethics examines the power or ineffectiveness of prompts as compressed inputs of data and information. Media ethics is interested in the shifts in the competencies of media creation and media use. Business ethics examines the changing nature of work and labor relations.

5.15 Substitution of work

For professional graphic designers and illustrators (to give two examples of professional groups), there are significant risks to their professions. For a long time, creative professions were considered protected, but this is no longer true for either text or images (Davenport and Mittal 2022; Arielli and Manovich 2022). Depending on the business models and on the behavior of private individuals and companies, they will retain only some of their work or lose it altogether. The question is whether they will find new niches and whether they will still have the upper hand, for example, in longer pictorial narratives such as comics. However, it is to be expected that more and more image generators will be able to meet even these requirements—which form only a small market anyway.

Marketing and advertising departments, as well as public relations and communications departments, are also threatened. Image series, visual campaigns, corporate design templates, etc. can be realized at the push of a button, so that specialists in this field are not necessarily needed. Logos can be generated and edited. When video generators become established, the pressure will also be directed at commercial filmmakers, directors, and camera operators.

Media ethics addresses this problem of the shift in competencies in media production. Business ethics examines the replacement of labor and the emergence of unemployment in the creative and administrative professions.

6 Summary and outlook

This paper first gave an introduction to generative AI and then to applied ethics in this context. Then, three image generators were briefly introduced: DALL-E 2, Stable Diffusion, and Midjourney. This was followed by a detailed ethical discussion of image synthesis in which not only risks but also opportunities were addressed. Each topic was placed in the context of applied ethics, more specifically the fields of information ethics, technology ethics, media ethics, and business ethics, as well as machine ethics.

Ethical and, to a lesser extent, social and legal aspects were dealt with. These can be assigned to areas such as copyright, data protection and informational autonomy, responsibility and liability, deception and manipulation, changes in the standard of beauty and the concept of art, dominance and censorship, dependence on providers, and the support or substitution of work. Some opportunities as well as numerous risks became apparent. The transformation potential for an important area of the creative professions became obvious. It is, in a sense, a second wave of digitization that is taking hold of the image: first the image itself became virtual, then the activity leading to the image, with the partial loss of a human creator.

Other related points are also of interest. For example, the mostly text-based prompts challenge us to pay new attention to language as a cultural tool. What is needed are extremely precise descriptions, which can train the user’s language skills. In addition, imagination is required as to how the text can become an image that satisfies individual or professional demands. Ultimately, it is a matter of new connections between text and image. Previously, it was common to describe an image after it was created; now one describes it before and for the purpose of its creation.

All topics could be assigned to areas of applied ethics, which have specialized in individual points and developed their own vocabulary (and sometimes even their own methods). Of course, terms and concepts of general, empirical, and normative ethics are equally relevant. Thus, one can repeatedly invoke human dignity, which can be violated by images, or equality and equal rights. Concepts such as discrimination, racism, and sexism were mentioned in connection with several fields of applied ethics, but they are of course anchored in ethics as a whole and are also the subject of other areas such as political ethics (which was mentioned in only one place).

Machine ethics was also included. It is often directed at physical machines such as care robots or household robots, i.e., hardware robots. However, the paper also showed the potential for software robots to be included here. Chatbots have been created that can recognize and adequately respond to user problems (Bendel 2018), as has a voice assistant that could show empathy and emotion during a Mars flight (Spathelf and Bendel 2022). In the context of generative AI, a problem arises that is more familiar from search engines: an input or request is rejected because moral rules have been programmed into the system. This certainly makes sense to prevent abuse, but machine ethics thus becomes open to questions of censorship.

The results of image generators are already of astounding quality and expressiveness. It can be assumed that this development will continue and increasingly encompass the moving image, i.e., the video sector. Here, further ethical questions arise that need to be addressed, among them some that relate to audio, such as the synthesis of the voice (Bendel 2017). The integration of images and videos into augmented reality and virtual reality is a further step that must be monitored and accompanied. New ethical guidelines and legal regulations will be needed to do justice to all interests, to promote opportunities, and to avoid harm.