1 Introduction

The release of the AI software ChatGPT on November 30, 2022, marked a turning point in the relationship between society and technology. The accessibility of artificial intelligence (AI) to a mass audience has unforeseeable consequences for everyday life, the economy, education, and the democratic system. Not least, the unsuccessful call for a global moratorium on the use of AI by key players from the AI industry itself (see Ienca 2020) shows that a battle over the interpretation of the future role of technologies in society is in full swing. Social and political debates often focus on the disruptive properties of AI technology, which could cause people and states to lose autonomy and control. Various arguments are advanced that describe the prevention of such developments as possible or impossible.

Any regulation of AI development also depends on which argumentative positions prevail in social discourse. Widespread assumptions about the existent or non-existent controllability of AI development influence the extent to which corresponding regulations appear sensible and are demanded. The news media are key players in the discourse on AI, as they both pick up on and shape social sentiment. Therefore, an analysis of the relevant discourses in popular news media is a good way to gain knowledge of society's views on the control and controllability of AI development. This article uses a discourse analysis based on the sociology of knowledge to reconstruct how daily German newspapers frame the relationship between AI and social development. With a view to the historical discussion on technological and social determinism, the article reveals the assumptions about the relationship between power, politics, and AI as a technology upon which media reporting is based. This provides insights into the widespread positions that influence citizens in forming their opinions on AI and its regulation.

2 Background: the technical context of recent AI development

AI has been researched for several decades; today, a distinction is made between predictive AI and generative AI. Predictive AI derives predictions of future developments from existing data. This technology has been used for some time in the context of machine learning in areas such as marketing (Ravindar et al. 2022) and healthcare (Wänn 2019). Generative AI produces new data from existing data (cf. Banh and Strobel 2023); it is the technology behind ChatGPT and the image generator Dall-E. By analyzing existing images and texts, generative AI learns what texts and images usually look like. When prompted to produce text or images, the software is then able to generate text and image data that appear to be human-made. A central danger commonly attributed to this technology is the lack of verifiability as to whether situations depicted photographically actually took place and whether texts actually originated from the claimed author (cf. Ferrara 2024). The data material upon which generative AI relies for its learning process is also relevant. For example, if the learning material contains prejudices about social contexts, these are reproduced in the AI-generated materials (cf. Sun et al. 2024). The problem pointed out here is that the AI companies are responsible for controlling the AI learning processes and, therefore, have the power to shape social reality. At the same time, these learning processes are so technically complex that it is difficult for external actors to follow both the internal development of the technology and its impact on society (cf. Pasquale 2019). Another distinction is between weak and strong AI. Current AI technology is described as weak because it has no interests or awareness of its own (Bartneck et al. 2021: 10). A strong AI, by contrast, could be directed against the interests of users, people, or society because of its own interests.
However, the possibility of such an AI is controversial, and its development is not considered certain (Fjelland 2020). Discourse analysis nevertheless shows that a conflict between the consequences of AI and societal interests exists even in the case of popular, weak AI. These discussions encompass the criticized power of AI companies, fraud and data protection issues, and politically motivated manipulation.

3 State of research

The general relevance of media discourse for technological developments was made clear by Donk et al. (2012), who illustrated the importance of media reporting to society's approach to technological leaps. Druckman and Bolsen (2011) showed that factual contexts presented in the media, as well as values, influence the formation of social opinion on new technologies. Media discourses also determine how society thinks about AI (Kajava and Sawhney 2023). The social significance of AI is highly contingent because AI has only been discussed in public for a comparatively short time. Therefore, influential political and media actors have the power to charge the discourse with meaning (Jobin and Katzenbach 2023). As stated by Richter et al. (2023), media discourses offer citizens a basis for the formation of "imaginaries of AI." Using the example of AI, Floridi (2019) noted that, depending on the actor, ideas about society's future with new technologies can be either positive or negative. Society and the media can disseminate narratives about technology that are not based on factual analysis and that instead construct a "technomyth" (Braybrooke and Jordan 2017). The varying views on the extent to which technological development can be controlled by social actors influence the dominant narratives on the future of a society under the influence of AI (Williams 2006).

Previous empirical studies have made the general observation that the topic of AI is intensively addressed and internationally commented upon in mass media reporting (Brause et al. 2023; Chuan 2023). As Bareis and Katzenbach (2022) found in their analysis, newspapers often use future narratives to present the connection between AI and society, regardless of the national context. However, the specific content of the discourses and narratives varies depending on the country (Ossewaarde and Gulenc 2020). Prior to the publication of ChatGPT, AI was often discussed as a general tool for solving practical social problems (Katzenbach 2021; Nagl-Docekal and Zacharasiewicz 2022; Ouchchy et al. 2020). In contrast, Singler (2020) pointed to a religious connotation that she observed in media discourses on AI. Long-term analyses have shown how narratives about AI have changed over the years. For example, Nguyen and Hekman (2022) showed that in the United States, AI was once primarily discussed in terms of its benefits and was later framed with the narrative of a technological threat. In the specific German context, studies are available for various discursive contexts. Lemke et al. (2024) found that in addition to the leading media, politicians also discursively emphasize the technological characteristics of AI in a positive light. Köstler and Ossewaarde (2022) described how the news media in Germany negotiate society's future as determined by AI. Carstensen and Ganz (2023) showed in their analysis of the discursive connection between AI, work, and gender that the media also discuss specific topics in connection with AI and society. To date, there have been no studies on the discursive negotiation of social AI regulation and on influences of technological and social determinism in Germany.

4 Sensitizing concept: technological and social determinism and AI

The discussion on technological determinism is a historical and ongoing debate in social and technology research (see Wyatt 2023; historically, see also Heilbroner 1967; Smith and Marx 1994). From the perspective of technological determinism, technology is characterized by the fact that, due to its nature, its influence on society is difficult or impossible to control (cf. Sparks 2015). As stated by Jacques Ellul (1967), a representative of a critical theory of technological determinism, this uncontrollability leads to technology having an impact that goes against people's interests and deprives them of their autonomy. This can affect developments within society; however, the development of technology can also influence the relationship between states (Branch 2018). The development of a society-determining technology is justified in different ways. In contrast to critical perspectives such as Ellul's, a positive social function of technologies is often emphasized. The widespread thesis of competing technologies assumes that technologies are fundamentally necessary to fulfill human needs and that the most suitable ones prevail in society (Cowan 1990). This is followed by a "lock-in" of the prevailing technology: it then becomes an essential and unquestionable component of social processes (Arthur 1989). Another, socially pessimistic thesis holds that modern societies tend toward unconscious self-subjugation to abstract powers and that technology serves this need (Postman 1993). This process is accompanied by a "techno-optimism" in society that is based on an irrational superstition (Wilson 2017). From this perspective, technological determinism can at the same time be used to justify even ethically controversial social developments as an inevitable consequence of technological development (Wyatt 2008).
Irrespective of the actual deterministic influence of technology itself on society, some authors point to the realization of this influence by dominant actors in business and politics. As Dafoe (2015) and Demichelis (2022) pointed out, under capitalism technologies prevail if they are economically viable; this is actively promoted by the relevant corporations and is difficult for society to stop insofar as capitalism represents its shared ideology. In this respect, technological determinism overlaps with a "corporate determinism" (Natale et al. 2019) or a "techno-capitalist determinism" (Demichelis 2022: 35). The thesis is also widespread that technological development deterministically drives a decline of democracy in favor of economic factors (Zuboff 2019; McChesney 2013).

The concept of technological determinism is contrasted with that of social determinism. The latter perspective is based on a social constructivist paradigm and assumes that the development of technology is largely determined by human actions and decisions (Bijker et al. 2012; Pinch and Bijker 1984; Joyce et al. 2023). Political actors (Yoshinaka et al. 2003), citizens, or society in general (Winner 1993) are understood to be responsible. Beyond abstract decisions on how to deal with technology, this perspective asserts that the everyday use of technology also shapes technology's influence on life and social development (Oberdiek and Tiles 1995; Ropolyi 2019). It is particularly relevant here that, from the social determinist perspective, various actors with different interests and power resources attempt to determine the significance of technologies (cf. Pinch and Bijker 1984: 416 ff.). Thus, even from the social determinist perspective, individual actors may, depending on the context, have little influence on the relationship between technology, society, and their own lives.

The contrast between the technological and social determinist positions is often resolved in favor of a mediating perspective in which the characteristics of technologies and social and individual action are seen as mutually dependent (Murphie and Potts 2003; cf. Skoric et al. 2015). Concrete social developments and phenomena can always be located on a continuum between social and technological determinism. The social influence of technologies is determined by the extent to which their latent deterministic character is reflected and, if necessary, circumvented by social actors (Swer 2023). This also applies to the influence of technology on the democratic system. Technology is viewed as having an antidemocratic potential, but in principle, this development can be slowed down by active social action (Feenberg 2003). Since individual actors can exert more or less influence on technological development, this article reconstructs the individual and collective actors who are involved in any deterministic relationships according to the newspapers analyzed.

5 Methodology

A discourse analysis following the sociology of knowledge approach to discourse (SKAD) was carried out for this article. SKAD is based on a social constructivist paradigm and assumes that social actors compete for the interpretation of social phenomena and contexts through discursive contributions (Keller 2011: 49). As actors with a wide reach, media organizations provide interpretations that other actors, such as media recipients, can accept or reject (cf. Ekström 2023). Since commercial news media base their reports on common opinions within their target groups, their reporting provides insight into the spectrum of views on a topic that is widespread among those groups. For this article, commentaries, opinion pieces, and editorials in the newspapers Süddeutsche Zeitung (SZ) and Frankfurter Allgemeine Zeitung (FAZ) were analyzed. The SZ is generally assigned to the left-liberal ideological spectrum (cf. Eurotopics 2024a), while the FAZ is assigned to the conservative spectrum (cf. Eurotopics 2024b). Taken together, their analysis offers insights into popular discourses of the bourgeois part of society. All articles from the print versions and websites of the newspapers published between November 11, 2022, and April 30, 2023, were examined. This period covers the publication of ChatGPT up to the publication of the AI-generated image of the Pope and the controversial discussions surrounding it (cf. New York Times 2023). The analysis thus portrays two important milestones in the development of AI and its discussion in the media. The total sample comprises 65 FAZ articles and 48 SZ articles. In accordance with the SKAD specifications, only selected articles were analyzed in depth; these were determined in parallel with the analysis by means of minimal and maximal contrasting until theoretical saturation occurred (cf. Keller 2018: 39).

A phenomenon structure analysis of the entire sample was first carried out in accordance with SKAD (Keller 2011: 58). The material was coded using the following six structural dimensions:

  • AI developers as social actors

  • AI as a social actor

  • Society as a social actor

  • Structuring and determinisms

  • Images of the future

  • Options for action

The phenomenon structure analysis made it possible to create an overview of the various individual aspects that characterize AI development in the discourse. An interpretive scheme analysis was then conducted in accordance with SKAD (Keller 2018: 32 f.). Individual newspaper articles and sections that showed a particular density in the interpretation of the AI phenomenon within the phenomenon structure analysis were sequentially analyzed in depth. This made it possible to identify the different ways in which the newspaper articles linked aspects of the phenomenon structure and, thus, to characterize deterministic influences in the relationship between AI and social development. In accordance with the objective of the interpretive scheme analysis, these interpretive schemes were labeled with catchy terms that represent the underlying views in condensed form (cf. Keller 2018: 32 f.). Despite the different ideological orientations of the SZ and FAZ, no differences were found in the characterization of the AI phenomenon that would be large enough to justify a separate presentation of the results for each newspaper. This observation is consistent with previous AI-related discourse analyses (Köstler and Ossewaarde 2022). Therefore, the results are presented together, and any stronger weighting of certain understandings of phenomena or interpretive schemes is identified. The results of the phenomenon structure and interpretive scheme analysis are presented in the Analysis section.

6 Analysis 

6.1 Phenomenon structure and interpretive schemes in SZ and FAZ

The phenomenon structure analysis showed what the investigated news organizations understood to be the influence of various actors on the social implementation of AI. Society, including its political actors, the AI companies, and AI technology itself stand in different relationships of influence to one another. These relationships are sometimes explicitly named in the media portrayals and sometimes conveyed more implicitly, for example through the vague presentation of images of the future. In conjunction with the postulated or denied possibilities of controlling these relationships, several interpretive schemes emerged that describe the positions of the media discourse regarding possible determinisms. Table 1 shows the results of the phenomenon structure analysis that became the basis for determining the interpretive schemes.

Table 1 Phenomenon structure analysis

6.2 Technological history and techno-capitalist determinism

The SZ and FAZ frame the AI companies as central players in the development of AI that are largely driven by economic interests (FAZ 22, SZ 9). When the SZ refers to an "arms race" (SZ 23) between the companies, this reveals a dynamic of AI development that originates primarily in the relationship between the companies. When the FAZ describes ChatGPT as unexpectedly endangering Google and shaking up "market relations" (FAZ 8), this reveals a discursive construction of an economy in which large corporations compete against each other, while other potential actors, such as politics or civil society, have no central influence. Accordingly, the newspapers examined also depict a civil society, and individual citizens, who can do little to counter the social influence of the economically calculating corporations (SZ 7). It is pointed out that the current development corresponds to previous technological developments in Silicon Valley:

Once again, maximum concentration of power goes hand in hand with minimal transparency. At OpenAI, the company behind ChatGPT, only the name is reminiscent of openness. What began eight years ago as a non-profit and participatory venture is now a profit-oriented black box—just like almost all companies that develop AI. (SZ 3)

In this respect, the problematic social influence of AI is based not on the technological properties of AI applications alone but on the combination of specific technological aspects with the power and creative will of corporations. Given its strong potential for monetization, the technology itself appears to be a relevant driver of AI development for economic purposes.

A gold-rush mood has broken out among investors. Artificial intelligence is one of the few areas that seems to have been spared from the current misery in the technology sector. While corporations, such as Meta, Amazon, Microsoft, and Alphabet, are announcing large-scale redundancies and venture capital companies are holding back on investments, AI specialists are expanding and have little trouble finding investors. (FAZ 44)

The newspaper articles thus create the image of capitalism as an independent force that makes technological developments economically exploitable. At the same time, the actions of AI companies also appear to be an intentional force. This is underlined by the newspapers' descriptions that economistic positions within the corporations prevail over others (SZ 41). The scenario depicted in this way can point simultaneously to an economic determinism that influences the AI actors and to a social determinism in which the most powerful actors determine the design of AI. Either way, the consequence for society is that the use of AI serves it less well:

AI costs little, works around the clock, does not form unions, and does not resign. In a capitalist system, many companies dream of such employees. AI will not replace people—but managers will replace people with AI. (SZ 3)

For those hoping for "fully automated luxury communism" including full state AI provision, as some activists do, this is not going to happen. Commercial interests are too strong for that. (SZ 37)

When even the FAZ, a traditionally business-friendly newspaper, states that industry "has left no stone unturned for two decades to make our livelihoods dependent on its products" (FAZ 22), it is evident that the media discourse makes clear attributions to actors. At the same time, the corporate leaders in question are said to pursue their own ethical ideas for shaping the world, albeit driven by an economic impetus (SZ 34, SZ 41). The technical characteristics of AI applications enable the corporations to compensate for any ethically and normatively problematic effects of AI by shaping the discourse in the opposite direction: AI "is finally providing the tech industry with a grand narrative again, according to which the golden future is being created by tech companies" (SZ 37). In the newspapers examined, this relationship is influenced by the fact that the AI companies are said to have a head start in terms of knowledge about how the complex technology works, while the rest of society is suddenly confronted with applications such as ChatGPT and Dall-E (SZ 7). A potential social defense against corporate power would fail because the corporations would be able to defend themselves legally (FAZ 63). Legal intervention in the companies' further development of AI also appears difficult in the media discourse, which points to the unclear legal responsibility and liability of corporations for the social consequences of their technologies (FAZ 20). The corporations also have exclusive knowledge of the content on which the learning processes of their AI applications are based (FAZ 25). Thus, the emergence of discriminatory social developments cannot be regulated by sociopolitical actors (FAZ 62). At this point, a social determinism emerges to the effect that the AI companies, as social actors, are forcing the social use of AI and influencing its content.
When the SZ describes the approach of AI companies as "Deliver first, maybe improve later" (SZ 6), this reveals the image of an industry that can exercise its power beyond the reach of expected sanctions. In addition to civil society, this relationship of dependence also applies to the political system. On the one hand, the newspapers call for political and legal regulation of AI development (FAZ 62); on the other hand, little trust is shown in legal regulation when, for example, the FAZ directs an "appeal" to companies to act responsibly (FAZ 1). This is underlined when the FAZ writes:

After all, the fight against disinformation is not a new one, and many institutions have been working for years to find solutions. However, current technological developments have made it even less clear where these solutions are to come from in the short time available. (FAZ 1)

The overall picture that emerges is one of low confidence in the possibility of regulating the corporations. However, this discursive perspective is not coherent across the newspaper articles. For example, the reference to possible concerted action between science, politics, business, and society (FAZ 63) shows that, depending on the author, a fundamental influence on the future development of AI is seen as possible.

The question of society's influence on AI development is further addressed by presenting AI technology as an expression of a technological history that has been going on for some time. According to this view, people have for centuries been inventing technologies that required them to adapt their everyday lives (FAZ 6) and that had effects over which they did not have complete control (SZ 1). Unclear authorship of documents also existed before AI, owing to other technologies (FAZ 55). Thus, technological and social history does not explicitly appear to have a determinative, that is, teleological, influence on the current role of AI; however, the picture that emerges is of a historically continuous development of technology within which the current actions of the corporations can also be situated. When the SZ points to the exponentially accelerated development of AI (SZ 24), which only the corporations can use efficiently for their own benefit, society's ability to act against these developments appears even more limited. At the same time, the newspapers point out that sometimes even the AI companies do not fully understand the technical processes and modes of action of their technologies (FAZ 64). This creates the discursive impression of an overall fragile situation in which society is exposed to a technology that is only barely contained by selfishly motivated economic actors.

Overall, the discursive context depicts a situation in which the historically unfolding development of complex technologies encounters a capitalist economic system and corresponding economic actors. These actors continuously work toward an economic determination of society and use the latest technologies to this end, whereby the extraordinary complexity of AI technology and the difficulty of legally regulating AI development lead to an accumulation of power in the AI industry. The central interpretive scheme is named here historically conditioned techno-capitalist semi-determinism. The scheme is referred to as semi-determinism because the discourse leaves unclear to what extent the economic actors act intentionally or as a logical consequence of the capitalist system. The AI companies' own difficulty in controlling AI also raises the question of the extent to which the technology will exhibit uncontrolled and, therefore, deterministic behavior, at least in the future.

6.3 AI technology and the (social actors in) society

The analyzed media discourse describes an interference between specific technical characteristics of AI applications and social needs and behaviors. According to the newspapers, the beneficial or dangerous effects of AI technology on society only emerge through its concrete application in society.

There is no such thing as evil technology. What matters is how people use it. This can be seen these days in the news about artificial intelligence (AI): the future belongs to adaptive software that can process gigantic amounts of data. The only thing that remains to be seen is how unpleasant this future will be. (SZ 22)

The naming of a potentially "unpleasant" future reveals a negative view of AI development. The discourse goes on to describe the ability of AI applications to manipulate people (FAZ 59; SZ 3). The technology encounters people who are easy to manipulate; this is also linked to people's hubris in the face of the power of technology.

The past twenty years have shown that many users were not up to the addictive mechanisms, behavioral manipulations, and deceptions of the digital world. This has often overshadowed the achievements of digitalization in recent times. When it comes to the development of artificial intelligence, which accelerated exponentially around two years ago, the public still gloatingly believes it is safe. (SZ 5)

An expression of technological determinism can be seen in the fact that people are at least partly involuntarily subject to the consequences of technology. This is hinted at when the SZ writes that people are “not up to” the effects of technology. The FAZ’s reference to society’s inability to develop digital literacy in a timely manner (FAZ 34) extends the pessimistic analysis to the future, with the newspaper also stating that digital literacy would not help against susceptibility to manipulation (ibid.). Such examples of discourse show the difficulty of clearly identifying any forms of social or technological determinism. The possibility of inhibiting manipulation by legal and technological means, at least in the future, is not fundamentally disputed but appears to be less realistic against the backdrop of the discursively presented accumulation of power on the part of AI companies as well as the diverse actors who, according to the newspapers, use AI for the purpose of manipulation (FAZ 52).

The analyzed discourse also diagnoses a pre-existing skepticism toward the truthfulness of media-mediated content in contemporary society. The SZ writes that "the terms 'fake' and 'truth' have become a part of the discourse, and a rigorous mistrust of the media has become commonplace in certain circles" (SZ 7). According to the newspaper, AI is nevertheless being accepted by society because it fulfills human needs for relief in work and everyday life. For example, AI applications in areas as diverse as child education or the justice system would bring everyday benefits to the extent that the people involved could be expected to use the technology even if they had scruples (SZ 21, SZ 32). The newspapers place AI in the context of the function of the Internet and write: "Everything that makes life easier in the digital space, that costs little or nothing and saves time and personnel, will prevail" (SZ 9). AI once again proves to be a less exceptional phenomenon, as it follows a technological and social history. The fundamental social framing by a capitalist economy is accompanied by the equally persistent human tendency to respond to technology. However, the discourse remains unclear as to what extent human convenience and the search for everyday relief should be seen as a consequence of capitalism, of technology, or as an anthropological constant. For example, when the FAZ describes how people seek self-affirmation when using digital media and pay less attention to sources (FAZ 34), this points to a psychological phenomenon that, depending on one's world view, may be part of human nature in general or a consequence of the egocentricity of capitalism. The question of the extent to which people consciously decide in favor of their comfort and thereby weigh or ignore other considerations, such as ethics, also remains unanswered. This is relevant, for example, against the backdrop of the social-psychologically oriented post-Marxist theoretical tradition.
From this perspective, members of society in modern capitalism find themselves in a “delusional context” (Adorno 1997 [1970]) and alienated from their fellow human beings due to their orientation toward consumerism and self-will (cf. Plass 2016). At the same time, citizens are ascribed at least partial responsibility for their behavior (cf. Vázquez-Arroyo 2016: 181 ff.). By not explicating this relationship in more detail, the SZ and FAZ leave it to their readers to assess any relationship between determinism and freedom of action.

As a further AI characteristic, the discourse describes the technologies as having no morals of their own (FAZ 62). The FAZ states that human values are the basis of morality and cannot be substituted by AI:

The accuracy of data and information, which this technique equates with mathematical probability, is justified by the frequency of its repetition. But stochastics produces neither truth nor human evaluations. (FAZ 63)

According to the discourse, AI technologies promote a rationalistic mode of social relations in society. The SZ writes about an imaginable "algorithm that automatically pushes the unemployed into jobs and the total screening of migrants. The interests of those affected would be irrelevant to the machine—and, therefore, also to the bureaucracy" (SZ 22). This case exemplifies the relationship between social and technological determinism because the diversity of actor relationships becomes clear. Society, the political system, or individual institutions, such as the employment office, can decide on the use of AI, with the consequences affecting individual citizens or social groups. Depending on the context, those affected cannot escape this influence. Thus, AI offers different actors different capacities for shaping social conditions.

Overall, the interference between human needs and technological peculiarities as well as an AI-related ordering of society in terms of social relations and power structures stands out in this discursive argumentation context. Once again, a clear technological or social determination cannot be established due to the unclear human freedom of action; however, the social importance of AI tends to be predetermined due to its usefulness for everyday interests. Therefore, the interpretive scheme for this context of argumentation is named the technological semi-determinism of need satisfaction and social restructuring.

6.4 Meta-political framework conditions for AI development

SZ and FAZ also describe further overarching political processes as relevant. For example, the newspapers outline an international authoritarian state development that, in turn, interferes with the technical possibilities of AI applications:

Control of the social media sphere will be a power factor for the foreseeable future. The tech sector is becoming political prey. AI offers new opportunities to conceal state spying and propaganda activities. (SZ 22)

In this picture, it is precisely the supposedly undesirable side effects of AI, such as manipulation, that can serve powerful actors, such as state regimes. The news skepticism in society mentioned above is not understood as merely the attitude of individual citizens. Rather, contemporary society is in a general struggle for the truth and the interpretation of social contexts (SZ 7). In turn, the technical properties interfere with existing social processes and needs: Through the generative capabilities of AI applications, the perceived reality in society can be actively generated by powerful regimes. By emphasizing global authoritarian developments and a general struggle for reality, the newspapers draw a form of socio-historical determinism in which the use of AI technology for manipulative purposes appears logical. However, the articles examined leave open the extent to which citizens are deterministically at the mercy of the use of AI for authoritarian political purposes, submit to it voluntarily, or could potentially take countermeasures.

In addition to global authoritarian developments, the SZ and FAZ discourse also mentions the tendency toward global nationalism, which would specifically hinder the necessary regulation of AI development:

Global cooperation could help to counter the super corporations. But the tendency is more toward nationalism or blocs of states with interests and views that have little to do with the idea of Western-style democracies. (SZ 24)

This shows the overlap between social and technological deterministic aspects: The complex technical characteristics of AI mean that the technology has potentially unforeseeable consequences for the world. It is possible for social actors—especially states—to exert an influence, but this influence does not materialize due to a lack of cooperation among them. There is an overlap here with the discursive representations of capitalism as well as the global authoritarian turn: It is unclear for what reasons AI technology ultimately emerged and whether it can be influenced by society; however, the technology encounters a socio-historical constellation in which specific modes of use and specific influences on society are at least obvious. The interpretive scheme for this connection is described as a global-historical techno-social imprinting on several levels. This includes the observation that various actors have more or less influence on the social impact of the development of AI technology and, moreover, that international developments suggest mutual influences between social actors and technology, although not deterministic relationships.

6.5 Options for action in contradiction

SZ and FAZ describe and recommend various social options for dealing with the development of AI. The ambivalence and incoherence of the proposals show how the newspapers implicitly and explicitly doubt a far-reaching ability to regulate. As media organs, they thereby become social actors themselves, since they potentially inhibit society’s belief in its ability to regulate and any intentions to act. The ambivalences can be seen, for example, in the recommendation to establish global intergovernmental cooperation to regulate AI (SZ 24). The effectiveness of this recommendation is relativized because the discourse reveals a global tendency toward national isolation. However, the discourse of SZ and FAZ is inconsistent here; other articles also call for the establishment of national or European legal frameworks and AI competencies to overcome the concentration of power in China and the United States (FAZ 18, 62). The FAZ writes:

The most important technical developments are once again coming from America, but this time, Europe is preparing to intervene at an early stage. The EU is working on an AI regulation with different risk levels. It is a balancing act: the boundaries for responsible algorithms must be drawn so tightly that no application encroaches too far on the fundamental rights of citizens. On the other hand, a climate should be created that promotes innovation. (FAZ 62)

The reference to necessary innovations explicitly shows how the newspaper itself considers the capitalist economy the background to AI development and combines this with an accumulation of power on the part of the corporations that should be avoided as far as possible. Here, social determinism in its various manifestations is shown to be relevant for influencing AI development: A conflict emerges between corporations and regulating states as the relevant social actors. Civil society and individual citizens appear in all variants of this discourse as subjects with little capacity to act. This is nevertheless concealed when, for example, the FAZ writes about AI that “we [must] think carefully about how much responsibility we transfer to it” (FAZ 32). The description of a “we” that would decide how to deal with AI conceals the multilayered nature of social power structures. As described, different social actors have different competencies to influence the development of AI, while others are passively affected by these developments. The narrative of society’s ability to act is thwarted, not least rhetorically, by the recurring references to concretely painted visions of the future. For example, when changes in the workplace (FAZ 14) or a lack of control over technologies (FAZ 41) are presented as future facts, this implicitly denies that these developments can be changed. For the readership, this creates the image of a supposed regulation of AI development that rests on contradictory proposals, while the fundamental trends of technological progress appear beyond change.

The last relevant thread of the discourse is explained against the backdrop of the powerful role of AI and AI companies and the foreseeable lack of regulability. SZ and FAZ recommend adapting social systems to the inevitable influence of AI. By describing this in detail for individual areas, such as education (FAZ 25, 28) or data protection (FAZ 21), the importance the newspapers attach to this measure becomes clear. The suggestion that a human should give final approval for all decisions made by AI (FAZ 39) also appears to be a pragmatic way of adapting to the technical influence through a moderating element.

It is striking that these practical proposals for dealing with AI relate to the micro- and meso-levels of institutional and individual action. Taken together, a picture emerges in which individual citizens and institutions should, according to SZ and FAZ, take on the task of responsible use of technology, as there is no regulation at a higher level by AI companies and governments. In this respect, the role of individual actors in the socially deterministic relationship surrounding AI development must be considered in a differentiated manner. As members of society, individuals are potential victims of disinformation, manipulation, and the rationalization of social conditions. At the same time, it is possible to use AI in everyday life and at work according to an individual’s or institution’s own ideas. Therefore, it is possible to observe an individualization of the use of AI under the umbrella of socially shared developments. Insofar as individual citizens actively use AI in their own interests, they simultaneously accept and affirm the role of AI in society—if we follow Adorno’s ideology theory on the relationship between the individual and society (see Grant 2014). At the same time, they are making themselves even more dependent on AI companies and governments, whose power over AI technologies will foreseeably remain uncontested. From this perspective, a complex web of technological and social determinism emerges in which individual citizens have little influence on the overarching lines of development of AI technology; however, their behavior contributes to the long-term implementation of AI in society. Even the implementation of AI in the educational or legal system, once it has begun, cannot be reversed in the foreseeable future; once it meets society’s need for convenience and increased efficiency, it becomes difficult to question whether its use makes sense. This creates a dependency on the technology.
Whether the characteristics of the AI itself, the history of the technology, or the AI companies are the most powerful determining factor cannot be easily determined, but this does not change the general relationship of dependency.

Finally, a somewhat specific discourse pattern can be recognized in the demand that AI applications should not give the appearance of a human counterpart:

There is currently little evidence of the possibility of real artificial intelligence in the broad human sense. It would therefore be more honest to speak not of AI, but of SI, of Simulated Intelligence. Digital learning machines such as ChatGPT should not be dressed up as a fully competent human chat partner, even if they sometimes claim, as if to provide an alibi, that they are not. (FAZ 52)

The newspapers point to differences in the natures of AI applications and humans, an awareness of which in everyday life could influence the impact of the technology on society. In this context, it is also emphasized that only humans are capable of authorship in the usual sense and of love (FAZ 57). This can be understood as a reaction to the described manipulability of humans: By making an AI recognizable as a technology, users are given the opportunity to classify the information they receive accordingly. In addition, the reference to the genuinely human quality of love can be interpreted as the assumption that people have needs that cannot be fulfilled by AI. After all, the fact that people have sought and needed love on various levels since childhood is one of the well-known foundations of human psychology. The discourse assumes that people orient their actions toward the fulfillment of their needs. In this respect, a window opens in terms of argumentation theory: citizens may oppose the penetration of society by AI, at least in areas in which AI lacks specifically human characteristics. Here, too, the newspapers’ proposal rests merely on an assumption and does not necessarily point to the agency of individuals or to a corresponding socially deterministic component. For example, positions from critical theory—albeit also largely theoretically speculative—point out that particularly in modern capitalism, interpersonal needs, such as love and friendship, can be substituted or displaced by commodities and technologies (cf. Fromm 2013 [1976]). In this respect, it remains to be seen to what extent the implementation of AI in everyday social life goes hand in hand with unfulfilled or repressed human needs.

7 Summary

The article reconstructed three central interpretive schemes that are accompanied by a number of largely contradictory recommendations for action. The interpretive scheme of historically conditioned techno-capitalist semi-determinism paints a picture in the discourse of SZ and FAZ in which AI is only the current expression of a longer-lasting technological history. Capitalism, specific technical characteristics of AI, and economically active AI corporations with a high degree of creative power result in a situation in which complex mutual determinisms exist between technology and social actors. The interpretive scheme of semi-determinism of need satisfaction and social restructuring then points to the role of society and its members in this constellation. Society and a government that is only partially capable of regulation are inferior to the AI companies in terms of power. At the same time, AI applications satisfy people’s basic needs and are, therefore, gaining acceptance. A restructuring of society in the direction of rationalist coexistence is a side effect, although it remains unclear to what extent people would, in principle, be able to prevent this. In the scheme of global-historical techno-social imprinting on several levels, the newspapers emphasize global authoritarian and nationalist tendencies. In the context of a struggle for truth, it is precisely the supposed disadvantages of AI, that is, manipulation, that are in the interests of powerful actors. This results in a further shift in social power, with civil societies and individual citizens facing not only AI corporations but also governments. Against this overall background, the last section showed how SZ and FAZ themselves shape the future handling of AI regulation through the contradictory nature of their recommendations for action.
Most of the proposals are contrasted with other contributions to the discourse that make the aforementioned options for action appear ineffective. Society’s ability to act in the techno-socially deterministic constellation thus consists of adapting to AI technology and the power of corporations, an adaptation that will perpetuate the existing dependency relationships.

The discourse analysis showed the interpretive schemes with which a relevant section of middle-class society in Germany is regularly confronted. Further research would need to investigate the extent to which newspapers react to individual events in the context of AI development and adapt their interpretations of the technology. Future AI developments are difficult to predict and, as with the introduction of ChatGPT or Dall-E, assessments of the technology can only be made after events have taken place. It also remains to be seen to what extent society and its institutions will implement AI applications in their everyday lives and what consequences this will have for the dependency relationships mentioned. In particular, the extent to which AI companies and governments will collaborate in the future and what role civil society will play in this will need to be observed. As Kerr et al. (2020) pointed out, a legitimacy gap may arise in the context of AI if governments oppose society's ethical ideas and steer technological development for other purposes. Looking ahead, Paltieli (2023) pointed out that the understanding of democracy as such may change with the development of AI. This would represent a deeper adaptation of society to technological developments, which would potentially also be expressed through discursive negotiation in the news media.