1 Discussing AI—negotiating gender

Recent advancements in artificial intelligence (AI) have been accompanied by high expectations regarding their game-changing impact on labor.Footnote 1 When thinking about how AI will change the world of work, however, we must consider that both work and technology are gendered. First, women still face vertical and horizontal segregation in the labor market, with fewer opportunities for advancement and an unequal valuation of their work reflected in gender wage gaps, working conditions, and job security (England 2010; European Commission, Directorate-General for Justice and Consumers 2018; Federici 2021; German Federal Statistical Office 2016; Holst et al. 2015; Minkus and Busch-Heizmann 2020). This is exacerbated by gendered organizational cultures (Acker 1990) and the unequal division of paid and unpaid work (Bundesministerium für Familien, Senioren, Frauen und Jugend 2020; Federici 2021). Second, technology remains a field dominated by men, with gendered divisions of labor, barriers to accessing technical professions, and associations with power, prestige, and masculinity (Hicks 2017; Wajcman 1991). Women continue to be underrepresented in technical professions (Young et al. 2021). As gender and technology studies have repeatedly shown, however, the relation between gender and technology is often reassessed when new technologies emerge (Wajcman 1991, 2004). As Wajcman (2004) has shown, the introduction of new technologies can irritate and destabilize gender relations and can thus function as an occasion to renegotiate power and, for example, to set role attributions and divisions of labor in motion (Carstensen 2019). Gender is thus negotiated in technology development, in everyday use, and not least in public discourses.

This paper thus examines how gender is addressed in a specific section of the public discourse on AI and the future of work, focusing on news media in Germany. As a socially constructed category, gender is historically and culturally variable. In Western societies, gender has long been derived as a binary category from the heteronormative understanding of two distinct sexes. In recent years, the fact that gender cannot be reduced to two binary categories has gained increasing cultural and legal recognition. However, binary conceptions of gender still dominate public discourse, as can be observed in the media discourse on AI, work, and gender, which mostly talks about men and women. Furthermore, gender continues to be hierarchically organized and, intersecting with other power relations such as class, race, and ability, produces social inequality. To contribute to a critical understanding of the ever-shifting construction of gender and its relation to power, we are interested in how gender as an intersectional category is taken up in the AI discourse through, for example, old and new stereotypes and gendered scenarios of the future of work.

Germany is a particularly interesting case for examining gender and work in relation to AI. As Köstler and Ossewaarde (2022) have shown, “AI is framed as the cure-all for present and future problems in Germany”. AI is seen as a key component of the fourth industrial revolution, during which Germany is to retain and expand its role as a leading industrial nation. AI strategies presented by the German government accordingly focus on the industrial application of AI, urging a rapid uptake of AI in research and development while at the same time emphasizing the importance of ethics and values (Bareis and Katzenbach 2021; Köstler and Ossewaarde 2022). In public discourse, AI continues to be a widely discussed topic, with attention given to the economic benefits of AI and its conditions in terms of economic policy, education, and regulation (Schiff et al. 2020), as well as to concerns like AI superintelligence (Naudé and Dimitri 2020), resource waste (Bender et al. 2021), unemployment (Frey and Osborne 2017), and algorithmic bias and oppression (Benjamin 2019; Noble 2018; West et al. 2019). In terms of policy, debates on AI are intertwined with economic policies to strengthen and secure competitiveness. Consequently, Germany’s national AI strategy aims at securing a future for the German economy vis-à-vis competing national economies like the US and China (Bareis and Katzenbach 2021). Within the EU, German digital policy is integrated into a complex multi-level system. The General Data Protection Regulation has proven that the EU is an innovative regulatory landscape for digital policy that is decisive even for market participants outside Europe (Roberts et al. 2021). The Digital Markets and Digital Services Acts and the proposed AI Act will have a similar impact, as products will, for example, have to undergo auditing in order to enter the European market (e.g. European Commission 2020; Veale and Borgesius 2021).
In the European legislative process, which includes the consultation of diverse interest groups as well as lobbying, economic interests must be balanced with civil rights, workers’ rights, anti-discrimination, and gender equality. Gender knowledge conveyed in AI discourse serves as a background for political negotiations as well as for design and implementation processes.

Public discourses, especially media discourses, play an important role in this situation, as previous studies have pointed out. Bareis and Katzenbach (2021) have developed an instructive understanding of AI discourse as performative, concluding that national AI strategies construct the picture of an “inevitable, yet uncertain, AI future” with AI imaginaries that at the same time reflect “cultural, political, and economic differences”. This emphasis on inevitability is, as Zuboff (2019) shows, a widespread ideology in technology discourses, one connected with economic imperatives. Similarly, Schlogl et al. (2022) argue that future of work discourses “create realities in that they affect the availability of funding, shape policy agendas or public perceptions”. Their analysis shows that the future of work is imagined as being determined by technological developments such as AI and Industry 4.0, which negates human agency. Both AI and the future of work have thus been subject to critical attention. However, questions of gender and intersectional power relations still need to be addressed in this area (Howcroft and Rubery 2019). This is surprising, given that AI has been discussed intensively in the context of discrimination and gender relations. This paper aims to fill this gap by examining how AI, gender, and work are negotiated in German news media. Using the methodological lens of interpretive frames, we understand mediated representations of technology and gender as performative. On this basis, we can show that AI in the context of gender and work is understood not as determined but as socially constructed, and that some critical feminist interventions are taken up while others are omitted.
The results show that gender stereotypes are perpetuated and transformed in journalistic works on AI and the future of human labor, which, following an understanding of discourse as performative, is consequential both for AI policy and for the design of AI.

In the following, we first provide a review of the literature on gender, AI, and the future of work. After outlining our methodological approach, we dive into the empirical findings, looking at algorithmic bias, automation and enhancement, and gender stereotypes. Finally, we discuss the implications of those findings.

2 Research on gender, AI, and the future of work

In recent years, a considerable amount of literature has been published on AI and gender. Focusing on how AI both shapes and is shaped by work, we were able to identify three key areas of research: first, studies that explore the gendered implications of work being managed with the help of AI systems (management with AI), second, research on the potential changing of tasks, occupations, and the workforce (working with AI), and third, research that is concerned with the development of AI (working on AI).

Studies on the gendered implications of AI-based systems for the management of work have focused on platform work (European Institute for Gender Equality 2022; Gray and Suri 2019; Tubaro et al. 2022), automated decision making in the allocation of resources for jobseekers (Allhutter et al. 2020; Lopez 2021), and, most notably, the potentials and risks for gender equality in automated hiring processes (Hong 2016; Kim 2019). These studies show that AI-based management could potentially exacerbate gender inequality through the reproduction of historic biases and an intensified exploitation of labor, which hits disadvantaged groups and people responsible for care work particularly hard.

Moreover, several studies have begun to examine the implications for gender that lie in the changing of tasks, occupations, and workforce composition due to an increasing use of AI-based systems and robotics in many industries. This research on working with AI has long been dominated by a macro perspective on potential major shifts in workforce composition due to computerization (Arntz et al. 2016; Frey and Osborne 2017). Predictions on the impact of computerization on professions dominated by either men or women have been inconsistent. For example, Piasna and Drahokoupil (2017) write that “women are more at risk of automation as they tend to perform routine tasks more often than men, even within the same occupational category”. In contrast, Peetz and Murray (2019) argue that structural and technological changes will slightly favor the security of women’s occupations. Moreover, there is no consensus that there will be huge upheavals in the labor market (Dengler and Gundert 2021; Pettersen 2019; Willcocks 2020). As Howcroft and Rubery (2019) have critically noted, there is “little evidence of large-scale unemployment arising from new technologies. The real problem lies in the unequal distribution of work, time and money that currently exists.” Consequently, they urge us to “rethink the structures of employment, the forms of work and the gendered patterns of inequality that are embedded in current arrangements” (Howcroft and Rubery 2019). In addition, increasing attention has been given to the changes in work content that AI might bring about and the skill sets needed to work in changing environments. These include basic and specific knowledge in statistics and programming, but also the technical skills necessary for the contextualization of algorithms and training data (Pfeiffer 2020).

Finally, a considerable body of research has been exploring the relationship between gender and working on AI. Studies have explored the gendered composition of the AI and data science workforce (Young et al. 2021) and of AI research (Stathoulopoulos and Mateos-Garcia 2019), as well as platform and crowd work in the preparation, verification, and impersonation of AI (Gray and Suri 2019; Tubaro et al. 2020). Recently, Young et al. (2021) have shown that women have higher turnover and attrition rates than men. While this work relates directly to labor, other projects have examined how gender interacts with software applications, robots, smart assistants, and other products that result from research on AI (Adams 2020; Berscheid 2014; Kubes 2019; Sutko 2020). An important subsection of this strand of research deals with discrimination that results from and/or is built into large language and image recognition models (Caliskan et al. 2017; Gebru and Buolamwini 2018; Keyes 2018; Leavy 2018). As Noble’s (2018) and Benjamin’s (2019) seminal works have shown, algorithms are not neutral; rather, intersectional oppression is deeply inscribed in data and design processes. This raises deeper questions about the epistemologies that guide work on AI, which have been addressed in a cross-disciplinary methodological and conceptual discussion of “algorithmic fairness” (Barabas et al. 2020; Barocas and Selbst 2016; Hoffmann 2019; West 2020). Looking at gender relations and power dynamics, West et al. (2019) have argued that there is an intertwined relationship—or “feedback loop”—between discrimination in the data science workforce and algorithmic discrimination. Similarly, Carrigan et al. (2021) suggest that gender harassment in tech workplaces and data extraction can both be interpreted as boundary violations.

This research paints a complicated picture of the relationship between AI, gender, and work: On the one hand, inequality threatens to be cemented. On the other hand, through critical AI research, biases can be made visible and dominant perceptions of gender can be challenged. This points to the open, contested nature of the field and thus the importance of mediated images of AI, gender and the future of work. Regardless of the material realities of AI technologies in the workplace, we therefore think it is important to also look at the public discourse on AI.

3 Methodology

It is known from earlier waves of technological change that technologies are often occasions for discursive struggles (Carstensen 2007; Pfeiffer 2017). Technologies are surrounded by affective discourses in which various actors engage in a struggle for interpretive power, seeking to assert their interests in relation to technology (Carstensen 2007; Ganz 2018). For gender and technology, the feminist discourse on the internet can serve as an example: in its early years, the internet was framed by contradictory expectations of backlash, liberation, and deconstruction (cf. Haraway 1991; Plant 1997; Spender 1995; Turkle 1995). For the digital transformation of work, this was recently evident in the “pervasive presence in 2015 of talk about Industrie 4.0” (Pfeiffer 2017, p. 122) in Germany. “Industrie 4.0” is a catchphrase, much like AI, that signifies the new industrial revolution of digitized and “smart” industrial production. These debates are shaped by business and governmental actors, but numerous other actors make their voices heard.

By shaping expectations, and consequently influencing product development, design, and adoption, discourses have performative effects on technology. To understand the gendered implications of new technologies in the context of work, it is therefore essential to analyze public discourses. Following Foucault (1998), we understand discourses as social practice. As a methodological framework, we draw on Keller's Sociology of Knowledge Approach to Discourse (Keller 2011). This approach combines a Foucauldian perspective on discourse with a sociology of knowledge. It looks at how reality is produced as “power-knowledge” (Foucault 1998) in a contested manner. Keller suggests investigating how meaning is constructed, objectivated, and legitimated through institutions, organizations, and social actors, including its effects and consequences (Keller 2011, p. 48). In this vein, we are interested in exploring how AI is constructed within public discourse and, in particular, which contested problematizations of AI in relation to gender and work appear in German news media.

We have approached this in two analytical steps: First, we conducted a content analysis (Mayring 2000) to obtain a systematic overview. Second, we zoomed in on the “interpretive frames” that characterize the discourse. Referring to Keller, interpretive frames can be conceptualized as “fundamental meaning and action-generating schemata, which are circulated through discourses and make it possible to understand what a phenomenon is all about” (Keller 2011, p. 57). The resulting interpretive frames engage with different knowledge about gender, technology, and work and embed these issues in a larger social context. The contested nature of discourses on technology is also visible in the linking of frames to empty signifiers (Laclau 1996), which represent universal claims and speak to the politically contested nature of the topics at hand.

Our analysis is based on 178 media articles published in German newspapers and newsmagazines between 1 January 2015 and 31 December 2021 (see Appendix 1 in the supplementary file). Although discussions about AI have taken place for decades, this period was chosen because it coincides with a significant increase in discussions regarding digital transformation, along with immense government investments and the widespread availability of intelligent tools such as Siri or Alexa in everyday life. This period also saw a renewed interest in AI, which was reflected in an intensification of discursive debates. Included were four high-circulation daily newspapers selected as exemplars (Die Welt, Frankfurter Allgemeine Zeitung, Süddeutsche Zeitung, taz. die tageszeitung) as well as the leading German weekly newspaper Die Zeit and the newsmagazine Der Spiegel. The corpus consists of articles published in digital and print editions, including supplements and Sunday editions. In order to survey the media discourse in its complexity, all types of articles from all sections were included: news, reports, interviews, features, and columns from the news, business, culture, and science sections. We assume that radio and television journalism does not contain fundamentally different content within the media discourse, so we refrained from including audio and video material. For pragmatic reasons, the analysis was limited to textual media as a proxy for quality journalism, representing the section of public discourse that we will refer to in this paper as news media discourse for short.

To identify relevant articles, a multi-part search phrase comprised of work-related terms (e.g. work, occupation), AI-related terms (e.g. artificial intelligence, machine learning), and gender-related terms (e.g. gender, sexism, discrimination, diversity, girls, men, women, equality) was applied.Footnote 2 Assuming that relevant articles could be identified with generic keywords such as “gender”, terms like “nonbinary” or “queer” were not included in the search. This initial query resulted in a total of 3585 articles, which were then manually screened and selected according to thematic fit. For the corpus, only those articles were selected that at least discuss AI in relation to either gender or work while also addressing the respective other topic. Articles that do not address AI in the context of either gender or work were excluded, as were duplicates.Footnote 3 The resulting 178 articles are of various types and genres, including news reports, feature articles, interviews, and columns from the news, business, science, and culture sections (Table 1).
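The selection logic of the query can be sketched in code. This is a simplified illustration only: the term lists below are abbreviated stand-ins for the actual search phrase (documented in Footnote 2), and the whole-word matching approximates, rather than reproduces, the database search syntax that was used.

```python
import re

# Abbreviated, illustrative term lists; the actual search phrase was longer.
WORK_TERMS = ["work", "occupation", "job", "labor"]
AI_TERMS = ["artificial intelligence", "machine learning", "algorithm"]
GENDER_TERMS = ["gender", "sexism", "discrimination", "diversity",
                "girls", "men", "women", "equality"]

def matches_query(text: str) -> bool:
    """An article is a candidate if it contains at least one term
    from each of the three lists (case-insensitive, whole words)."""
    def has_any(terms):
        return any(
            re.search(r"\b" + re.escape(t) + r"\b", text, re.IGNORECASE)
            for t in terms
        )
    return has_any(WORK_TERMS) and has_any(AI_TERMS) and has_any(GENDER_TERMS)
```

Articles passing this automated filter would then still be screened manually for thematic fit, as described above.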

Table 1 Distribution of articles across publications

Using MaxQDA, a first thematic coding was conducted at the level of individual articles. The inductive categories resulting from this initial content analysis provide insight into central themes. The codes covered a wide range of approaches to the topic, including visions, applications, affected professions, business cultures, and gendered artifacts. The frequency with which some of the codes were assigned is an indication of prominent frames. The five topics found most often in the sample are “recruitment” (63 articles), “social homogeneity of developers” (32), “biased data training sets” (31), “automatization” (26), and “AI applications that foster equality” (23).

Next, 20 articles were identified in which several of the identified topics are discussed in detail and in connection with each other. These articles discuss AI and work centrally from a gender perspective. They are therefore well suited for a more detailed analysis, which was conducted as a second step. As a result, we identified major interpretive frames that shed light on the emerging public understanding and negotiation of AI and the future of work in relation to gender. In the next section, we will show in detail how AI is represented, how AI is seen in relation to the future of work, and which images of gender are central with regard to AI.

Our interest was focused on the question of how gender and work are framed in the AI discourse. Thus, we were primarily interested in the news media discourse as a whole. Our analysis did not aim to highlight differences between the newspapers and magazines, even though they are undoubtedly politically situated in different ways. Rather, we considered the selected newspapers and magazines as components of a broad discourse and as largely mainstream contributors to the hegemonic public sphere. The goal of our analysis was to identify the overall picture that emerges across newspapers, rather than to differentiate the discourse by individual media. Similarly, our analysis was not concerned with the individual authors behind the articles: what is crucial for our question is not who contributes to the discourse from which specific position, but the discourse itself as the product of different and diverse speakers, i.e., which interpretive frames emerge in the broad discourse.

4 Findings: discursive frames of AI, gender, and the future of work in the news media

In the following, we present the central interpretive frames that we were able to identify by analyzing 178 German newspaper, magazine, and online articles, starting with two pertinent topics of the AI debate. With regard to algorithmic bias and automation, it becomes apparent that gender is negotiated intensively and along very contradictory scenarios. Building on this, we demonstrate how gender stereotypes are addressed, negotiated, and transformed in this discourse.

4.1 Discriminating machines or compensation for human bias

The most striking interpretive frame in the news media discourse on AI, gender, and work closely links AI to the issue of algorithmic discrimination. Many articles discuss or at least mention the possibility of algorithmic discrimination and bias, typically by referring to examples such as face recognition algorithms that fail to identify faces with dark skin tones, or translation software that offers gendered translations of professions such as “nurse” or “doctor”. These examples, often taken from the scientific literature, illustrate AI’s potential for gender-based, racist, and intersectional discrimination and convey the idea of biases introduced through training data. While many articles mention algorithmic bias as a possible downside of AI, in some cases discrimination is even presented as a characteristic feature of AI. For example, one article introduces the topic as follows:

“Almost any system based on the algorithms can act in a discriminatory way. It starts with programs that calculate creditworthiness and doesn't stop with application processes. Why this is a problem—especially for women.” (Welt Online, 21.12.2020).Footnote 4

Algorithmic bias is often discussed in conjunction with personnel selection and hiring systems. It is generally assumed that AI will play a big role in Human Resources, thus shaping future workforces through personnel selection and job placement. Often, this is interpreted as a significant problem for gender equality. The key illustrative example of this frame is the cautionary tale of Amazon’s hiring algorithm (Dastin 2018), which is cited time and time again to show that algorithmic discrimination in Human Resources can lead to the further exclusion of women from the workforce. Die Welt retells the story as follows:

“What was actually intended to select the best applicants, turned into a boomerang for the e-commerce giant. Because what Amazon has sold us here as artificial intelligence is little more than machine learning. What happened? The largest online department store’s algorithm to automatically select the best applicants. But because it feeds the computer with data from job applicants over the past ten years, the algorithm systematically favored men. A mistake that can also affect applicants who live in a bad neighborhood or who have the wrong skin color. Thus, prejudices cast in data are reinforced.” (Die Welt, 25.10.2018).

In this case, not only gender-based, but also racial discrimination is envisioned as a possible consequence of AI applications in HR. Algorithmic bias is the media discourse’s predominant gender frame in conjunction with AI and the future of work. Applying a binary concept of gender, AI systems are problematized as a potential risk for women in the workforce, as they carry the power to reinforce societal stereotypes and historic injustice deeply embedded in data.

In contrast to this, however, there is a second frame where AI is presented as having the potential to correct human bias. In the context of work, for example, the potential of using AI in recruitment is sometimes discussed as a possible way to counterbalance human prejudice: “[A] machine that is fed with many different data is definitely much more objective. At least the blatant discrimination becomes less likely.” (Zeit Online, 09.02.2018). A further dissemination of AI applications in Human Resources is presented as an opportunity for gender equity. Examples range from software that helps to identify and correct wording in job advertisements that is assumed to be more appealing to men than women candidates, to algorithms that show the benefits of hiring women chief executive officers.

This second, less dominant, frame stands in stark contrast to the narrative about algorithmic bias, as it presents AI as a possible way to enhance gender equality in the workplace. It is striking, however, that the articles do not juxtapose the two views. Whether AI's relationship to power is a reinforcing or a neutral one is usually set as a premise. To put it bluntly: AI is either fundamentally suspected of being a discrimination machine, or it is presented as an objective counterweight to human bias.

4.2 Contradictory scenarios of automation and enhancement

In the German news media discourse on AI, the second key issue regarding the future of work and gender is the impact that AI is likely to have on occupations and workforce composition, specifically scenarios of automation and enhancement. The fear that human work will be replaced through further rationalization and automation can be observed repeatedly in future scenarios surrounding new technologies. Regarding scenarios of substituting workers with automated processes and robots facilitated by AI, we were able to identify two contradictory gender frames.

The first is that automation related to AI primarily poses a threat to male-dominated occupations. In articles that argue within this frame, it is “moderately educated men” (FAS, 30.04.2017) and “blue collar worker[s]” (FAZ, 05.12.2021) working “more physically demanding jobs” (FAZ, 13.10.2017) who are said to be affected most by job loss. This scenario draws on specific stereotypical and polarizing characteristics that are attributed to men and women in the articles. Men are said to have rather specific talents and to be able to focus only on specific tasks and projects while ignoring the context. This is said to become a risk for men workers: “That's why [machines] will take over men's jobs for now, because men are easier to replace” (Die Zeit, 28.03.2018). In contrast, women are portrayed as communicative networkers. Soft skills and a “self-confident way of dealing with uncertainty” are assets they are said to bring to the workplace. They are “less afraid of the digital transformation […] more creative, innovative and have emotional intelligence” (FAZ, 06.03.2019). Enhancing the “collective intelligence” (FAZ, 18.05.2021) of interdisciplinary teams makes women valuable for the development and application of AI: “Women don't do what the machines tell them but question them and use their artificial intelligence to augment their own.” (Die Welt, 09.06.2020). Within this interpretive frame, men working in industrial jobs are portrayed as rightfully afraid of AI, as they will evidently lose their jobs to automation: “Men fear being replaced by robots much more than women. The reason for this could be that they more often than women have physically demanding jobs that are comparatively easy to automate.” (FAZ, 13.10.2017). In a couple of articles that appeared in the period after the election of Donald Trump in 2016, this was even discussed in relation to white men industrial workers’ affinity for populism, framing AI as a potential risk for democratic cohesion.

Other articles frame the future of work in exactly the opposite way. They state that “jobs with a high proportion of women could be particularly affected” (Frankfurter Allgemeine Woche, 03.05.2019) and that “women are affected 3.1 times more than men” (FAZ, 19.02.2021). For salespeople, receptionists, and bank clerks, it is argued: “If the widely developed scenarios come true, in which digitalization brings wealth to a few while the middle class struggles with relegation, many women will be hard hit.” (SZ, 18.06.2018). Within this interpretive frame, women are ascribed stereotypical deficits regarding their ability to cope with technological change. These include a “lack of interest” (Die Welt, 02.10.2018) in STEM as well as too much reticence in representing their own interests. Accordingly, articles point to the need for programs that aim at encouraging girls and young women to take an interest in technology and to consider fields such as AI in their career choices.

In contrast to the fatalistic scenario of job loss, a third frame interprets the future of human labor as enhanced by AI technologies, specifically including tasks from care professions, where women still make up the majority of the workforce in Germany. For example, articles report on exoskeletons that can prevent back pain for nurses (Zeit Online, 18.11.2019), on care robots that entertain nursing home residents (Welt Online, 11.11.2018), and on emotion recognition software that might one day support kindergarten teachers (SZ, 04.10.2019). It is striking that these technologies are not presented as a threat to the workforce, but as a support that can improve working conditions. Considering the ongoing shortage of skilled care workers in Germany, it seems improbable that AI and robotics could pose a risk for people in caregiving professions. Instead, even rather quaint uses of AI are presented as a possible relief.

Within these three interpretive frames, conflicting scenarios are drawn as to how AI might change work in relation to gender. As with the conflicting interpretations of AI regarding biases, we see an AI discourse that makes contradictory predictions about the gendered impacts of AI in future work environments. As we will see in the next section, gender representations and stereotypes are renegotiated in the process.

4.3 Shifting gender stereotypes?

In conjunction with AI and the future of work, the news media discourse invokes notable representations of femininity and masculinity, women and men AI professionals, and gender relations. In AI discourse, gender stereotypes are being revisited and reworked, as evidenced first by the discussion of gendered AI artifacts and second by how women and men are portrayed as AI professionals and as part of future labor forces.

A recurring theme in journalistic works on AI is gendered assistance software and robots, specifically artifacts that are designed to evoke notions of femininity. Articles describe possible future uses of these artifacts through embellished episodes where, for example, a waitressing robot or an AI that conducts staff interviews is attributed stereotypically feminine attributes. Furthermore, voice assistants are presented as an opportunity to “outsource” feminized emotional labor:

“Anyone who wants to talk to someone who is always polite, obedient, helpful, who has an infinite attention span and hardly any needs of her own, can turn to Siri. When she does emotional work that women otherwise traditionally do, we are left with more time and energy for what we really want to do.” (Zeit Online, 16.08.2019).

The submissive service robot substituting for women service workers appears as a relic of traditional gender roles within the future of work. Across different occupations and areas of application, these descriptions have some things in common: not only do the technologies described often have female names and “work” in women-dominated professions, they are also strikingly depicted as being able to do a better job than humans/women. For example, a robot waitress is described as follows:

“Bella is every employer's dream: she doesn't get sick, doesn't need holidays and doesn't ask for a pay raise. In addition, she is always in a good mood and never goes on strike. She deftly maneuvers around obstacles and reliably delivers food and drinks.” (taz, 10.12.2021).

As we will discuss more thoroughly in the next section, feminized robots are framed as something that perpetuates sexist gender images, but also relieves women of this work.

Concerning human professionals in AI, the articles frequently mention that the proportion of women in IT professions in Germany is far from equal. Nonetheless, women are becoming visible in the discourse as IT professionals. They often take on the role of experts who explain AI and raise awareness of the issues involved. The articles regularly feature both men and women experts from business, academia, and other organizations. However, women AI experts are presented in a specific way: Readers learn about their origins, education, professional careers, special talents, and skills. In contrast, men experts are soberly introduced by stating their profession and organizational affiliation. Women AI experts are portrayed as contributing critical, and often intersectional, knowledge on gender and technology. They are women who “wanted to learn more about people—and so ended up in tech” (FAS, 12.05.2019). As book authors, they aim to help people lose their fear of AI while at the same time developing a critical awareness of its risks. As researchers, they take up interdisciplinary projects and consider AI in connection with ethics. As managers, they put together diverse and interdisciplinary teams that create better, more ethical AI. For example, a manager of a leading German telecommunication company is quoted: “I also brought in a Korean, two employees with American roots, a Croatian [woman] who is also a psychologist. The mix must be as broad as possible, including in training.” (FAS, 25.02.2018). Women experts are thus presented as role models in a double capacity: as women who have made it in the tech world, but also as AI professionals who care about AI’s societal consequences. It should be noted, however, that men experts also sometimes highlight risks related to discrimination and the importance of diversity. This is by no means a discussion that is brought up only by women.

While feminized robots take over thankless service jobs and women AI professionals are presented as multitalented experts—with a name and a personal background—, men AI professionals are part of an anonymous group: the young white men developers. This is closely associated with the interpretative frame that equates AI and algorithmic bias. Most articles attribute the existence of algorithmic bias to developers, and specifically point out that it is often young white men working in homogeneous teams: “The prototypical developer is white, Western, male, young. […] He transfers his perspective on the world to the development of technology. That's fatal.” (Der Spiegel, 14.08.2020). This reasoning can be found in many articles: Homogeneous development teams that work within a toxically masculine atmosphere are guided by implicit biases in the development of AI models. Moreover, their lack of awareness of discrimination leads to the use of bias-laden training data and a failure to recognize that the resulting applications may harm women and minorities. In some instances, the potential risks of AI are attributed solely to the positionalities of developers. In other cases, however, this notion is complicated or even to some extent challenged through a broader concept of homogeneity that includes not only gender, race, and age, but also the disciplinary background of developers. In both cases, diversity is presented as a key solution towards a more equitable and ethical AI: “To counter discrimination, development teams must also become more diverse.” (SZ-Beilage Jetzt, 23.10.2020). Young white men developers are represented as naïve at best and sexist “brogrammers” at worst, and they are attributed great agency over the algorithms they create. Methodological issues of debiasing are addressed in only a few articles.

Finally, some articles discuss the human resources available to the German economy to generate growth and prosperity. For a labor market confronted with demographic change, women are seen as a reserve army waiting to be activated, because “the digital transformation and the associated increase in productivity are not sufficient to close the gap between the demand for and supply of skilled labor” (FAS, 10.07.2016). Furthermore, women’s relative absence as AI professionals, and especially as founders, is portrayed as a disadvantage in multiple respects. It is bad for women’s careers and gender equality, as AI is seen as a field of work where jobs are secure and there are great opportunities for development. Moreover, the absence of women in AI is seen as having negative consequences for AI itself, as diverse perspectives are assumed to lead to better and more ethical AI. In this regard, excluding women from the AI workforce and having only few women founders is discussed as a risk for further economic growth.

In summary, contradictory scenarios are presented within news media discourse, within which the future of work is negotiated through gendered representations such as the soon-to-be unemployed industrial worker, the feminine service robot, the young white men programmer, and the women AI expert.

5 Discussion: gender reflections in public AI discourses—and their limitations

Building on the empirical findings regarding how AI, gender and work are interpreted in German newspapers, we will now move on to the consequences this framing has for the debate about the gendered implications of AI on the future of work. As we have shown, there is a lively discussion of AI, work, and gender in the German media. Some aspects of this debate reflect the characteristics of German technology discourse. Its backdrop is a shortage of skilled labor that is expected to grow due to demographic change, the under-representation of women in technical professions, and concerns about Germany’s future as an economic stronghold. Thus, the discourse is not limited to specific problems of gender, AI, and work, but linked to major social issues and questions of fairness, justice, equality, and democracy. At the same time, it takes up many aspects of the international debate. Four points seem to us to be particularly worthy of discussion:

First, the news media discourse mirrors, to some extent, the feminist and scholarly discourse on gender and technology. We were able to identify that critical feminist and anti-racist perspectives on AI are being reflected in public discourses. Examples of feminist insights that have been taken up in news media discourses are the critical examination of gendered artifacts such as Siri and Alexa (Adams 2020) and concepts like “algorithmic oppression” (Noble 2018) or the “feedback loop” between discrimination in data science and algorithmic discrimination (West et al. 2019). These modes of questioning the trajectories of AI are brought into the discourse by (women) experts, but also by journalists. In this regard, Amazon’s AI-based hiring algorithm acts as a shorthand for the ethically questionable consequences AI might have in relation to gender and race.

In this regard, our analysis shows that the public news media discourse on AI, gender and work conceives of technology as social and inscribed with power relations. Research has shown that “Silicon Valley” tech industry ideologies (Zuboff 2019), policy discourse (Bareis and Katzenbach 2021) and the “discourse industry” of management consulting firms (Schlogl et al. 2022) depict AI as an inevitable development, thus working with a deterministic frame. In contrast, our results support the claim that the media present AI as shaped by societal forces and therefore open to democratic debate (Köstler and Ossewaarde 2022).

In this specific section of the media discourse on AI, gender and work, AI is almost exclusively portrayed as socially constructed, in the context of which a specific form of gender knowledge (Cavaghan 2010) plays a crucial role. The articles we have analyzed neither naturalize nor neutralize gender. Rather, the discourse refers to a differential logic of positionality: It is assumed that structural social positioning shapes people’s worldview, actions and thus the technologies they design. This is particularly evident in the figure of the privileged young white men programmer. In conclusion, the discourse at hand echoes both a social constructivist view of technology (MacKenzie and Wajcman 1985; Pinch and Bijker 1984) and feminist standpoint theories (D’Ignazio and Klein 2020; Haraway 1988; Harding 2004).

Second, when looking at the implicit and explicit assumptions about gender that guide the material, the prevalent notion of gender is that of a diversity category. To some extent, gender is even conceptualized as an intersectional category, understood as interacting with other power relations. This can be attributed to the work of researchers who have pointed out that AI specifically makes Black women and Women of Color invisible (Gebru and Buolamwini 2018; Noble 2018). Owing to successful advocacy in academic, policy, and cultural spaces by authors like Criado-Perez (2019), D’Ignazio and Klein (2020), and O’Neil (2016), and filmmakers like Shalini Kantayya (“Coded Bias” 2020), these findings have entered the public debate. The price, however, seems to be that their ideas are tied into a diversity discourse that emphasizes not only ethics and justice, but also the economic utility of diverse perspectives to provide legitimacy to the issue.

With these feminist academic and political perspectives on AI as a backdrop, we were able to identify three important points that the public discourse omits. To start with, post- and decolonial perspectives are not addressed in the gender-related media discourse. In fact, the non-Western world is addressed here only very sporadically, in the form of development rhetoric. The scholarly engagement with AI from decolonial perspectives (Adams 2021; Mohamed et al. 2020), on data crowdwork in the global south (Altenried 2020; Gray and Suri 2019), the exploitation of human data (Iyer et al. 2021), indigenous knowledge (Maitra 2020), or natural resources (Bender et al. 2021; Crawford 2021), is not addressed, and neither is decolonial activism on AI. Furthermore, perspectives on AI that are critical of racism are taken up predominantly through the example of criminal justice in the US. They are not systematically included in the discussion of the future of work in Germany. In addition, and this is particularly interesting for the German context, the discourse does not reflect on differences between East and West Germany, despite significant differences both from a gender perspective and economically.

Next, it must be noted that the news media discourse is still characterized by a binary concept of gender. Women and men are attributed gender-specific characteristics and skills that they can or cannot utilize when it comes to AI. Interestingly, within this binary gender concept, however, both genders are inherently contradictory in terms of the characteristics attributed to them. Men are both masters of technology and threatened by technological advancement; women are both perfectly skilled for the digital future and too restrained regarding technology. At the same time, modes of existing beyond the binary concept of gender are rendered invisible. The obstacles that, for example, the trans population is confronted with in terms of AI (Keyes 2018), are completely left out of the media’s conversation.

Finally, there is a range of topics that are relevant to the future of work but hardly covered in the investigated discourse. The articles focus their reflection on the gendered implications of AI on two perspectives: work that is made redundant through AI applications, and work that is facilitated by AI applications. The topic of algorithmic management, however, is largely ignored. But as research shows, applying AI-based systems does not necessarily result in work that is less physically demanding and repetitious. On the contrary, algorithmic control can ensure that work is intensified. With workflows closely guided by algorithmic systems, workers need fewer qualifications and are thus easier to replace (Schaupp 2022). For marginalized workers, such as migrant women, people with disabilities, or people with care responsibilities, this has highly ambivalent consequences. This aspect, however, has hardly been acknowledged in public discourse so far.

Third, the news media discourse on gender, AI, and the future of work is not devoid of gender stereotypes. This is not limited to classic gender and technology stereotypes; gender stereotypes are also being transformed. When it comes to stereotypical representations of gender, robots that embody gendered characteristics of service workers are worthy of further discussion. From the theoretical perspective of the ideal worker norm (Acker 1990), which men are usually assumed to fulfill better than women, the service robot emerges as new competition for the woman worker. AI-based technologies perform feminized labor better, cheaper, and with less resistance than women. By synthesizing feminized emotional labor and the ideal worker, the feminized service robot appears to be “[t]he abstract, bodiless worker, who occupies the abstract, gender-neutral job, has no sexuality, no emotions, and does not procreate.” (Acker 1990).

At the same time, while the robot does not mind, tedious emotional labor (Hochschild 1983) is presented as becoming less and less acceptable for women. This can be interpreted as the promise of an AI-enabled working world in which mistakes never happen and neither human obstinacy, illness, nor political awareness can disrupt the work process. A world in which customers continue to be offered an all-round feel-good program that, in terms of gender roles, already seemed to belong to another time.

Fourth, we have shown that the idea of young white men developers bringing their biases to AI is one of the discourse’s central narratives. As we have argued above, this reflects gender knowledge inspired both by standpoint theories and by a social constructivist view of technology. The articles do discuss insufficient and biased data as well as other epistemological problems. When it comes to solutions, however, the focus is on the homogeneous composition of development teams, thus omitting a great many power relations within the organizational structure of companies and capitalist chains of production. The production of AI systems has many stakeholders. It includes at least the initial product idea, design, marketing, data preparation, model development, and testing, spans horizontally across all hierarchical levels from the boardroom down to outsourced crowd workers, and vertically includes relationships with open source communities, customers and end users. Finally, data centers, the necessary hardware and the regulatory framework need to be considered as well. To single out one of these elements strikes us as naïve. Miceli et al. (2022) reviewed the growing body of bias research and came to a similar conclusion. They found that bias research has focused primarily on data workers and their individual bias (see, e.g., Hube et al. 2019). Although it is interesting that this research looks even further down the organizational hierarchy than the media discourse, both share the problem of ascribing agency to one group involved in the development of AI while ignoring organizational power dynamics.

This raises the question of whether diversity is really the solution to AI’s problems that it appears to be. The media discourse on gender, AI, and the future of work takes an instrumental approach to diversity, in which women and minorities are to be included to ensure “good” and “ethical” AI. However, this notion is not supported by critical feminist research, which discusses the intertwined dynamics of algorithmic bias and the AI workforce in relation to power. Moreover, the concept of bias is flattened, omitting the differences between biases of a technical, sociotechnical, or social nature (Lopez 2021). Finally, when we look at AI and the future of work, algorithmic bias is hardly the only, and probably not the most important, issue to tackle. As Balayn and Gürses (2021) have argued, “AI and the increased dependencies on dominant computational infrastructures may intensify inequalities not only at the level of ‘algorithms’, but more structurally through the reconfiguration of organizations, democratic institutions and economic relationships” (p. 21). This shows that the relationship of AI, gender equity, and the future of work is much more complicated than is being negotiated in the news media discourse.

6 From discourse to design and regulation?

In liberal democracies, the societal impacts of technological shifts have always been contested, and the relation of technology and gender is by no means set on a single path. This is evident in the state of research into the various ways work is increasingly being managed by AI, how workers interact with AI, and how AI is created. News media discourse speculates about how AI will change the work of the future, and in doing so, the relation of gender and technology is renegotiated.

As the results show, gender is a relevant dimension in the investigated German news media discourse on AI and work. It has become clear that it is by no means only stereotypes that are invoked here; rather, scientific findings from gender studies, feminist AI studies, and STS have contributed to a critical understanding of AI. Despite recurring claims of women’s hesitancy towards technology, images of technology and gender are changing, with new stereotypes taking the stage. Even though many important questions about the gendered implications of AI on the work of the future are not yet sufficiently illuminated in the news media discourse, the critical feminist technology discourse of the last decades has evidently had an impact on the perception of new technologies, thus influencing public expectations associated with AI.

Understanding AI discourse as performative, this raises the question of whether this opening of the discourse to gender also influences the design of AI. It is conceivable that these discursive openings for sensitization could, in the long term, also translate into design decisions. However, this remains to be observed and further investigated. This includes the question of whether this knowledge reaches AI developers and, in a broader sense, whether it is embedded in design and product development principles. The production and application of AI are embedded in complex production chains and regulatory frameworks. It is therefore too short-sighted to consider only the social positioning and gender knowledge of developers. Instead, legislation and the resulting approval and auditing processes are expected to play a major role. Another key factor for the world of work is co-determination. In countries like Germany, employees often have a say when it comes to implementing new technologies. Here, too, the media discourses on AI and gender play a role, because they influence what knowledge the actors who decide on the use of AI in the world of work bring to the table.

As this study was intentionally limited to the analysis of texts that explicitly address gender, there is an obvious need for further research. This concerns, among other things, the analysis of other sections of discourse in which implicit, invisible, and unspoken gender dimensions are touched upon.

Technology, and thus AI, is negotiable. Our results have shown that critical AI research is itself performative, as it influences how AI is publicly framed. For a just and non-discriminatory AI that serves human needs, further research is worthwhile, including the active mediation of its findings into public discourses.