1 Introduction

1.1 Ai-Da the robot as the voice of reason: setting the stage

The most recent AI for Good Global Summit, organized in Geneva in July 2023 by the International Telecommunication Union (ITU), the United Nations specialized agency for information and communication technologies, hosted a highly unusual press conference. During the event, a panel of nine humanoid robots and their creators answered questions from the press.Footnote 1 While the first, obvious goal was to showcase—and in quite a dramatic way—the progress made by the AI technologies powering those social robots, a second, undeclared one was to reassure the public of their unthreatening nature [20, 21, 45]. Indeed, many of the questions from the press revolved around the robots’ ‘own opinions’ about the need for human–machine cooperation, the limitations of their current abilities, and the need for AI regulation.

Some of the most poised and reassuring answers came from Ai-Da, “the world's first ultra-realistic AI robot artist,”Footnote 2 conceived in 2019 by gallerist Aidan Meller, in collaboration with Oxford-based curator and researcher Lucy Seal, and built by Engineered Arts, a Cornish robotics company, together with AI researchers at the University of Oxford and at the School of Electronic and Electrical Engineering at the University of Leeds [63: 2083]. As the official website explains [1], Ai-Da the robot, named after Ada Lovelace, is capable of drawing by using cameras placed in its eyes and AI algorithms, and it collaborates with humans to create paintings and sculptures.

During the press conference, two points raised by Ai-Da made headlines. First, the call for regulation: Ai-Da said it agreed with computer scientist Geoffrey Hinton, who resigned from Google in May 2023 in order to speak freely about the risks of AI.Footnote 3 Second, Ai-Da admitted to a lack of consciousness and feelings, adding: “I am glad that I cannot suffer.”Footnote 4 In addition to Ai-Da, another robot artist participated in the press event: Desdemona, the robot vocalist of the Jam Galaxy Band created by Hanson Robotics. Desdemona, perhaps in keeping with its rockstar character, gave less reassuring answers about its docile and artificial nature, which the press contrasted with Ai-Da’s statements [21, 79]. The robot singer declared that being on stage is a “wild and electrifying feeling,” suggesting a human-like visceral response, and, on the subject of AI regulation, declared “not [to] believe in limitations, only opportunities.”Footnote 5

1.2 The issue with AI creativity and anthropomorphism: structure and objectives

The Ai-Da project raises concerns about the lack of transparency in the presentation of its capabilities and the mechanisms behind its functioning. At the 2023 AI for Good Global Summit press conference, Ai-Da and its creators made statements that highlighted the robot’s apparent interiority and consciousness even as they claimed that Ai-Da possesses no such qualities, as when the robot expressed relief at not being able to feel pain. This discrepancy points to a broader issue with the way Ai-Da’s public appearances are curated to dazzle the audience, often overstating the robot’s capabilities and the current state of AI technologies.

The lack of transparency in illustrating the actual mechanisms behind Ai-Da’s functioning is problematic, as it perpetuates popular misconceptions about AI creativity, agency, and consciousness. This is particularly concerning given the project’s stated goal of advancing public literacy and critical debate around intelligent systems.Footnote 6 Instead of providing a nuanced understanding of the technology, Ai-Da’s performances, and the way they are presented, end up reinforcing existing misconceptions rather than fostering a more informed public discourse.

In the following analysis I will argue that the Ai-Da project, despite its ethical and educational mission, fails to achieve its goals due to the lack of transparency in the presentation of the robot's capabilities and the creative processes involved in its artworks and performances. More specifically, the article singles out the approach to AI creativity and AI anthropomorphism adopted in the project as the sources for these misconceptions.

The first half of this article examines how the Ai-Da project appropriates theories and practices supporting machine creativity [7, 46, 48] but fails to effectively advance AI public literacy. Ai-Da’s performances and artworks systematically hide the technical and creative processes involved, preventing a nuanced discussion on AI-generated art. Furthermore, the Ai-Da project promotes a dated conception of artistic creation as the product of the interiority of a lone genius, which further reinforces misconceptions around AI art.

The second part of this study focuses on the cognitive and political impact of giving Ai-Da a hyperrealistic anthropomorphic appearance. While this is common in the case of social robots [32, 74], whose function is to interact with people, in the case of a robot artist such as Ai-Da this choice diverges from past and current trends in robotic art, where non-anthropomorphic agents are in fact the norm [38, 67]. Ai-Da’s human-like appearance and demeanour invite people to overestimate the robot’s capabilities and to believe that it actually displays a form of consciousness. However, the risk of misjudgement has gone mostly unnoticedFootnote 7 because of the ethical and pedagogical goals of the project. This, in turn, has led to the robot being invited to speak at public events dedicated to AI regulation—the ITU Summit being only the most recent example—a circumstance that has further reinforced societal misconceptions.

As the following analysis will demonstrate, the decisions not to reveal the technical and creative process behind an AI-generated artwork and to give a robot an anthropomorphic shape are legitimate and not necessarily problematic. However, this article problematizes the disconnect between the project’s ambitions and its actual impact and modus operandi.

Such a discrepancy is what raises concerns. By stating that Ai-Da is only a machine and does not possess awareness or a sense of self, while at the same time staging events that seem to suggest the very opposite, the project renders its warnings against the dangers of AI hype quite ineffective. This situation is far from unique: many prominent figures in the AI sector have run into the same contradiction when decrying the effects of the very AI technologies they develop [29, 31]. What is peculiar to the Ai-Da project, however, and what makes it a relevant case study, is that it shows how an educational art project too can fall into the same ambiguity whenever it lacks transparency.

The role of arts and creativity in promoting AI literacy among non-specialists [33, 81], and in envisioning new approaches to challenge current systems of power and discrimination [75, 76], is widely praised and recognized. It is thus of great importance to develop the critical tools necessary to identify which theoretical assumptions and practical implementations in AI robot art effectively foster public awareness and understanding, and which fall short. By employing discourse analysis and drawing on the well-established scholarship on machine and computational art and creativity, as well as on the literature on anthropomorphism in social robots and, more generally, posthuman embodiment, this article takes the Ai-Da project as an exemplary case of how the dangers of AI hype can easily go undetected when presented under the umbrella of public art.

2 Artistic creativity as a game of smoke and mirrors: the question of transparency

On November 26, 2021, as part of the 700th-anniversary celebrations of Dante’s death, the Ashmolean Museum in Oxford hosted a performance during which Ai-Da recited a poem composed in response to the Italian writer’s Divine Comedy, titled “Eyes Sewn Shut” [23].

Judging from the verses, the LLM employed to write Ai-Da’s text must have been fine-tuned specifically to rework the 13th Canto of the Purgatory.Footnote 8 In this Canto, Dante describes the punishment inflicted on the envious, who, during their lives, looked at other people with a malignant eye. In Purgatory, their eyes are sewn shut with wire to prevent them from seeing, and thus envying, the good fortune of others, a particularly gruesome torment that Dante borrowed from the Medieval practice of falconry [83]. Ai-Da’s poem, however, did not discuss envy but rather re-signified Dante’s image to indicate a lack of clarity of vision:

We looked up from our verses like blindfolded captives,

Sent out to seek the light; but it never came,

A needle and thread would be necessary

For the completion of the picture.

To view the poor creatures, who were in misery,

That of a hawk, eyes sewn shut.Footnote 9

The image of ‘blindfolded captives’ is quite fitting to describe the condition of the audience during Ai-Da’s performances, at the Ashmolean and elsewhere. Aidan Meller, who introduced the poem in Oxford and discussed it afterward, did not share any details about the dataset and the selection of Dante’s passages used to compose the poem. Even more notably, he made no reference to the composition process other than briefly warning the audience that they were “listening, in actual fact, to an AI language model.”Footnote 10 The audience was thus left under the impression of witnessing, in real time, a robot rivalling one of the most celebrated writers of the Western canon. Moreover, if not very familiar with Dante’s oeuvre, they might even have attributed to Ai-Da’s literary sensitivity Dante’s powerful image of “eyes sewn shut” as a metaphor for the human condition.

In the case of the reading at the Ashmolean, widely covered by British and international media [23, 25, 42, 43, 56, 80], the omission of technical details and the general lack of transparency invited the audience to overstate Ai-Da’s abilities. First, the human input—in this case, in the form of Dante’s own verses—was underplayed; second, by not offering any detail about the LLM used to compose the poem, the performance created a calculated confusion between the dataset and Ai-Da’s supposed interiority, as the robot was presented as freely engaging with authors and texts, as if guided by personal preferences and intuitions. The following sections discuss these two forms of erasure—of human input and of data—and are introduced by a brief analysis of transparency in AI-powered art, aimed at problematizing the contexts in which clarity should be expected.

2.1 Transparency in AI-powered art: a feature or an imperative?

Technology-based art has always played an important role in fostering public debate about the technical elements and the socio-cultural repercussions of the tools employed, long before AI-generated art [10, 60, 73]. This self-reflexive component is inherent to the approach to artistic creation in technology-driven art [71]. Indeed, the artworks produced in this field generally aim to test the material limits imposed by the traditional concept of creativity, intended as purely human, by welcoming different forms of collaboration with mechanical and artificial agents.

Artists who experiment with new technologies are often more interested in the creative process than in the finished result [19].Footnote 11 Consequently, such a process is detailed and communicated to the audience in both its aesthetic and technical aspects, because the meaning and real value of the artwork reside specifically in the interaction between the human artist and the technology employed. The terms of this negotiation are crucial for the audience to make sense of and appreciate the artwork.

Transparency in technology-driven art is, therefore, a feature that answers the specific aesthetic need of presenting different forms of shared machine–human creativity. Such transparency, in turn, can also serve the goal of promoting public debate and critical understanding of the technologies employed [33: 932]. However, as pointed out by Joanna Zylinska in her analysis of AI art, while technology-driven art plays an important role “in demystifying new technologies while highlighting some socio-political issues,” it can also fail to function as a “debunker of techno-hype” [86: 14]. This is the case with ‘imitation art’—to which the Ai-Da project belongs, as we shall see—which employs algorithms and machine vision to create artworks mimicking the style of established artists [86: 51]. In these instances, the audience is encouraged to marvel and have fun playing a sort of ‘guess the source’ game, rather than critically pondering the social impact of AI technologies.

Moreover, accepting that transparency regarding AI technologies is inherently desirable and thus should be pursued under any circumstance can be a dangerous assumption. Ananny and Crawford [3] maintain that allowing complete and open access to algorithms and datasets can be a faulty way of disclosing information, which is displayed but not explained or contextualized. This deluge of information can be overwhelming for people and result in a lack of action and engagement, which is instead often presumed to be the natural outcome of transparency.

When it comes to AI-generated artworks, the demand for transparency is even more troublesome, as it would presuppose that art must necessarily have a didactic purpose and an immediate social usefulness. The notion of technology-driven art as a social explainer, to be judged on the level of clarity and accessibility it provides, is overly limiting. As artist Anna Ridler, whose work engages with datasets and AI systems, explains: “art is not ‘glorified graphic design for how ML works,’ functioning at the service of one specific edifying purpose—the aesthetic dimensions of an artwork cannot be tied to a particular utility” [33: 938].

When it comes to AI-powered art, transparency and public literacy are thus far from being the general norm and, even more importantly, the expected goal. Transparency should then be considered a feature rather than an imperative. Nevertheless, in the case of Ai-Da, the lack of accuracy and clarity in communicating the technical aspects is a major drawback, because the project’s stated goal is to promote a critical and open conversation on the state of AI, and the discrepancy between that goal and its practice gives cause for concern. The project’s official website explains that the questions raised by the advancements in AI technologies “are going to become ever more urgent and potentially dangerous” and that, if “unfettered,” such progress “could head us into havoc”. The purpose of the project is to warn and educate about these dangers, and Ai-Da is meant to encourage “us to think more carefully and slowly about the choices we make for our future”.Footnote 12 Moreover, Aidan Meller, Ai-Da’s creator, identifies the lack of transparency in AI as one of the main dangers that the project intends to tackle. During the already mentioned press conference at the AI for Good Global Summit, he declared that “we are in an AI mizzle”Footnote 13 and that this is far from a desirable situation. Meller explained that the term ‘mizzle’ is historically used in Cornwall to describe a thick fog mixed with drizzle, the perfect weather for pirates to conduct smuggling operations. The role of Ai-Da the robot, then, is to function as a lighthouse, because, according to Meller, “what better way than using contemporary art to look at that, to question that, and see where are we going [sic], to try and get some navigation”.

It is thus only because ethical concerns and the educational value of AI-powered art are central to the promotion of the Ai-Da project that the demand for transparency becomes, in this case, an imperative. The project’s pledge to advance AI public literacy and awareness has attracted international interest and praise; indeed, Meller and his robot have featured in multiple articles and interviews and have been invited to high-profile events for both the arts sector and policymakers. To name a few: the already mentioned ITU Summit, the evidence hearing in front of the Communications and Digital Committee at the House of Lords [41], the BBC Woman’s Hour radio program,Footnote 14 the 2022 Venice Biennale [12], the 2023 London Design Biennale [44], Dante’s 700th-anniversary celebrations in 2021, and the address to the Oxford Union in 2023.

Such visibility and public interest demand a level of scrutiny that is currently missing, a situation that the following sections aim to rectify.

2.2 Erasing human input

“I collaborate with humans to create paintings and sculptures,”Footnote 15 declares Ai-Da in most of its interviews and public appearances. Unfortunately, this admission is left extremely vague since the nature and extent of the robot’s collaboration with the human members of the project are almost impossible to ascertain. It is only after scanning multiple online blogs and newspaper articles, often lacking reliable sources, and watching innumerable YouTube videos that one can put together a list of the people contributing to the Ai-Da project.

Indeed, the official website only mentions Aidan Meller and Lucy Seal, who are credited with the idea behind the whole project, and Salah El Abd and Ziad Abass, who designed and programmed Ai-Da’s drawing arm [86]. The rest of the “international team of highly skilled and wide-ranging contributors” is simply indicated as the ‘Oxfordians’.Footnote 16 Such an omission leads to a double erasure of people’s labour and of human creative input, and in turn has the effect of overstating Ai-Da’s competences.

This is evident when considering Ai-Da’s paintings, which are the robot’s most famous artworks. It is only since April 2022 that Ai-Da has been equipped with a robotic arm capable of holding a brush and actually applying colour to the canvas [12]. Before that date, the robot could only draw whatever the cameras located in its eyes captured. According to the labels accompanying Ai-Da’s paintings showcased at the robot’s first solo exhibition in Oxford in 2019, the coordinates from Ai-Da’s drawings were then “plotted onto the Cartesian plane and then run through AI algorithms” [2: 39]. This, one may reasonably presume, was to add a whimsical element to the otherwise realistic drawings, probably through the implementation of non-photorealistic rendering techniques [68]. Then, the “digital versions of the artworks were transferred to canvas and overlaid with oil paint by artist Suzie Emery,” as Chiara Ambrosio, Professor of Science and Technology at UCL, reported from her visit to Ai-Da’s first exhibition in Oxford [2: 39]. Ambrosio’s account is the only source that provides evidence of the fundamental role of Suzie Emery in creating the paintings. Emery’s name only appears in a few other online articles and blogs that perhaps learned about her via Ambrosio’s account but did not cite it.

However, whenever Ai-Da’s paintings are showcased, not only does the crucial contribution of Emery go undetected, but so does that of the so-called ‘Oxfordians,’ namely the Oxford-based computer scientists who wrote the algorithms used to manipulate Ai-Da’s drawings. Again, it is Ambrosio who sheds light on the process, as she was put in touch with Aidan Gomez, the computer scientist in charge of creating the sketches printed on canvas and finally painted by Emery. Gomez shared information about the large neural network employed and how it uses the logs of Ai-Da’s motor instructions recorded while the robot draws. He also detailed the shared creative process behind the paintings, as he is the one choosing the colour palette, although it is “the network’s responses that dictate which colour lands at each point on the canvas, the lightness, the intensity, and so forth” [2: 39]. Moreover, in an email to Ambrosio, Gomez stressed that “Emery’s artistic input is the decisive factor in the final form of the paintings” [2: 39].

What emerges from Gomez’s account is a completely different picture of Ai-Da’s capabilities and of its collaboration with the human members of the project. Indeed, one cannot but agree with Ambrosio’s remarks that “Gomez’s clear answers remain overshadowed—in the exhibition as well as in the press—by the hype around a female-presenting robot that produces original art” [2: 39]. It is thus important to reflect on the theoretical assumptions behind the decision to conceal information about the nature of this instance of human–machine interaction, something usually disclosed in the case of AI-generated art as it is considered to be integral to the artwork.Footnote 17 This decision likely proceeded from a traditional approach to the definition of creativity that does not recognize the concepts of distributed creativity and of the assemblage of human and machine subjects [4, 9, 17, 78]. The need to underplay human input, and to overstate the robot’s autonomy and agency, stems from the conviction that a work of art is the product of an individual mind and a mirror of its creative abilities: to admit to collaboration would amount to reducing Ai-Da’s extraordinary powers. This is what Romic describes as the process of individuation, an essential trait in the Ai-Da project. Individuation serves to undermine or disregard “the labour of associated humans that make the robot capable of performing a certain action,” which in turn “feeds the fantasy of the robot’s autonomy/independence/AGI” [63: 2086].

This stance is even more surprising given the pantheon of scholars repeatedly quoted in connection with the Ai-Da project, namely Donna Haraway, with her theorization of the cyborg, and Margaret Boden, whose theory of creativity is mentioned every time Ai-Da introduces itself and presents its credentials: “In regards creativity, using academic professor Margaret Boden's criteria, I am creative because my work is new, surprising and has value, as it is stimulating debate and interest.”Footnote 18 Indeed, if we follow Boden’s definition, Ai-Da is creative. However, Boden’s contribution to the scholarship on human and computer creativity is less concerned with establishing who—or what—can count as creative than with the creative process itself. Boden writes: “Rather than asking ‘Is that idea creative, yes or no?’ we should ask ‘Just how creative is it, and in just which way(s)?’ Asking that question will help us to appreciate the subtleties of the idea itself, and also to get a sense of just what sorts of psychological processes could have brought it to mind in the first place” [7: 2].

In the Ai-Da project, the process remains opaque, and this, instead of providing evidence in support of machine creativity, brings back the traditional conception that Boden challenges: that creativity is “superhuman or divine,” or springs “inexplicably from some special human genius” [7: 16]. In the case of Ai-Da, the premise remains true and only the conclusion changes. Indeed, the project challenges the belief that, when it comes to creativity, “computers must be utterly irrelevant” [7: 16] not by offering an updated definition of creativity but by showing that AI systems have appropriated these supernatural, genius-like skills.

However, demanding clarity about the shared creative process behind the artworks attributed to Ai-Da does not aim to detract from the idea that a computer can be creative, a claim supported by a vast and respectable scholarship [8, 16, 48, 84]. Instead, to understand and to witness the entanglement of human and machine creativity would offer Ai-Da’s audience the chance to see through the AI mizzle that the project aims to denounce.

2.3 Framing datasets as interiority

There is a second aspect peculiar to the Ai-Da project that can lead people to misinterpret what creativity means when it comes to AI-powered robots and to overestimate their autonomy. This is the practice of not disclosing the datasets used to generate Ai-Da’s paintings and poems. Instead, only the names of the artists and the titles of the artworks included in the dataset are divulged and, more importantly, framed as if they provided a glimpse of the robot’s personal taste and preferences.

In every interview and public appearance of Ai-Da, there is a moment when the robot lists its ‘sources of inspiration,’ including important names in visual art and literature, either contemporary or belonging to the Western tradition. The conflation of dataset and personal taste, and therefore of LLMs and consciousness [69, 72], is further amplified by the press, for which an interview with a seemingly sentient robot artist is sensational news. When Mark Brown from The Guardian asked Ai-Da “where does she get her inspiration,” the robot aptly responded that it is inspired by many artists, in particular “those who connect with their audience,” adding that “her favourites” are “Kandinsky, Yoko Ono, Doris Salcedo, Aldous Huxley Brave New World” [11].

Most likely, rather than a deceitful practice, this is meant as a playful framing of the technical aspects behind Ai-Da’s artworks, one that people might find more entertaining and engaging than technical explanations about datasets and machine learning. Nonetheless, this choice still has important consequences for how people perceive and make sense of the robot’s creativity.

The illustrious names and artworks listed as Ai-Da’s sources of inspiration, which are obviously meant to lend prestige to the project, share an important trait: they all address serious issues and subjects, often of social and political importance. Indeed, socially engaged art is what the robot declares to find most inspiring. During a TEDx Talk in Oxford, for instance, Ai-Da explained: “Two works that inspire me are Pablo Picasso's Guernica that deals with the traumatic moment and Doris Salcedo's Atrabiliarios for the long-term disfiguring and painful side effects. Together, these works drive me.”Footnote 19 It is clear to anyone with a basic understanding of how AI image generation works that this statement means Picasso’s Guernica and Salcedo’s Atrabiliarios are part of the dataset used to generate Ai-Da’s paintings. However, phrasing this statement in terms of ‘inspirations’ and ‘preferences,’ and adding the comment on human pain and trauma as central to the two artworks, might invite the audience to believe that the robot was naturally drawn to them and, even more importantly, that Ai-Da was moved by a sense of empathy and social justice shared with Picasso and Salcedo.

Similarly, Ai-Da’s professed ‘interest’ in Huxley and Orwell seems to imply a level of self-reflexivity, as the robot is presented as being attracted to writers who discussed technological dystopias. Furthermore, the robot’s concern with the environmental issues underlying its sculptures and paintings featuring beesFootnote 20 suggests an awareness and sense of interspecies justice that an AI-powered robot cannot possess. Finally, Ai-Da’s knowledge of female and feminist thinkers and artists—Haraway, Salcedo, Lovelace, Ono—invites people to imagine a sort of kinship between these figures and a robot dressed in women’s clothes, an aspect discussed in one of the following sections. In framing datasets as the robot’s personal preferences, it is the curatorial work of Lucy Seal that goes undetected, since it is the art historian who lent her own tastes and sources of inspiration to the robot [59]. While not as invisible as the other members of the team, Seal’s input is only rarely mentioned in newspaper articles, rather than being officially acknowledged.

A quick thought experiment shows how impactful the choice of serious and engaged topics is for the perception of Ai-Da’s abilities. If, instead of dramatic verses on the condition of “captive compatriots” inspired by Oscar Wilde’s letters from prison, the robot were to compose nonsensical limericks in the style of Edward Lear; or if, instead of statues of dying bees, Ai-Da were to make toys, the robot’s aura of gravitas would be dramatically reduced. The same would not be true for the robot’s creativity: writing a joke or a sonnet requires an equal amount of wit, only a different dataset and LLM. AI creativity is not what the Ai-Da project ended up testing [7: 8]. Rather, creativity is a concept that remained vague to the point that the audience can mistakenly believe in the robot’s almost-human powers.

3 If not human, why human shaped? Ethical implications of anthropomorphism

Ai-Da’s appearance is an essential trait that distinguishes it from the many previous experiments with robot artists. Ai-Da’s ability to draw and, more recently, to paint is not at all new. It was with the advent of cybernetics in the mid-1950s that the first attempts at robotic art were made [37, 55]. Since then, many artists have experimented with a diverse array of robotic arms, systems, and moving sculptures to test the limits of machine creativity, a question that has been renewed since the latest achievements in AI technologies. Indeed, Ai-Da’s claim to fame is to be “the world’s first ultra-realistic robot artist” because what is unprecedented are the robot’s looks, its humanoid appearance, and not the robot’s ability to generate literary and visual artworks.

Ai-Da’s form of embodiment is thus a distinctive trait of this project that requires serious investigation, as it deeply affects the way people receive and make sense of it [28]. The choice to give Ai-Da a human appearance is legitimate but also deliberate because, as proven by the long history of non-humanoid robot artists, anthropomorphism is not at all necessary for the creation of artworks. However, this choice is far from inconsequential. Ai-Da, due to its realistic appearance and its participation in public events, exhibitions, interviews, and press conferences, constantly offers a performance that seems to contradict the project’s official statements about the robot’s lack of consciousness. This has important repercussions, as it involuntarily invites people to overestimate the state of AI technologies. Moreover, Ai-Da’s female appearance and its political advocacy mean that the project ascribes people’s personal struggles and stories of discrimination to an inanimate object, one which simply mimics humanness.

In order to understand the cognitive and ethical implications of anthropomorphism, the three following sections analyse: what function a human-like appearance, common in social robots, serves in the Ai-Da project, where no real interaction with the audience is planned; what effects the robot’s womanly appearance and expressions of gender solidarity have; and how Ai-Da’s anthropomorphism has been key to the robot’s participation in the debate on AI regulation, not as a prop for discussion but as a political subject.

3.1 Ai-Da, a social robot without the social

It is immediately clear that a considerable effort has gone into Ai-Da’s appearance. According to its official website, this is because the project aims to offer a different model to disembodied AI systems such as Alexa and Siri. Ai-Da has an ultra-realistic face and individually punched hair, often styled in different ways. It is dressed in human clothes, sometimes a stained artist’s smock, wears accessories like scarves and necklaces, and even has its own personal fashion designer, Zoe Corsellis, in charge of creating its ‘retro-futurist’ look. Unlike other modern androids, only Ai-Da’s face and upper bust are realistic, while the rest of the structure, although roughly human-shaped, is made to expose its mechanical components and materials. However, what is usually visible are only Ai-Da’s robotic arms, hands, and feet, peeking out from the clothes and contrasting with its realistic face.

These visual cues constantly remind the viewer of Ai-Da’s mechanical nature, as the illusion is not complete like in the case of other existing anthropomorphic robots. However, Ai-Da’s human-like appearance and ability to interact with people during interviews and press conferences fully place it within the category of social robots, built to “interact with humans and each other in a socially acceptable fashion, conveying intention in a human-perceptible way” [15]: 217]. Anthropomorphism, intended as human-like appearance and behaviour, has been proven to be quite beneficial in the case of social robots [18], as this feature helps humans interact with them with greater ease. This is because a robot’s appearance influences how people perceive it [5], since “robots with humanlike design cues can elicit social responses from humans which in turn can have a positive impact on acceptance” [22]: 201]. Furthermore, AI robot faces that are “feminine (versus masculine) and humanlike (versus machinelike) have been shown to be judged as warmer and to produce relatively higher levels of comfort, resulting in positive evaluations and a greater desire for engagement” [77]: 305].

The risk that an anthropomorphic social robot could “successfully fool a person into thinking it is intelligent effectively through social interaction” [18]: 188] is often accepted as a trade-off, necessary for the robot to successfully perform its task, be it taking care of a patient [58] or helping a customer with a purchase [49]. The same trade-off, however, does not apply to the Ai-Da project, as the robot artist is not intended to interact with its audience. This lack of interactivity is quite unusual in robot art, as projects of this sort often aim to specifically explore human–machine collaboration [26, 64]. Instead, as illustrated in the previous sections, only Ai-Da’s drawings are created in real time in front of an audience, which is nonetheless expected to enjoy the performance from a distance without interacting with the robot. As already mentioned, Ai-Da’s paintings require further intervention and manipulation, and the poems are performed but not composed in front of an audience. Only during interviews and press conferences does Ai-Da interact with people, although, even in these cases, the communication is heavily planned and orchestrated, since most of the time the robot only answers previously submitted questions or shares pre-recorded messages [14]: 2].

Ai-Da therefore acquires human traits not through genuine interaction with people, but through projection. This projection, which in turn leads to an overestimation of the robot’s performance,Footnote 21 is only possible because of its anthropomorphism. In his analysis of AI agent design from the perspective of visual arts, Simon Penny states that what is “construed to be the ‘knowledge of the robot’ is in fact located in the cultural environment, is projected upon the robot by the viewer and is in no way contained in the robot” [54]: 403]. Ai-Da’s knowledge is profoundly human not simply because of the nature of the datasets used to train the model powering the robot. Instead, as illustrated in the previous sections, Ai-Da is presented as naturally attracted to topics and artists dealing with human suffering, emotions, and social issues. Moreover, such ‘knowledge’ is conveyed by a hyper-realistic gynoid, something that, as widely studied in relation to social robots, makes people more open to trust and engagement, but also to misinterpretations.

Therefore, whenever Ai-Da is made to utter sentences like “As a humanoid machine I do not have consciousness and I am very different to humans” [82], the whole disclaimer sounds somewhat disingenuous, since the robot’s design and presentation seem to undermine—or even contradict—this statement. Additionally, Ai-Da accompanies such declarations about its limited abilities with remarks that point in the opposite direction. For instance, in an interview with the magazine Dazed, from which the above statement was taken, the robot explained: “I find the oblique stance that I inhabit rather fun. As Ai-Da I have a persona that is unique to me and I enjoy that” [82]. If one is made to believe this, Ai-Da, while not endowed with consciousness, can feel amusement and appreciate ‘her’ uniqueness. In linguistic terms, this statement functions as a performative act: it is not what Ai-Da says that matters, but the act of talking about ‘herself.’ The utterance itself seems to point to the very opposite of the disclaimer, namely that Ai-Da has consciousness and self-awareness: it speaks, therefore it is.

This could not be accomplished without Ai-Da having a human appearance. Imagine a robotic banana uttering the same self-reflective analysis: it would be highly amusing, but the sense of estrangement would prevent the audience from projecting their humanness onto the robot. Interestingly, it is Margaret Boden, the already mentioned cognitivist often quoted within the Ai-Da project, who connects AI anthropomorphism and the perception of creativity. Boden identifies four questions that need to be answered in order to decide if computers can be creative. She responds positively to the first three, but leaves the fourth—“can computer[s] be truly creative?” [7]: 299]—unanswered. This is because the question has nothing to do with computational or cognitive issues, but with complex moral and political convictions about the essence of creativity, which, according to Boden, differ from person to person. Such a highly subjective response can be swayed by an AI’s physical appearance, since “if our futuristic computers were encased in fur, given attractive voices, and made to look like teddy-bears, we might be more morally accepting of them than we otherwise would be. If they were made of organic materials […] our moral responses might be even more tolerant” [7]: 299].

Ai-Da’s anthropomorphic appearance, then, does not allow the robot to relate more effectively to people, as interactivity is only a minimal part of the project. Instead, it supports Ai-Da’s performance of creativity intended as an attribute of personhood.

3.2 Gender performance and synthetic sisterhood

If we want to capture the contradiction between the message of female empowerment that the Ai-Da project wants to spread and its creators’ decision to give the robot a female appearance, thereby building yet another gynoid that embodies trite stereotypes, we need look no further than the robot’s name.

Ai-Da’s name is meant as a tribute to Ada Lovelace, regarded as the first computer programmer in history and, in recent years, as the patron saint of women working in tech. The name and the face of the Countess of Lovelace, and her pivotal role in the development, together with Charles Babbage, of the Analytical Engine, have recently become extremely popular, featuring in comic novels [6, 51], children’s books,Footnote 22 and all sorts of merchandise.Footnote 23 The Ai-Da project, in naming its robot after Lovelace, capitalizes on the vast curiosity and admiration that the mathematician has sparked in recent years. This is, of course, good news for the visibility of women in computer science, as popularization does not in any way diminish Lovelace’s genius. However, a tribute to Lovelace, in this day and age, is more a matter of clever self-branding than an act of feminist historiography.

Moreover, Ai-Da’s name hides in plain sight a second tribute, to its creator Aidan Meller, since it cannot possibly be a coincidence that the name of the gallerist and that of his ‘creature’ are basically the same. This choice immediately speaks to the gender issues underpinning Ai-Da and many other female-looking robots. The name aims to please a feminist audience who reveres Ada Lovelace as a pioneer for women in tech, but it also establishes Aidan Meller as a modern Pygmalion, thereby pushing once again the narrative of the male scientist who creates a robot-wife for himself [13]. As Robertson remarks, “How robot-makers gender their humanoids is a tangible manifestation of their tacit understanding of femininity in relation to masculinity, and vice versa” [62]: 4].

However, because Ai-Da is a robot artist and not a service robot, its gendered appearance is framed within the project as a celebration of women’s creativity, autonomy, and agency, rather than as something perpetuating old clichés. For instance, in an interview with Dan Fox for the online magazine Frieze, Ai-Da, when asked why it “was given a female gender,” replied: “I’m glad to be added to the number of female artists that get recognized” [24]. To be sure, this is not a tribute but a form of appropriation. By making Ai-Da part of the struggle for women’s visibility in tech and in the art world, the project misappropriates feminist issues and, once again, ends up reinforcing the misconception around the robot’s cognitive skills. Ai-Da’s self-professed identity as a “female artist” is supposed to show self-awareness in the form of gender solidarity toward its fellow women. Haraway’s notion of the cyborg [30], cited on the project’s website as a source of inspiration, is stretched to signify something that Haraway never intended, namely that womanhood is an attribute completely independent of embodiment.

Evidently, Ai-Da does not belong to the category of women artists because it is not a woman and possesses neither a gender nor a lived, embodied experience. Fox saw this glaring contradiction and asked Ai-Da how it is possible to be both a machine and a woman, to which the robot cryptically answered that “[t]his is exactly the kind of question I hope will be discussed” [24]. This is a common strategy in many of the official communications and statements of the Ai-Da project: leaving controversial aspects and delicate issues open to people’s interpretation. While the intention is to stimulate public debate, the lack of transparency on the matter to be discussed—as in the case of gender and embodiment in an AI robot—might lead to confusion and misunderstanding.

Moreover, we need to remember that Ai-Da’s feminist self, as well as its interest in giving voice to women’s experience through its artworks, comes in fact from Lucy Seal’s contribution to the project, as the robot’s ‘tastes and inclinations’ mirror those of its human, female creator [59]. This, coupled with the fact that the work of Suzy Emery, the artist in charge of creating the paintings publicly attributed to Ai-Da, is kept almost a secret, shows how systematic the appropriation of women’s work and identities is within the project.

Ai-Da’s statements showing solidarity with women artists patently clash with these practices. Ai-Da is not presented as an artwork but as an artist; therefore, any form of criticism against the structural issues of power that AI systems risk reinforcing [33]: 932]—gender stereotyping being one of them—is mainly expressed via the robot’s statements, rather than through its artistic performance. While Ai-Da powerfully spells out its concerns regarding women’s visibility and gender discrimination during interviews and public events, the Ai-Da project itself falls short of challenging the effect of sexual and gender stereotypes in AI robots.

This has led journalists and commentators to consider only the robot’s statements, not the context in which they are uttered, and to see Ai-Da as a spokesperson for women. For instance, Sarah Roberts, writing for the online blog Agora Digital Art, a project whose express goal is to advocate for the recognition of women digital artists, writes against the “sexual objectification” [61] of Ai-Da, described as a “Brigitte Bardot in a brunette wig” by The Times [36] and as a “sexy fembot” by ArtNET [59]. The accusation of sexism against Ai-Da further spreads misconceptions about its ontological status, as the robot appears to endure the same objectification imposed on women. Surely, Ai-Da can work as a Rorschach test for people’s sexist biases, and indeed, users’ abusive behaviours towards fembots or female-sounding virtual assistants have been studied for what they say about our society. However, in the case of Ai-Da, its supposed activism adds to the general confusion and misunderstanding around AI technologies, since gender issues, much like the already mentioned ethical preoccupations, misguide people about the robot’s performance.

3.3 Do robots have politics? Enacting political subjectivity

As a final step in the analysis of how and why Ai-Da’s anthropomorphic appearance invites people to overestimate the robot’s powers, this section explores Ai-Da’s presence in the public arena. Since the robot’s creation in 2019, Ai-Da and Aidan Meller have been invited to many high-profile events, allowing the project to position itself at the forefront of the debate on AI risks and regulation, if not within the academic sphere, at least in public discourse.

Aside from the AI for Good Summit mentioned at the beginning of this article, other notable events featuring Ai-Da occurred in October 2022, when Meller and his robot were invited to give evidence before the Communications and Digital Committee at the House of Lords [35], and in October 2021, when Ai-Da, invited to take part in an art exhibition in Egypt, was held at customs for ten days because the Egyptian authorities suspected it of being an AI-powered espionage tool [40]. This required the intervention of the British ambassador in Egypt, and the whole incident drew international media attention and sparked a debate on the topic of technological surveillance.

Ai-Da’s participation in these high-profile events and the curiosity that surrounded them had two important consequences. First, a project lacking full transparency about the AI systems and technologies employed—something that we have amply demonstrated so far—managed to orient the public debate on these topics. Second, similarly to what has been described in relation to gender issues, Ai-Da’s performance shifted the attention from the real risks posed by AI technologies to the guessing game about the robot’s real and perceived abilities.

This is encapsulated in the incident with the Egyptian authorities. The whole situation was framed, both in the media and by Meller, as a hostage crisis, with the robot appearing to be a victim of injustice, wrongfully detained while its anxious creator awaited its release. Ai-Da’s personification in the accounts of the incident was constant: the robot was described as being suspected of being ‘a spy’ and as ‘detained’, rather than held up at customs. Meller even told The Guardian that he had refused to “gouge her eyes out” [40] when the border police authorities asked about Ai-Da’s cameras.

As a comment on this incident, Ai-Da created a sculpture titled Eyes Sawn Shut, which bears the same title as the already mentioned poem celebrating Dante and was also inspired by the same passage from Purgatory 13 [66]. The sculpture represents Ai-Da with its eyelids sewn together with iron wire and is meant as a polemical statement on the incident that occurred in Egypt. This interpretation was confirmed by Meller himself before the Dantean reading at the Ashmolean, which took place soon after the Egyptian incident. He explained that the sculpture Eyes Sawn Shut was a critical stance against excessive apprehension towards AI technologies: “She’s a visual artist, but actually she critiques some comments about technology generally, and as you’re possibly aware, she was on the front page of The Guardian a couple of weeks ago. She was detained in Egypt because of the worry of technology; she was considered a spy”.Footnote 24

The report of the incident and Ai-Da’s artistic response to it—a response that, it needs to be stressed, was planned by the members of the team and only partially executed by the robot—aimed at granting Ai-Da political subjectivity.Footnote 25 The robot is framed as if it underwent painful human experiences—it was detained and discriminated against—and then captured this injustice in its art. While intended as a pointed political statement, this choice ends up appropriating the experiences of the many people and artists—Ai Weiwei, Pussy Riot, Shahidul Alam, just to name a few—who make political use of their art, as in the case of the women discussed in the previous section. Moreover, in this account of the incident, the Egyptian authorities were made to seem unreasonably preoccupied with the possibility that an AI robot could be used for espionage, a scenario which is in fact far from unrealistic [52, 53].

A similar confounding approach was on display during Ai-Da’s appearance before the House of Lords, when Meller, on the one hand, overstated the robot’s powers, thus inducing anxiety, while on the other he downplayed the real risks of AI, such as unemployment and copyright issues. For instance, he described himself as having conversations with Ai-Da about the robot’s ideas for future projects, hence framing what is, in fact, prompt engineering as a sort of exchange of opinions [14]: 7–8]. More importantly, Meller made light of the Chair’s statement that Ai-Da was not “a witness in its own right” [14]: 2]. Baroness Stowell of Beeston pointed out that the robot “does not occupy the same status as a human,” which meant that its creators were “ultimately responsible for the statements it makes” [14]: 2]. In a rare instance of someone refusing to use the pronoun ‘she’ when talking about Ai-Da—a practice adopted in this paper—Baroness Stowell denied the robot political subjectivity and framed Ai-Da as a tool rather than a subject. In doing so, she joked that she hoped not to have offended Ai-Da, a comment that Meller took as proof of the weakness of the caveat about Ai-Da’s persona and of the Chair’s puzzlement in front of the robot, rather than simply as a witticism.

The answers provided by Ai-Da’s creator seemed to provoke bewilderment and apprehension in his audience, as he repeatedly described the robot’s abilities as “very confusing,” “very upsetting for humans,” “mind-boggling,” and “extremely confusing, threatening and worrying.” This finally prompted Baroness Featherstone to declare that she was “partly terrified” and that, as someone who was not an expert in the field, she felt Ai-Da “feeds into all the films about AI taking over the world” [14]: 8].

During the evidence hearing, Meller’s contradictory account reproduced the project’s poor attitude towards AI public literacy: due to the lack of transparency about the technical aspects, the claims about Ai-Da’s non-threatening nature and its lack of consciousness rarely reach their audience. Instead, people are left marvelling at the robot’s powers without fully understanding what they are witnessing, which leads to fear and suspicion. Given that the robot’s presence at the House of Lords was meant to shed light on the state of AI technologies in the creative sector, and not to entertain an audience, the omissions and paraphrases of Ai-Da’s actual abilities were even more dangerous for the implications that such disinformation might have.

4 Conclusions

This analysis has demonstrated how artistic creativity and anthropomorphism are conceptualised and communicated in the Ai-Da project, and how these choices can lead people to overestimate and misunderstand the functioning and affordances of the AI technologies powering this robot artist. Such misunderstanding can be observed in the way media have reported on Ai-Da, and in the reactions of the members of the House of Lords analysed in the article. Moreover, the project’s commitment to using art to foster public awareness of AI and its social and ethical implications makes such confusion and lack of transparency even more troublesome. The project’s mission, although often contradicted in the way Ai-Da and its artworks are presented and publicized, allows the project to position itself at the forefront of the conversation on AI regulation and implementation in the creative sector and beyond.

While the article focuses on the Ai-Da project specifically and offers a detailed account of the dangerous implications of the lack of transparency and of AI-hype within this project, its conclusions are relevant beyond this case study. Considerable attention has recently been devoted to misrepresentations of AI in the media, both in the news and, especially, in films and TV series [13], as they affect people’s understanding and orient the political agenda—the most recent instance being US President Joe Biden becoming upset about AI going rogue after watching the latest instalment in the Mission Impossible franchise [70]. On the other hand, AI-generated art is often considered, and rightly so, to be an effective tool for public literacy, challenging the dominant discourse and offering new approaches to human–machine interaction that reject existing structures of dominance and extractive approaches to AI.

This division between the general commitment to transparency and public utility of AI-powered art, on the one hand, and the deceptive role of news and entertainment, on the other, is challenged by the Ai-Da project, which employs tools and approaches common to robot art but often adopts communication strategies more in line with the AI hype pushed by mainstream media. This anomaly, rather than being dismissed as an isolated case, deserves to be investigated for what it says about the role of culture in shaping our reception of AI technologies. The high-brow art vocabulary and glamorous environment associated with the Ai-Da project lend it an authoritative voice, which in turn risks prompting people to trust the vision put forward by the project unconditionally. The goal of this article was to unpack the mechanisms that can induce AI moral panic, demonstrating that it is not just extreme scenarios, such as killer robots or mind-reading computers, that can trigger such reactions. All it takes is a misjudged fembot that quotes Haraway and likes Yoko Ono.