In its strictest sense, the term ‘anthropomorphism’ refers to a type of bias or an error that entails the tendency to attribute human-like characteristics, such as intuition, emotions, and appearance, to objects or animals (Dacey 2017). Recent literature describes different approaches towards anthropomorphism, ranging from (1) perceiving AIEMs as mere tools, to (2) embracing them as humanlike agents, to (3) a third position that reconciles the previous extremes by focusing on AIEMs as cognitive systems jointly formed in the business-society nexus (2019). In the following, each approach is discussed in further detail.
AIEMs as tools
This approach perceives AIEMs as tools or instruments created to fulfill human purposes (2019). When a robot is perceived as a tool, it is usually not designed to adapt to changes in the world; it has the purpose of performing either limited or specialized functions (Hauser et al. 2021). From this perspective, recent research underlines anthropomorphism as a crucial factor in people’s willingness to adopt, use, and form positive or negative attitudes towards AIEMs (Li and Suh 2021). It has been shown that consumers prefer robotic systems featuring humanlike characteristics and feelings, such as humor or empathy, over systems with equal capacities but lacking human likeness (Rzepka and Berger 2018). AIEMs equipped with human characteristics increase trust, reduce stress, and foster likeability, and thus increase their adoption and use (Paiva et al. 2017). Moreover, if such robotic systems make mistakes, consumers are more likely to forgive them than non-anthropomorphized systems (Yam et al. 2021). However, anthropomorphism can also lead to negative attitudes and a refusal of AIEMs (Rzepka and Berger 2018; Kim et al. 2019a; Gursoy et al. 2019). A highly anthropomorphic appearance can be perceived as a threat to human identity, making the robot appear as a source of danger (Lu et al. 2019). Consequently, the instrumental approach toward anthropomorphism strives to overcome such challenges by augmenting AIEMs’ utility (Lu et al. 2019). This instrumental focus on the question of how best to fulfill the desired purpose of AIEMs has been criticized for not sufficiently accounting for the broader societal embedding: scientists “know that the robot is just a tool, but nevertheless when we interact with the robot our psychology (the psychology of users) leads us to perceive the robot as a kind of person” (Coeckelbergh 2021, p. 3). Consequently, treating AIEMs as instrumental tools overlooks the unintended outcomes that naturally evolve with human–machine interactions and the societal embeddedness of humanlike AIEMs.
AIEMs as humanlike agents
The second approach towards anthropomorphism is characterized by the objective of producing a kind of human replica (Giger et al. 2019, p. 112). This involves embracing robots as quasi-persons and “others,” which is to say that social robots should be part of the network of humans and nonhumans (Coeckelbergh 2021). AIEMs are viewed as humanlike agents that may adapt to social situations independently. Therefore, while in the first approach humanization is considered a means to best fulfill the AIEM’s specific design purpose, in the second case the replication of human interaction is at the center of anthropomorphization. This approach entails a much broader understanding of AIEMs that goes far beyond the previously described instrumental perspective of the AIEM as a tool or thing in contrast to humans. Quite the contrary, perceiving AIEMs as humanlike agents stretches the boundaries between the human and the nonhuman, deconstructing the conception of humanness in light of post- and transhuman futures (Nath and Manna 2021; Baelo-Allué and Calvo-Pascual 2021; Sorgner 2022; Hofkirchner and Kreowski 2021). Thus, conceptions of posthumanism and transhumanism provide a wider perspective on human-technology evolution, where anthropomorphization follows the idea of making AIEMs increasingly humanlike, including them as social actors in all societal spheres (Hofkirchner and Kreowski 2021). However, conceptions of AIEMs as humanlike agents often overlook the fact that human designers remain decisive, raising doubt about whether robots may ultimately become others or nonhumans (Nath and Manna 2021). Therefore, while the first view tends to overlook the relation between humans and technology, the second is limited by the fact that, because of their human origin, AIEMs may never become completely external beings.
AIEMs as joint cognitive systems
Going beyond the previously depicted approaches, a third perspective strives to reconcile the previous extremes. In this vein, AIEMs are perceived as joint cognitive systems, treating them as part of the social nexus. Different from the views introduced before, the focus is set on the relation between robots and humans and on their social embedding, relating to the notion of AI and society. This third approach allows for a critical view of anthropomorphization. Coeckelbergh (2021) recently outlined five elements characterizing this approach towards anthropomorphization: (a) The first characteristic regards the fact that robots are designed by humans. They will never be totally other, since they contribute to shaping our goals, which makes them already part of our social sphere with no need to bring them into it. (b) The second characteristic regards robots' linguistic and social construction. Indeed, “humans do not only materially create robots but also (during development, use, and interaction) “construct” them by means of language and in social relations, which must be presupposed when we think about these robots and interact with them” (Coeckelbergh 2021, p. 6). As Giger et al. (2019) explain, when a robot is anthropomorphized, physical characteristics, like gender and race, are attributed to it. In this way, the meaning of the AIEM is co-shaped (Coeckelbergh 2021), underlining its profound embedding in the socio-cultural environment: “By giving it a particular name, users may also tap into an entire culture of naming and gendering” (Coeckelbergh 2021, p. 7). (c) The third characteristic regards another aspect of relationality: AIEMs’ embeddedness in cultural wholes. Indeed, they are related to our “social practices and systems of meaning,” with the crucial point being that robots actually contribute to our meaning-making, and this is the case not only for social robots (Coeckelbergh 2021). (d) The fourth characteristic regards the lack of hermeneutic control: the meaning-making process is not always under complete control. Just as engagement among humans generates unintended meaning, interactions with robots may also lead to unintended meaning generation. (e) The fifth characteristic, power, relates to social robots interacting with us and generating meaning (Coeckelbergh 2021). This has social and political effects, because behind each robot lies a company. Although this is not always the case, anthropomorphization may involve underlying manipulation or exploitation (Hauser et al. 2021). This paper particularly builds on the corporate power characteristic to discuss the anthropomorphization of robots in social relations.
Robots as powerful instruments-in-relation: corporate marketing and the notion of greenwashing
This article addresses the relational interaction between human actors and AI in light of powerful commercial interests underlying the staging of AIEMs in advertisements, marketing, and corporate communication (Tollon and Naidoo 2021). It thereby builds particularly on the power characteristic of Coeckelbergh’s (2021) conception of anthropomorphization. What remains hidden behind the robot’s mask, or its performance as other or friend, are the actual capacities of the robot and the broader corporate power relations (Coeckelbergh 2021; Parviainen and Coeckelbergh 2020). Thus, behind the marketing veil, corporate interests tap into people’s psychological biases when presenting AIEMs in advertisements and social media campaigns. Asymmetry of information is at the core of this phenomenon, since only one party, the corporation, has complete awareness of the state of reality. Connelly et al. (2011, p. 42) suggest two crucial asymmetry dimensions in this regard: “information about quality and information about intent.” The first dimension relates to information asymmetry in which an observer lacks full awareness of the other party’s quality, as in the case of corporate stakeholders being unaware of modern robots' actual capabilities (Connelly et al. 2011). The second dimension deals with an observer’s concern about the other party’s behavioral intentions, which, in the case of robots, regards companies’ use of anthropomorphization as a means of exploiting people’s psychological biases (Coeckelbergh 2021).
Corporations may intentionally or unintentionally create unrealistic perceptions of robotic capabilities. This may involve designing and presenting robots that closely resemble humans and create the impression of a “friend or other” equipped with artificial general intelligence (AGI). “[A]s AI technology becomes more sophisticated, this illusion of intelligence will become increasingly convincing” (Shanahan 2015). Although the latest generation of robots may feature some form of AI, they still represent mere machines or tools “that can perform specific, often highly limited or specialized, functions” (Murphy 2019, p. 20). As Shanahan (2015) states, “none of this technology comes anywhere near human-level intelligence, and it is unlikely to approach it anytime soon.” On the other end of the spectrum, one may find AIEMs far more advanced than the benign anthropomorphic mask suggests. AIEMs may be built for dual-use security purposes or fall directly into the category of “killer robots,” that is, lethal autonomous weapon systems (Davenport et al. 2020; Lauwaert 2021; Pitt et al. 2021). Although such AIEMs may generally target governmental customers, their harmless anthropomorphic design may nevertheless be promoted via corporate advertisement and social media campaigns, creating the impression of a friendly other. Behind the anthropomorphic mask, however, they can be equipped with capabilities far more advanced than what the corporate marketing campaign suggests (Hauser et al. 2021; Parviainen and Coeckelbergh 2020; Seele 2021; Seele and Schultz 2022). Consequently, from a corporate point of view, the anthropomorphic presentation of robots in commercials may be advantageous for creating product awareness and attracting potential investors. However, this practice may lead to misconceptions, as observers come to false conclusions about the robots’ capabilities.
This corporate practice closely resembles what is known as greenwashing in the business-society nexus: “Greenwashing is a special case of ‘merely symbolic’ in which firms deliberately manipulate their communications and symbolic practices so as to build a ceremonial façade” (Bowen 2014, p. 33). Thus, analogous to greenwashing, the underlying asymmetry of information allows robot companies to exploit their knowledge advantage about their products, lead observers to believe in unrealistic robot capacities, or distract observers from the capabilities the robot can actually perform (Becker-Olsen and Potucek 2013; Berrone et al. 2017; Obradovich et al. 2019; Parviainen and Coeckelbergh 2020; Seele and Schultz 2022). Critical observers have termed this corporate strategy “humanwashing of robots,” which is “meant to create the surface illusion of likable or harmless humanlike behavior of intelligent machines to charm away adverse or harmful characteristics or perceptions” (Seele 2021).