
The Potential of Otherness in Robotic Art


Abstract

This chapter compares and contrasts the creation of humanoid robots with that of non-humanoid robots, identifying assumptions about communication that underlie the designs and employing a range of communication theories to analyse people’s interactions with the robots. While robots created in science and technology laboratories to communicate with humans are most often at least somewhat humanlike in form, those created as part of interactive art installations take a variety of forms. The creation of humanoid robots can be linked with ideas about communication that valorise commonality above all else, whereas robotic artworks illustrate the potential of otherness in interactions between humans and non-humanoid robots.

Introduction

Although not all robots are created with the aim of communicating with humans in mind, an increasing number are now being designed to care for, work with, and entertain people in a range of different places, including homes, working environments and public spaces such as art galleries. By analysing people’s interactions with robots from the perspective of various branches of communication theory, alongside a consideration of the aims articulated by creators for their robots, it is possible to identify the presence of what might loosely be termed scientific and artistic conceptions of what it means to communicate, what constitutes being social and, therefore, how best to build a robot with which people want to interact. These scientific and artistic conceptions are not clear cut, or completely separable from each other, and should not be regarded as totally polarised. In spite of the imprecise nature of these categories, they are still helpful in explaining the wide range of interactive robot designs that have arisen across scientific, technological and artistic contexts.

Discussed below are a number of robots, ranging in form from the very humanlike to the overtly other. The decisions made in creating these robots, as well as the interactions that people have with them, are analysed in relation to ideas about communication categorised using the framework developed by Robert T. Craig in his appraisal of “Communication Theory as a Field” [8]. In exploring the presence of broadly scientific and artistic conceptions of communication, my focus is to identify the potential of otherness in communication, a potential that is most clearly demonstrated by the non-humanoid robots that appear within art installations.

Creating Humanoid Robots

While a few designers follow a minimalist path when creating communicative robots [22], the majority of roboticists building robots designed to interact with people, either in research laboratories or as commercial products, argue that their robots need to be at least somewhat humanlike in form in order to communicate effectively [9]. Indeed, the humanoid robot has been described as “the Grail” of robotics, and the pursuit of this goal leads some people to create robots that appear almost indistinguishable from humans [23]. This is particularly well illustrated by the work of David Hanson in the United States and Hiroshi Ishiguro in Japan. Both Hanson and Ishiguro have chosen to model a number of their robots on existing people, most famously the heads of Albert Einstein and Philip K. Dick in the case of Hanson, whereas Ishiguro has created a Geminoid which is his own double, as well as one resembling his daughter. Moving away from the idea of ‘recreating’ a person, Hanson designed Jules for the Bristol Robotics Laboratory, a robotic head that was not based on a particular human individual. When creating this type of robot, attaining an appearance and behaviour as close to humanlike as possible, in particular through the use of facial expressions and speech, is thought to be key in supporting people’s interactions with the robot. As might be expected from their appearance and behaviour, these humanoid robots are designed with the aim of making human interactions with them as close to human-human interactions as possible, and therefore easy to understand based on one’s existing experience of communicating with others.

At the core of these designs is the assumption that making the robot look very humanlike improves its ability to fit into existing social structures and situations with which humans are already familiar. These robots are therefore framed quite clearly in terms of sociocultural theories for which communication is about the production, and the reproduction, of shared social understandings of the world and people’s positioning within that world [5]. In addition, their ability to persuade or influence those with whom they are communicating, ideas emphasised within the sociopsychological tradition of communication theory, is judged to be vital [8]. These robots need to encourage people to think of them either as generically human, or, in the case of Hanson’s Einstein robot or Ishiguro’s double, as representing a particular person in a believable way. The aim of the roboticist when creating a humanoid robot is to encourage people to treat the robot as they would another person and to draw them into communication that operates exactly like an exchange with a human.

The communication of these robots often involves the well-modulated use of a synthesised or recorded human voice, as well as the ability to show emotion through facial expressions. Although Ishiguro’s robot double is sometimes teleoperated, allowing him to talk to people remotely ‘in person’, and Jules’ speech often seems to be heavily scripted, the aim of this type of design is to create “robots that act and react virtually indistinguishably from their human counterparts” [17]. Hanson has stated that his long-term goal is to design robots that can evolve “into socially intelligent beings, capable of love and earning a place in the extended human family” [17]. Effective use of human spoken language is clearly a part of this process and, from the perspective of the cybernetic tradition of communication theory, its success is most often judged in relation to accuracy in information transmission or exchange [8]. Given the importance of precision within a cybernetic process, this idea is closely linked with the semiotic tradition since, for human communication at least, the correct use of language is important in enabling the encoding and decoding of information as messages are relayed [8]. Designing robots that can speak clearly, and whose speech is supported by the use of appropriate humanlike facial expressions, is driven in part by the desire to reduce any potential for misunderstanding that might be introduced by an unfamiliar communication style on the part of the robot. To use the cybernetic tradition’s term, the aim is to reduce any ‘noise’ that might distract from a process of accurate information transmission.

The creation of humanoid robots as interactive partners clearly involves a great deal of artistic skill in perfecting their appearance and behaviour. However, the particular understandings of communication theory that shape the creation of humanoid robots are based on the assumption that communication success is founded on what communicators already have in common, and that the aim is to develop that commonality further. Whether communication success is measured in terms of the accurate transmission of information, the ability to maintain a persuasive influence, or the development and maintenance of shared social understandings in support of a cohesive culture, communication is framed as a process that can be perfected. This type of process has a correct outcome, against which the potential for ambiguity (supporting various interpretations) and misunderstanding is an undesirable risk that should be eliminated. When thought of in this rather idealised way, communication can be said to be broadly ‘scientific’ in its aims.

Issues with the Pursuit of Commonality

The traditions of communication theory employed in the analysis above—sociocultural, sociopsychological, cybernetic and semiotic—complement one another in reinforcing the idea of the humanoid robot as like another human in a particular context. The robot’s machine otherness is understood to be something that has the potential to disrupt successful interactions with humans, and is therefore disguised as far as is possible. However, questions do arise about how well striving to create a robot that closely resembles a human actually works: the observation that encounters with very humanlike robots make some people uncomfortable is formalised in Masahiro Mori’s concept of the “uncanny valley” [24]. Mori predicts that the familiarity people sense, and therefore their comfort with a robot, increases as its appearance becomes more humanlike. However, there is a point at which, quite suddenly, the robot is perceived as zombielike as opposed to humanlike. As attraction turns to horror, the sense of the robot as familiar and friendly drops away into a valley that Mori names “uncanny”, thus highlighting its relation to Freud’s use of this term to describe “that class of the terrifying which leads back to something long known to us, once very familiar” [14].

Some roboticists do not regard the uncanny valley effect as a long-term problem for humanoid robotics. Hanson, for example, argues that it is the aesthetic impact of the robot that is most important in shaping people’s reactions. He therefore suggests that “any level of realism can be socially engaging if one designs the aesthetic well”, using the term “path of engagement (POE)” to describe this “bridge of good aesthetic” [16]. This idea would seem to emphasise the art in creating realistic humanlike robots. However, as this chapter turns to consider human-robot interactions in art installations it becomes clear that some artists question whether people really are discouraged from interacting with things they perceive as uncanny, or simply as unfamiliar. In particular, artists may choose to challenge the boundaries of what is understood to be possible in communication by designing robots in a range of forms, often not pursuing anything resembling human form and therefore exploring the potential for very different paths of engagement between humans and overtly non-humanoid others.

In addition, a number of communication scholars have raised the question of whether assuming that success in communication is based on commonality, with the aim of increasing this commonality further, has an ethically desirable result [28, 29]. While Amit Pinchevski frames his argument as a critique of the “elimination of difference” [29], seeing this as a violence against the alterity of the other, John Durham Peters condemns perspectives on communication that valorise the “reduplication of the self” [28]. In human communication, ideas of reduplication and violence against the other through processes designed to eliminate difference are clearly undesirable, being linked with a general disrespect for others and their personal, cultural and social differences from the self. While worrying about violence against robotic others may seem a less important concern, the production of humanoid robots (as clear reduplications of the self at various levels) does reduce the possibility for people to come into contact with a variety of forms of robot, which might possess valuable new perceptual skills and motor abilities. It is therefore helpful that many of the robots designed and built by artists demonstrate the possibilities of non-humanoid form. These art installations push the boundaries of what is assumed possible in human communication by allowing people to encounter others whose alterity is overtly represented in their form and behaviour.

Other Faces in Robotic Art

Although the goals of a robotic art installation are often somewhat different from those for a robot created in a scientific or technological context, all robots designed to interact with humans must first attract people’s attention, and likely aim to keep this attention for some period of time. In the section above, the idea that humanoid form is important in this process has been highlighted. In scientific studies of social robotics the ability to attract attention, and to show where one’s attention lies, is often used to justify the need for a robot to have eyes, whose gaze direction and movement can be recognised by humans in ways thought to encourage more meaningful interactions with the robot [2, 3, 4]. In art, Louis-Philippe Demers’ work Area V5, named after the section of the visual cortex thought to be important in perceiving movement, takes the idea of meaningful gaze to a new level by inviting visitors “to experiment and establish a non-verbal dialog” with a wall fitted with artificial skulls containing a hundred “disembodied gazing eyes” [10].

In contrast with the attempts to create a familiar humanlike gaze embedded within a realistically humanoid robotic body, as seen in Ishiguro’s Geminoid robots, Demers’ artwork is explicitly meant to invoke an uncanny sensation as the disembodied eyes move in pairs to track visitors to the installation. Area V5 is designed to convey the idea that the visitor has been seen by the eyes, and through this communication to attract a level of reciprocal attention. Indeed, the installation appears to fall very effectively into the uncanny valley, while nonetheless encouraging visitors to develop a level of fascination with the artwork such that they play with the installation, intent on provoking it to follow their movements [33]. Demers describes this work as “an artistic comment about scientific methodologies of approaching social robotics and the uncanny valley” [33]. Social roboticists and writers on the subject of social robotics often say that “a robot has to look friendly to be accepted” [33]. However, in the case of Area V5 there is a set of “dead skulls looking at you, but at the same time people play with this, they totally forget about the look” [33]. This installation shows that “to engage with the robot it doesn’t have to be necessarily of a human appearance or even a beautiful human” [33].

As I have already explained, some communication theories can be associated with reducing, and eventually eliminating, the differences between communicators [29]. In contrast, Emmanuel Levinas’ conception of communication places its emphasis on encounters between selves and others within which the recognition of, and retention of respect for, the alterity of the other is key. Levinas describes the encounter between self and other as “the face to face”, during which, while they are brought into close proximity, an irreducible distance remains between them [19]. Within this explanation, Levinas’ use of the terms proximity and distance is less about physical positioning and more about paying close attention to the other, while also acknowledging the continued presence of their specific differences. Communication in such a relation is therefore not about identifying elements of commonality and sameness; instead, the interaction between self and other is founded in recognition of the difference, or distance, between them.

Levinas himself suggested that only humans could reveal this type of face, denying animals or objects the ability to take part in this level of revelation and engagement. However, it seems worth revisiting the question of whether robots, in particular those with humanlike faces, can reveal themselves in this way. Humanoid robots, such as those created by Hanson and Ishiguro, clearly present some level of humanlike face, although since this face has been designed with the very aim of promoting a sense of commonality and ease in communication, there is little chance for it to reveal otherness except perhaps in terms of the uncanny. Given people’s responses to Demers’ Area V5, designed to emphasise the uncanny nature of robotic eyes, it seems that such overtly uncanny robots offer a greater sense of otherness, and also that potentially only the eyes are needed to elicit this type of engagement in an encounter with a robotic other.

However, a closer examination of Levinas’ philosophy clarifies that the Levinasian face is not actually a physical human face at all. Instead, Levinas’ conception of a face encapsulates “the way in which the other presents” or reveals themselves [19]. Levinas suggests that “by concentrating on physical facial features”, one turns “towards the Other as toward an object”; instead, “[t]he best way of encountering the other is not even to notice the colour of his eyes” [20]. Elsewhere, he explains that “the whole body—a hand or curve of the shoulder—can express as the face” [19]. It therefore seems possible that overtly non-human others, even those without recognisable eyes, might also reveal Levinasian faces, in spite of the fact that Levinas himself did not extend his thinking to the non-human.

Scholars have made considerable inroads in arguing the case for the revelation of Levinasian faces by animals, drawing not only on their own experiences, but also on Levinas’ description of the behaviour of Bobby, the dog discussed in his essay “The name of the dog” [7, 11, 21, 34]. In addition, David Gunkel considers whether machines can be, or might in the future be, regarded as Levinasian others in his book The Machine Question [15]. From the perspective of this chapter, the broad description of what constitutes a face within Levinas’ philosophy supports a consideration of a wide range of robots as able to reveal faces in encounters with people, whether they express themselves through language, sounds, gestures or whole body movements. As the examples below illustrate, robots with no recognisable face in anything resembling human terms are nonetheless able to reveal aspects of a personality to their human visitors through their physical embodiment and behaviours.

Turning One’s Body to Express a ‘Face’

A number of robotic art installations promote a different idea of gaze by illustrating alternative understandings of what constitutes a face, and exploring the impact of whole body movement in the form of turning to face someone. One example of this is Petit Mal, an autonomous wheeled robot created by Simon Penny, which first appeared in public in 1995. Penny explains that his goal in designing Petit Mal was to create a robot that was “truly autonomous; which was nimble and had ‘charm’; that sensed and explored architectural space and that pursued and reacted to people” [25]. He wanted the robot to give “the impression of intelligence” through the production of “behaviour which was neither anthropomorphic nor zoomorphic, but which was unique to its physical and electronic nature” [25]. Penny clarifies that his aim was not to produce an artificial intelligence, but rather a robot that “gave the impression of being sentient” while also being of minimal complexity in terms of its mechanical parts, sensors and computer code [25].

While Penny was focused on the idea of “the robot as an actor in social space”, he was clearly not constrained by the assumption that this robot needed to be like a human in order to operate in existing human environments by producing familiar humanlike communication [25]. Instead, Petit Mal is able to ‘speak’ only through its movements, without using “textual, verbal or iconic signs” [26]. This understanding of the value of nonverbal signals, such as whole body movements, in communication is explored in Fernando Poyatos’ research into simultaneous translation. Poyatos argues that communication is best thought of as a “triple audiovisual reality”, which consists not only of “what we say”, but also “how we say it” and “how we move what we say” [30]. Petit Mal may not be able to ‘say’ anything to people directly in human language, but its whole body movements allow it to communicate using what Poyatos encapsulates with the term “kinesics” [30]. This robot is therefore designed to be overtly machinelike, but nonetheless able to behave such that it is read by people as a sentient and expressive individual. Interactions with Petit Mal give visitors to the installation the opportunity to experience an encounter with a strange robot, within which a new understanding of what it might mean to be social is presented.

The movement of Petit Mal and its bodily form, which includes what visitors are likely to recognise quite easily as a non-humanoid neck and head, help people to know where to direct their communication in interactions with the robot. Importantly, the positioning of sensors on Petit Mal’s head, as well as the robot’s tendency to move in a particular direction, helps to clarify that this robot has a front and a back, such that visitors can judge which way the robot is facing. When a person enters the installation space and approaches Petit Mal, their presence is noted, causing the robot to move its whole body to face them. As Derrida argues is possible for animals, visitors feel that Petit Mal can “look at them and address them … from a wholly other origin”, and in testing the robot’s abilities people move from side to side to see it turn and follow their motion [11]. Any sense that this robot is threatening, which might arise because of the clarity and attentiveness of its gaze, is reduced by the calmness with which it moves around the space it occupies, together with the bobbing head and neck motion that these movements cause. Petit Mal reveals a gentle personality, and as a human approaches the robot it immediately backs away. This robot is thus situated as cautious and polite, because it seems respectful of people’s personal space (and also potentially as wishing to protect its own).

Although Penny describes the desire to attain “an ongoing conversation between system and user” as opposed to following a “stimulus and response model”, it is possible to identify a level of both of these processes in communication with Petit Mal [25]. Moments of turn taking are identifiable, in particular when visitors experiment with repeated movements (for example, stepping from side to side to see how well the robot maintains its orientation towards them) as they play with the robot and attempt to understand how it ‘sees’ them [27]. This would seem to involve experimentation with a given stimulus in the expectation of a particular response. However, the flowing movements of Petit Mal, along with its gentle bobbing and turning motion, give it a great deal of character and personality, and support a reading of human-robot interaction in this installation space as a dynamic system of communication that consists of overlapping messages, as opposed to following strict turn-taking rules at all times.

There are some similarities between Penny’s work, Petit Mal, and a more recent development consisting of two wheelchair-like robots, which interact together and also with people who enter their installation space. The Fish-Bird Project was conceived by Mari Velonaki, and was built in collaboration with roboticists at the Centre for Social Robotics at the University of Sydney. In contrast with Petit Mal, Fish and Bird are robots whose form is overtly based on that of a familiar item, a standard hospital wheelchair. A key difference between these robots and Petit Mal is therefore their lack of a defined head and neck. However, because they are chairs, their form nonetheless indicates which way they are facing, with the seat in front of a well-defined back, complete with handles to grasp and push the wheelchair along (although these robots will not allow people to push them with any ease). This form was chosen in part because it inherently “suggests the presence or absence of a character” [36]. Thus, although these robots were, like Petit Mal, designed to be non-anthropomorphic and non-zoomorphic, the wheelchair form is understood to draw attention to the space a person might occupy.

While recognition of this space for a person may indeed have an impact on visitors to the installation, in general people have reported “that they were attracted to the robots not because of the way that they looked, but because of the way that they behaved” [35]. People’s first impressions of Fish and Bird are related to the ongoing communication that can be seen between the two robots as they move around each other in the installation, even before a human enters. From a distance, it is the kinesic channel of communication that is most obviously in use between Fish and Bird. Communication between these robots is difficult to read as a form of turn taking, appearing to be more clearly identifiable as a dynamic flow of movement, which includes moments of attention and response. As Donna Haraway suggests when discussing the communication of animals, this type of “embodied communication”, which involves the shared negotiation of space as communicators move towards, away from and around each other over the course of the interaction, “is more like a dance than a word” [18]. The ‘dance-like’ interaction of Fish and Bird is accompanied by the production of fragments of text from miniature thermal printers. Each robot uses its own distinctive handwriting “assembled from digitized bitmaps of the glyphs” to write notes to the other robot and sometimes also to human visitors [36]. These messages are dropped on the floor as they are printed, and thus accumulate to create a fragmented and disordered history of their communication over the course of the day [36]. The way in which these messages are produced and then collect on the floor adds a sense of history to the dynamic communication between these robots without producing a definitive narrative.

In terms of their interactions with humans, one of the first, and strongest, signals of the perceptual and responsive abilities of Fish and Bird is the way that they both turn to face people entering the installation space. In contrast with Petit Mal, with its recognisable head and sensors resembling a bank of ‘eyes’, Fish and Bird, as already mentioned, certainly do not have discernible eyes. The impact of their gaze is therefore only presented through their turning movement; however, it is possible that the feeling of being ‘watched’ by these robots is emphasised by the way that people end up positioned at the intersection of their ‘gazes’. In addition, because the robots have been engaged in communication with each other, the interruption caused by the entry of a person is also marked. Fish and Bird stop their ‘dance’ and turn their attention to the visitor in a way that clearly signals that the robots have noticed them, and may be willing to interact and communicate with them. It also becomes clear that Fish and Bird have individual personalities, communicated through the specificities of their movements in response to humans. Bird is the more outgoing of the two and is likely to be the first of the robots to approach human visitors, whereas Fish will often hang back to observe people from a safe distance before gradually moving closer [6].

Velonaki describes communication with Fish and Bird in terms of dialogues, which develop as the robots move around the installation space based on their understanding of the “body language of the [human] participants” who are also in the process of reacting to “the body language of the robots” [36]. However, as was suggested for Petit Mal above, it is important to recognise that the dialogue between humans and these robots is not precisely governed by turn-taking rules, but rather is more flowing and overlapping (as is the case with communication between these robots when humans are not present). This type of dynamic interaction is described by Alan Fogel as allowing “co-regulation” to arise “as part of a continuous process of communication” as opposed to being the “result of an exchange of messages borne by discrete communication signals” [12]. While this statement resonates with Penny’s idea of an “ongoing conversation”, it is more open to the contributions that all channels, in particular kinesic but also, as seen in the case of Fish and Bird, language in the form of texts, might make to the communication system as a whole.

The names of these robots, Fish and Bird, may encourage a level of zoomorphism in shaping people’s understanding of their communication through movement, based on past interactions with animals and supported by the tentative and rather nervous personalities the robots project. Indeed, even in the case of Petit Mal, Penny notes that in spite of its purposely non-anthropomorphic and non-zoomorphic design, people can only interpret the robot based on their past experience. They therefore project all sorts of motivations onto the robot to explain its behaviour, and there is evidence that people may think of non-humanoid robots as somewhat like animals or humans, but also may call upon fictional descriptions that they have read, in particular science fiction [32]. It is therefore important that, even as these robots are thought of as communicative and interpreted in terms framed by one’s existing experience, the unusual and unexpected nature of these wheeled robots, and the clarity of their individual characters, ensure that people are continually reminded of the robots’ absolute otherness.

The communication of these robots is difficult to place in terms of sociocultural theory or sociopsychological theory. While they evoke sensations of familiarity in human visitors, their form and behaviour also cause people constantly to question the assumptions that they make about the characters of these robots, in particular in relation to them being like someone or something encountered in the past. Instead, the communication of Petit Mal, as well as Fish and Bird, is more easily analysed in terms of phenomenological theory and the Levinasian conception of “the face to face” [19]. This understanding highlights the importance of recognising the specific differences of each of the robots involved in interactions, and suggests that by meeting strange robots people may gain some insight into the possibilities of overtly different others in communication. In fact, meetings with the alterity of robots such as Petit Mal, Fish and Bird would seem to illustrate Maurice Blanchot’s contention, as he reworks Levinas’ thought in The Infinite Conversation, that describing the difference between self and other in terms of “separation” or “distance” is not sufficient [1]. Rather, the revelation of otherness constitutes “[a]n interruption escaping all measure”, which Blanchot suggests should be termed “an interruption of being” [1].

The phenomenological understanding of encounters with these robots exists alongside a dynamic systems perspective, which highlights the presence of overlapping attempts to communicate. Language plays only a small part in these interactions in the form of the ‘hand written’ notes produced by Fish and Bird, whose meanings, since they are only fragments, often remain somewhat cryptic. Cybernetic theory that values accuracy in transmission of information can therefore also be set aside. In order to understand communication in the type of dynamic system described above, which forms during human interactions with Petit Mal and the Fish-Bird project, information must be reconceptualised as something that is not fixed, cannot be precisely coded and is not transmitted in any simple way. These art installations illustrate the importance of acknowledging the presence of information that is “created in the process of communication”, such that “meaning making” emerges as an outcome of the “process of engagement” between humans and robots [13]. As Penny concludes in his own consideration of Petit Mal, artworks do not “didactically supply information”; instead, there are many ways to interpret the work, and a focus on embodiment as part of communication (quite possibly in addition to verbal or written language) as well as recognising the potential for meaning to emerge during interaction, are key aspects of understanding communication in art installations [25]. This acceptance of uncertainty in communication, arising from the idea that information is not fixed and cannot be perfectly transmitted, alongside acknowledgement of many possible interpretations, can broadly be characterised as an artistic perspective on communication, which is more open to otherness than the scientific perspective discussed in relation to humanoid robots above.

Conclusion

While the creation of robotic art installations draws together the need to make artistic and aesthetic decisions alongside technical and scientific decisions, the goals of artistic endeavour do seem to be different from those of science and technology, resulting in different outcomes in terms of the robots that are designed and built. On his website, the artist Norman White, for example, expresses his interest in using creative art to ask broad questions, something that is also possible within ‘good science’, although he finds that perspective too constraining [37]. White’s thinking bears some similarity to that of Penny, who argues that “the holistic and open ended experimental process of artistic practice allows for expansive thinking”, such that artistic methodologies may be able to “compensate for the ‘tunnel vision’ characteristic of certain types of scientific and technical practice” [25]. While, as Penny clarifies, this is not meant to be a derogatory appraisal of the influence of science and technology on art as well as other fields of human endeavour, the contrast is nonetheless evident in the influence that art’s expansive thinking and science’s tunnel vision can be seen to have on their respective robot designs. This chapter has considered these differences with reference to various traditions of communication theory and conceptions of the place of commonality versus otherness and difference in communication. Penny notes that his creation of Petit Mal “emerged from artistic practice and was thus concerned with subtle and evocative modes of communication rather than pragmatic goal based functions” [25]. This statement supports the distinction this chapter has drawn between scientific approaches to robotics, with their cybernetic, semiotic, sociocultural or sociopsychological modes of communication, and artistic conceptions that are more open to the other’s otherness, such as those related to Levinas’ perspective on “the face to face”, as well as dynamic systems understandings that encompass uncertainty, a multitude of interpretations and the unexpected emergence of meaning during an interaction.

The differences between artistic and scientific conceptions of communication may stem from the way in which artists learn to promote “the adequate communication of (often subtle) ideas through visual cues” [25]. In fact, I would argue that the creation of art installations that support “adequate communication” involves a careful consideration of not only visual elements, but also the potential of sound and maybe even the tactile quality of a work that people might touch. Penny suggests that the ability of artists to achieve this goal is enabled by their understanding of “the complexity of images and the complexity of cultural context” [25], aspects which scientists often acknowledge, but may then try to simplify in their production of a general solution to creating a communicative robot. In contrast, as Penny notes, the goal of the artist is more often not to generalise, but rather to provide a specific solution that works within a particular context [25]. Importantly, the sense in which an art installation ‘works’ is not tied to the same understanding of success as was seen in the creation of humanoid robots, since artists acknowledge that the specific nature of the solution they proffer is open to a multitude of interpretations produced by visitors to the artwork. The acceptance of a variety of interpretations is in many ways inherent in the production of interactive art. Indeed, by making his work interactive, Ken Rinaldo explains that he hopes to encourage people to develop “active, self-determined relationships” with his art [31]. This explanation of the possibilities of interactive art is not only open to ideas of otherness and difference, but also resonates with theory that considers communication as an emergent property of systems, such that it develops between communicators, as opposed to being produced and received directly by communicators themselves.

Although the artistic practice approach to designing robots is not focused on creating machines that are completely predictable and reliable, and thus the utility and function of such robots for practical applications may be in question, the experimental breadth of art provides valuable examples of non-humanoid communicators [25]. As this chapter has demonstrated, analysing robots created in artistic contexts allows one to rethink the possibilities of interactions between communicators that are very different from one another. This is because the goals of artists more often result in situations where humans are encouraged to interact with technology in new ways, as opposed to being presented with technology designed to mimic a familiar communicative situation, such as that occurring between a human and another human. This is not to say that anthropomorphic and zoomorphic responses are not important as part of communication with an unfamiliar looking technology, but the overarching sense of meeting a strange and unfamiliar other is a constant presence, which offers people the opportunity to gain new insights into the value of otherness, and the possibilities of communication more broadly.

References

  1. Blanchot M (1993) The infinite conversation. University of Minnesota Press, Minneapolis
  2. Breazeal CL (2002) Regulation and entrainment in human-robot interaction. Int J Exp Robot 21:883–902
  3. Breazeal CL, Edsinger A, Fitzpatrick P, Scassellati B (2001) Active vision systems for sociable robots. IEEE Trans Syst Man Cybern Part A 31:443–453
  4. Breazeal CL, Hoffman G, Lockerd A (2004) Teaching and working with robots as a collaboration. In: Proceedings of the third international joint conference on autonomous agents and multiagent systems, pp 1030–1037
  5. Carey J (1992) Communication as culture: essays on media and society. Routledge, New York
  6. Centre for Social Robotics (n.d.) The Fish-Bird project. http://www.csr.acfr.usyd.edu.au/projects/Fish-Bird/index.htm. Accessed 1 Aug 2014
  7. Clark D (1997) On being “the last Kantian in Nazi Germany”: dwelling with animals after Levinas. In: Ham J, Senior M (eds) Animal acts: configuring the human in western history. Routledge, New York, pp 165–198
  8. Craig RT (1999) Communication theory as a field. Commun Theory 9:119–161
  9. Dautenhahn K (2013) Human-robot interaction. In: Soegaard M, Dam RF (eds) The encyclopedia of human-computer interaction, 2nd edn. The Interaction Design Foundation, Aarhus
  10. Demers L-P (2009) Area V5. Processing Plant. http://www.processing-plant.com/web_csi/index.html#project=areav5. Accessed 1 Aug 2014
  11. Derrida J (2002) The animal that therefore I am (more to follow). Crit Inquiry 28:369–418
  12. Fogel A (1993) Developing through relationships: origins of communication, self, and culture. Harvester Wheatsheaf, New York
  13. Fogel A (2006) Dynamic systems research on interindividual communication: the transformation of meaning-making. J Dev Process 1:7–30
  14. Freud S (2004) The uncanny (1919). In: Sandner D (ed) Fantastic literature: a critical reader. Praeger, Westport, Conn., pp 74–101
  15. Gunkel DJ (2012) The machine question: critical perspectives on AI, robots, and ethics. MIT Press, Cambridge, Mass
  16. Hanson D (2006) Exploring the aesthetic range for humanoid robots. In: Towards social mechanisms of android science, Vancouver, pp 39–42
  17. Hanson Robotics website (n.d.) http://hanson.robotics.com/. Accessed 18 Oct 2008 (no longer available)
  18. Haraway D (2006) Encounters with companion species: entangling dogs, baboons, philosophers, and biologists. Configurations 14:97–114. doi:10.1353/con.0.0002
  19. Levinas E (1969) Totality and infinity. Duquesne University Press, Pittsburgh
  20. Levinas E (1985) Ethics and infinity, 1st edn. Duquesne University Press, Pittsburgh
  21. Levinas E (1990) Difficult freedom. The Athlone Press, London
  22. Matsumoto N, Fujii H, Okada M (2006) Minimal design for human-agent communication. Artif Life Robot 10:49–54. doi:10.1007/s10015-005-0377-1
  23. Menzel P, D’Aluisio F (2000) Robo sapiens: evolution of a new species. MIT Press, Cambridge, Mass
  24. Mori M (1970) The uncanny valley. Energy 7:33–35
  25. Penny S (2000) Agents as artworks and agent design as artistic practice. In: Dautenhahn K (ed) Human cognition and social agent technology. John Benjamins, Amsterdam, pp 395–413
  26. Penny S (1997) Embodied cultural agents: at the intersection of robotics, cognitive science, and interactive art. AAAI Technical Report FS-97-02. AAAI
  27. Penny S (2011) Petit Mal video. https://www.youtube.com/watch?v=v_kMOMYq0MU. Accessed 1 Aug 2014
  28. Peters JD (1999) Speaking into the air: a history of the idea of communication. University of Chicago Press, Chicago
  29. Pinchevski A (2005) By way of interruption: Levinas and the ethics of communication. Duquesne University Press, Pittsburgh
  30. Poyatos F (1997) The reality of multichannel verbal-nonverbal communication in simultaneous and consecutive interpretation. In: Poyatos F (ed) Nonverbal communication and translation: new perspectives and challenges in literature, interpretation and the media. J. Benjamins, Philadelphia, pp 249–282
  31. Rinaldo K (n.d.) Artist statement. http://kenrinaldo.com/frame_about.html. Accessed 1 Aug 2014
  32. Sandry E (2015) Robots and communication. Palgrave Macmillan, New York
  33. Science Gallery (2011) Human+: Area V5, Louis-Philippe Demers video. Dublin. https://www.youtube.com/watch?v=hKqhgsromfc. Accessed 1 Aug 2014
  34. Steeves HP (2005) Lost dog, or, Levinas faces the animal. In: Figuring animals: essays on animal images in art, literature, philosophy, and popular culture. Palgrave Macmillan, New York, pp 21–35
  35. Velonaki M, Rye D (2010) Human-robot interaction in a media art environment. Workshop: What do collaborations with the arts have to say about HRI? Osaka. http://hri.willowgarage.com/workshops/HRI2010/downloads/Velonaki.pdf. Accessed 25 June 2010
  36. Velonaki M, Scheding S, Rye D, Durrant-Whyte H (2008) Shared spaces: media art, computing, and robotics. Comput Entertain 6:1. doi:10.1145/1461999.1462003
  37. White N (n.d.) Norman T. White: a short autobiography and credo. http://www.normill.ca/ntwbio97.html. Accessed 1 Aug 2014

Copyright information

© Springer Science+Business Media Singapore 2016

Authors and Affiliations

  1. Department of Internet Studies, School of Media, Culture and Creative Arts, Curtin University, Perth, Australia
