Robotics and Art, Computationalism and Embodiment

Abstract

Robotic Art and related practices provide a context in which real-time computational technologies and techniques are deployed for cultural purposes. This practice brings the embodied experientiality, so central to art, hard up against the tacit commitment to abstract disembodiment inherent in the computational technologies. In this essay I explore the relevance of post-cognitivist thought to robotics in general, and in particular, questions of materiality and embodiment with respect to robotic art practice—addressing philosophical, aesthetic-theoretical and technical issues.

Introduction

This essay is written from the perspective of an artist/practitioner active in the field since the mid 1980s. My own engagement with the field began with desires to utilize electronics and sensors to endow installations and kinetic sculptures with awareness and responsiveness. These desires brought me into contact with the rapidly changing landscape of computing and robotics, on both a technological and theoretical level. It was during this period, in the 1990s, that it became clear to me that computational technologies were undergirded by a worldview which was fundamentally in tension with the worldview of artmaking. I do not mean this in a ‘two cultures’ sense—concerning creativity and technics—but in respect to basic ideas of embodiment and selfhood.

For me, Robotic Art and related practices of interactive sculpture and installation provided a context in which to imagine the deployment of real-time computational technologies and techniques for cultural purposes. In the process, this practice brings the embodied experientiality, so central to art, hard up against the tacit commitment to abstract disembodiment inherent in the computational technologies. This process pushed the technologies in ways they didn’t always want to go, and often necessitated designing and building systems from the ground up, in projects like Petit Mal (see below). On the other hand, it was in robotics (reactive, bottom-up and action-oriented) that the traditional AI conceptions of representation and planning demonstrably failed, and were supplanted by various on-the-fly approaches. ‘Fast, Cheap and Out of Control’, the title of a film by Errol Morris, captures the attitude of this work, which was iconoclastic with respect to conventional AI-based robotics.1 In this essay I will explore the relevance of post-cognitivist thought to robotics in general, and in particular, questions of materiality and embodiment with respect to robotic art practice—delving into philosophical and aesthetic-theoretical issues as well as technical ones.

Then and Now

After a two-decade hiatus, robotics is again a hot topic. This is in large part due to the maturing of basic technologies, their miniaturization and mass production. It has to do also with the newsiness of Japanese anthropomorphic and zoomorphic robots, of quad-copters and UAVs (drones), the very visible investment in the field by Google, and its development of driverless cars. In the 1990s, media arts practices and the technologies themselves were primitive and developing rapidly. Some modalities, such as Virtual Reality, stalled in the late 90s as the Silicon Graphics computational behemoths were eclipsed by PC- and internet-based practices. But as the underlying technologies became cheaper, faster and smaller, the same ideas are returning as viable commodities, for instance the Oculus Rift and the recently discontinued Google Glass.

The case is similar for robotics technologies, where the availability of user-friendly microcontrollers (such as the Arduino) and sophisticated miniaturized sensors (such as MEMS accelerometers and IMUs—Inertial Measurement Units) has obviated much basic hardware engineering. In 1970, the video camera on the Shakey robot at Stanford Research Institute cost $50,000 (Fig. 1—Shakey). Today you can buy a far more sophisticated webcam for $2.99. Similarly, in accordance with Moore’s law, the entire range of robotics technologies has become orders of magnitude more sophisticated and orders of magnitude cheaper: lithium ion batteries, powerful miniature motors deploying rare-earth magnets, sensors of all sorts, and vastly more capable processors and memory technologies.
Fig. 1

Shakey. Stanford Research Institute 1966–72. http://www.ai.sri.com/shakey/

Robots, Telerobots, Prosthetics and Machine Tools

In my view, robotics, as a field, is characterised by two qualities. First, it involves the design of behavior; and second, it bridges the gap between the immaterial world of computing and code and the exigencies of materiality. These two defining qualities place it in an important set of relationships with art as traditionally understood.

Fundamental to my conception of what a robot is, is the capacity for sensing and self-guided behavior. In my opinion, as its quality of self-guidance declines, so does its claim to the moniker ‘robot’. We might frame the field of robotics in terms of a set of binaries, vectors in the state-space of robotics. These might include:
  • anthropomorphic/machine tool;

  • pop literary culture/engineering;

  • prosthetic end effector/autonomous sensing;

  • flesh/metal-plastic; and

  • localised/distributed.

Technically speaking, once we dispense with the frippery of anthropomorphic robotics, a robot is a self-guiding machine tool. In many cases, industrial robots perform preprogrammed tasks without sensors or real-time control. In the same way that Artificial Intelligence should less sensationally be called ‘automated reasoning’, the use of robots for remote tasks (planetary, deep sea, robotic surgery) should more accurately be named tele-prosthetics, not telerobotics. Systems of bodily augmentation and extension—exoskeletons and the like—are cyborgian constructions as opposed to robots proper. This distinction is not meant to diminish consideration of the cyborgian condition, which is at least as important as robotics per se.

In the C21st, the division between an autonomous device and an effector prosthetic—for instance the teleoperated arms for moving nuclear fuel rods, or what was once referred to in military research circles as a ‘force amplifier’—is now blurred. We are surrounded by quasi-intelligent machines whose control systems are partially under human control, and partially autonomous. The modern automobile is a case in point. With sensors and microcontrollers deployed ubiquitously, the notion that the driver has direct control is a fiction carefully constructed by the designers. The car senses human (driver) actions and interprets them, just as it senses and interprets oxygen levels, tire pressure and braking behavior. In this period of ubiquitous computing, digital networking (once called telematics) increasingly permeates almost all technologies—the ‘internet of things’. The UAV or ‘drone’ is a spectacular example, linked in real time by satellite communications to soldiers in underground bunkers on the other side of the planet. More benign and domestic examples surround us, such as the increasing presence of the internet in cars. The notion of a freestanding autonomous machine or robot becomes increasingly untenable.

At the same time, the blurring of control between the machine and the biological is increasingly mirrored by a blending of bodies and machines. Ezra Pound said ‘artists are the antennae of the race’.2 Stelarc has been such an antenna over decades, performatively presenting or modeling such networked and fleshy robots, from the Third Hand to the Fractal Flesh and Split Body performances (Fig. 2—Split Body), to the Ping Body and Parasite performances, and to the more recent exoskeleton machines. The collaborative project Silent Barrage uses a culture of rat neurons in Atlanta to control a robotic installation in Australia.3 Given these levels of complexity, it is technically naïve to refer to a simple powered machine without sensor feedback loops as being a ‘robot’. In the same way that terminology like ‘interactivity’, ‘digital art’ and ‘new media’ now seems decidedly quaint, it may be anachronistic to call anything a robot anymore.
Fig. 2

Stelarc: Split Body—Voltage-In/Voltage-Out. Galerie Kapelica, Ljubljana, 1996. Photographer: Igor Andjelic

Materiality and Representation

Human construction of increasingly abstracted techniques of representation has developed and accelerated over recent centuries. Image making, speech, then writing—possibly in that order—constituted, or at least signaled, our break with our primate cousins in the Paleolithic [11]. Portable but archival documentation of writing (the book and the scroll) ushered in a second stage of representational systems which have culminated in our time in electrical communication and representation systems. In the process, the abstract, even disembodied, nature of ‘information’ has become increasingly valorized. But we must add two caveats. First, these are representational systems, and second, they all remain dependent on biological sensing processes. The brain is material and biological ‘all the way down.’ There is no ‘information’ in the brain, in the sense of digital bits. Every so often some report in neurological research claims to have identified computational elements, bits, ‘data’ or Boolean operations. That is to be expected, given the enormous complexity of the brain, but it is a red herring. The brain may not have information but it does have procedures. It is a wondrously dynamical, resonating, adapting thing, more akin in its behavior to pre-digital cybernetic models [19] than to the linear seriality of the von Neumann machine or the automated reasoning of the Physical Symbol System [13]. The extropian dreams of direct neural jacks and of the passing of ‘pure information’ into brains from computers seem motivated by a bizarre kind of body-loathing, more Christian than futuristic.

A ‘robot’, from these perspectives, is an ontological paradox. It is a materially instantiated thing (as opposed to an image, a representation). It operates in the world as a quasi-biological entity, and we experience it in the way we experience animate things in the world—as something that is ‘moving towards me’, ‘scurrying around’ or ‘trying to achieve a goal’. It is also, in some sense, a representation. And it carries and acts upon representations—or at least some do: the reactive robots prototyped by Brooks, Steels et al. eschewed representation. As Rodney Brooks famously said “The world is its own best model” [1].

Yet representation is itself a relational concept. Like the falling tree in the forest, a poem or a street sign is not a representation without a perceiver who is already trained in the deciphering of such representations. So representation requires prior cultural consensus, at least between two people (say an artist and a viewer). Without this, a representation is simply another thing in the world, open to interpretation. To a horse, Leonardo’s Last Supper is presumably just a wall dappled with color.

Art and Robotics

Art—if one can say anything general about it—is about making things immediate and sensorial, heightening affect through artful manipulation of tangible qualities. It is not a theoretical postulation. It is not an equation or an algorithm; it is tangible, embodied, experiential and performative. Material instantiation is a central quality of art. While some radical conceptualists have contested this, it is the exception that proves the rule [14]. The way that art ‘means’ is the normal way that (physical) things come to have meaning to people—through embodied experience. Such experience occurs via the normal equipment of the human animal, specifically the senses and sensori-motor loops.

In my opinion, the central theoretical problem of the era of digital art has been the radical opposition between the culture of computing and the culture of the arts on this very matter. The former espouses the virtues of generality and abstraction, a platonic world outside matter and time. The latter espouses the opposite: the specificity of experience and material instantiation; relationality with human scale and human experience. This is why robotic art is so important. It is a fulcrum between the abstraction of computing and the situated materiality of art. It is no wonder, then, that thinking artists who engaged computing in the 90s were confounded by the implicit assumptions in computer software and systems. By the same token, art goals were incomprehensible to computer scientists and engineers. As Billy Klüver remarked in a 1966 Life magazine article, “All of the art projects that I have worked on have at least one thing in common: from an engineer’s point of view they are ridiculous.” And it is no wonder that so few have successfully bridged the gap.

The second way that robotic art has been so crucial in the development of digital arts practices is that robotics implies the design of modalities of interaction, and the necessity for a theorization of such. But more importantly, it encompasses that field in a larger territory—the aesthetics of behavior. Robots live in the world and must survive by their ‘wits’—the effectiveness of the decisions they make on the basis of the data they collect via their sensors—and success is pragmatically measurable by the normal criteria of engineering: efficiency, optimality, speed, safety, survival. The behavior of robotic artworks must also be designed, but the criteria for such design—an aesthetics of behavior—remain a nascent field. Like other computer-based generative art practices to which it is related, robotic art is a meta-creative practice [24]. The design of genetic algorithms and fitness landscapes involves the creation of an armature upon which emergent behavior may take place. While commercial robots, like commercial software, are generally not expected to surprise us, works of emergent art are. That is what we mean by emergence [6].
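
As a minimal sketch of what such an armature might look like (hypothetical Python, illustrating a generic genetic algorithm rather than the machinery of any particular artwork): the artist designs the genome encoding, the variation operator and the fitness landscape, while the specific outcomes within that space are not individually authored.

```python
import random

GENOME_LENGTH = 8  # e.g. eight behavioural parameters for a hypothetical agent

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # The 'fitness landscape' the designer shapes; here an arbitrary stand-in function.
    return sum(g * (-1) ** i for i, g in enumerate(genome)) - abs(sum(genome))

def mutate(genome, rate=0.2):
    # Variation operator: small random perturbations of some genes.
    return [g + random.gauss(0, 0.1) if random.random() < rate else g for g in genome]

def evolve(generations=50, population_size=20):
    population = [random_genome() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("fittest genome:", [round(g, 2) for g in best])
```

The point is the division of labor rather than the algorithm itself: the designer shapes the space of possibilities, and the particular behaviors that arise within it are emergent.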

Cybernetics, Artificial Intelligence and Robotic Art

As I have discussed elsewhere [15], Robotic Art has existed since the mid twentieth century. Pioneering work in the field was already occurring in the decade after the second world war, with such landmark projects as Nicolas Schöffer’s CYSP works (Fig. 3—CYSP), Grey Walter's Turtles, and Gordon Pask’s Musicolour. The emergence of machine art and cybernetic art in the postwar period was due to a combination of factors. The second world war had generated huge advances in electromechanical technologies and technologies of control: electronics had developed rapidly to encompass radar, analog computing and the development of semi-autonomous and self-guiding machines for the war. In the late 40s and 50s, the availability of war-surplus electromechanical hardware influenced many fields. As Paul Virilio has shown, the availability of 16 mm film cameras—originally developed for use on bombers—led to the French new wave in filmmaking [23]; American animator John Whitney made his animation machines from bombsight hardware; and cyberneticians such as Ross Ashby, Grey Walter and Gordon Pask built their cybernetic machines from military surplus materials. Iannis Xenakis used electromechanical control systems for his polytopes. A decade later, Edward Ihnatowicz utilised war-surplus radar hardware for the Senster (Fig. 4—Senster). Over that period, electronic technology developed rapidly towards the integrated circuit ‘chip’, through major phases of vacuum tube technology and discrete semiconductor (transistor) technology.
Fig. 3

CYSP 1. Nicolas Schöffer, 1956

Fig. 4

Senster. Edward Ihnatowicz. (Image courtesy Richard Ihnatowicz.)

For the first two decades, the ethos of cybernetics as an ur-discipline of feedback and control was the main theoretical driver of robotics. Robotic art and the entire ‘art and technology’ movement emerged within the theoretical context of cybernetics. For cybernetics, biology and ecology were taken as models, emergent and self-organising capacities were of special interest, and cognitive success was determined by (successful) adaptation. Cybernetic concepts of feedback and homeostasis were framed by a conception of the integration of an agent with the environment. The concept of ‘control’ has been assumed to be synonymous with cybernetics, and as a result, simplistic interpretations have cast cybernetics in an ominous light. Control Theory emerged from this community; however, ‘control’ was understood not so much as heavy-handed and hegemonic, but in the sense of the management of status with respect to environmental changes.

Behaviorism, which characterised postwar psychology, eschewed internalism because it was deemed to be unscientific, the territory of philosophy. The ethos of Cybernetics was sympathetic to behaviorism in the sense that it was preoccupied with the presence of, and adaptation by, an agent in an environment. As characterized by the ‘black box’ doctrine, delving into the inner workings of the brain/mind was not encouraged. (The pioneering work of McCulloch and Pitts on neural networks shows that this was not a universal characteristic.)

By the early 70s, a different theory of control and communication, in many ways the antithesis of the cybernetic vision, was on the rise. The functionalist-internalist-computationalist paradigm of Artificial Intelligence was seen as a principled way of moving beyond behaviorism. While Cybernetics had been preoccupied with relations between an entity and its environment, considered in terms of ‘feedback loops’, AI was concerned almost exclusively with reasoning inside the black box: reasoning defined in terms of Boolean logical operations on symbols; with the construction of internal representations and with planning with respect to them. This was characterized as the SMPA (Sense Map Plan Act) approach. The question of how the symbols got there was regarded as tangential, and the possibility of ongoing loops of action in the world without the necessity of internal representation was unimaginable within the paradigm. In a classic Hegelian synthesis, in the late 1980s, the Artificial Life movement emerged out of this tension, at the very moment of the emergence of digital arts.

An Autobiographical Interlude—Petit Mal

Petit Mal, an Autonomous Robotic Artwork (begun in 1989 and first exhibited in 1995),4 sought to move interaction off the desktop, out of the shutter-glasses and into the physically embodied and social world (Fig. 5 Petit Mal).
Fig. 5

Petit Mal. Shown here at the Smile Machines exhibition, curated by Anne Marie Duguet at Transmediale 2006, Berlin. Photograph by Simon Penny

Petit Mal arose at the confluence of embodied art practice, artificial life, and the cognitivist crisis. The focus was on the bodily experience of the ‘user’ in the context of behaving installations, and on the construction of a fluid relation between bodily dynamics and technological effects.

The sole function of ‘Petit Mal’ was to engage visitors in large-scale bodily interaction—a dance. I undertook the task of building a robust mobile autonomous machine for cultural purposes—the goals of Petit Mal, apart from the obvious one of building an autonomous mobile robot which was an artwork, were:
  • to build an autonomous human-scaled machine which was perceived as an active intelligence, but which did not resort to anthropomorphism or zoomorphism—at least not in its form, though its behavior is zoomorphic. Leafing through an Edwards Scientific catalog recently, I saw any number of relatively simple mechanical toys designated ‘robots’ due solely to the application of self-adhesive plastic googly eyes. This was precisely what I wanted to avoid.

  • to build a computational machine for which the interface was entirely gestural, bodily and kinesthetic, in which there was no textual or iconic interface, no buttons or menus, keyboards or mice, no screens or codes of flashing lights.

  • to build a behaving machine that elicited play behavior among people. Petit Mal implemented a non-instrumental kind of ‘play’ which is quite incommensurable with the conventional computer-game logic of competition, numerical scoring and ‘levels’, which has more to do with rationalised industrial labor than with play [17].

  • to provide a working example of a situated and reactive robot, providing a physical and performative critique of conventional AI approaches to robot control and navigation. Midway through this project I became aware that my research agenda, arising substantially out of art interests, was consistent with progressive thinking in robotics, cognitive science and AI. I found that my intuitions about behavior programming were consonant with the bottom-up and reactive robotics work of Brooks, Steels and others [1, 2, 3, 4]. I came to see Petit Mal, technically, as a vindication of a ‘reactive’ robotics strategy and a critique of conventional AI-based robotics, as well as an experiment in artificial sociality (a toy sketch of the reactive approach follows this list).
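
The flavor of that bottom-up approach can be suggested with a toy sketch (hypothetical Python with simulated sensor readings; this is not Petit Mal’s actual control code): a small set of prioritized reactive behaviors maps current sensor values directly to motor commands, with no world model or planner in between.

```python
import random

def read_sonar():
    # Simulated ultrasonic range reading in metres (stand-in for real hardware).
    return random.uniform(0.1, 3.0)

def read_pir():
    # Simulated pyro-electric (PIR) reading: True if a warm body is sensed.
    return random.random() < 0.3

def avoid(sonar, pir):
    # Highest-priority layer: back away from anything too close.
    return (-0.3, -0.3) if sonar < 0.4 else None

def approach(sonar, pir):
    # Middle layer: move toward a detected person at a polite pace.
    return (0.4, 0.4) if pir and sonar > 0.8 else None

def wander(sonar, pir):
    # Default layer: drift with a slight random turn.
    turn = random.uniform(-0.1, 0.1)
    return (0.2 + turn, 0.2 - turn)

BEHAVIOURS = [avoid, approach, wander]  # ordered by priority

def control_step():
    sonar, pir = read_sonar(), read_pir()
    for behaviour in BEHAVIOURS:
        command = behaviour(sonar, pir)
        if command is not None:
            return command  # first behaviour that fires wins; (left, right) wheel speeds

if __name__ == "__main__":
    for _ in range(10):
        print(control_step())
```

No planning, no map: sensing is coupled to action on every cycle, and coherent behavior emerges from the interplay of the layers with the environment.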

The motivation to interact with Petit Mal seemed driven by curiosity. People willingly and quickly adjusted their behavior and pacing to extract as much behavior from the device as possible, motivated entirely by pleasure and curiosity. (Interestingly, the only demographic who were unwilling to interact were adolescents.) Petit Mal often elicited assumptions that the thing was more clever than it really was. My emphasis on engagement of the user in a situated and embodied way was consistent with contemporary critiques of AI [7, 8, 20]. These critiques put more traditional notions of intelligence as the logical manipulation of symbols in some abstract reasoning space under some pressure. New ideas about embodied and situated cognition were coming to light in work such as Lucy Suchman’s Plans and Situated Actions; Varela, Thompson and Rosch’s The Embodied Mind; and Edwin Hutchins’ Cognition in the Wild [10, 21, 22]. These works variously contested ‘internalist’ views of cognition, showing cognition as being dynamical and contextualised, facilitated by tools, procedures and human interactions.

The context in which Petit Mal was developed is significant. I had already begun the project when I took up a cross-disciplinary position at Carnegie Mellon University as Professor of Art and Robotics in 1993. I brought to that context my experience in installation, performance, and machine sculpture, along with substantial experience in designing performative technologies and persuasive sensorial experiences and, more subtly, in anticipating the cloud of cultural associations which might be elicited by a particular set of cues, materials, gestures and references.

The period of development of Petit Mal was crucial to the development of my understanding of the engineering realities of robotics and the development of my critique of cognitivism. I was fortunate to have had the opportunity to move in circles with leading roboticists and to come to terms first-hand with the technical realities and motivations of robotics. I began to recognize that my experience in creating materially instantiated sensorially affective (art)work provided me with a different approach to robotics, compared to many in the Robotics Institute whose backgrounds were in computer science and engineering. When the term ‘socially intelligent agents’ was abroad in AI circles in the late 90s, I coined the term ‘culturally intelligent agents’, and when affective computing became a buzz word in that world, my response was a forehead-slapping “well duh!” [18].

Given the available technology of the time, and the unusual nature of the project, I had to design mechanics, electro-mechanics, computational hardware and software at a comparatively low level. Petit Mal used a combination of ultrasonic and pyro-electric sensors to locate people. I designed and built my own sonar drive circuitry, pyro-electric sensor array, motor drive circuitry, brake system and rotary encoders, each of which took weeks or months to design, source components for, prototype and test. I managed mechanical reliability, power budget and charging issues so that the device could function robustly with the public in a large environment for 10–12 h a day. This was a significant achievement for any robot at the time. Most research robots—funded by large development budgets—ran for only a small fraction of that between periods of ‘downtime’.
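
The kind of budgeting involved can be illustrated with a back-of-envelope calculation (a hypothetical Python sketch; the figures below are invented for illustration and are not Petit Mal’s actual specifications): given a battery capacity and the average draw of motors, sensors and controller, one can estimate whether a 10–12 hour exhibition day is feasible.

```python
# Hypothetical power-budget sketch; none of these figures are Petit Mal's real specs.
battery_voltage_v = 12.0
battery_capacity_ah = 20.0                       # e.g. a sealed lead-acid pack
battery_energy_wh = battery_voltage_v * battery_capacity_ah

# Assumed average draw of each subsystem, with duty cycles already folded in.
draws_w = {
    "drive motors (intermittent)": 8.0,
    "sonar + pyro-electric sensors": 1.0,
    "microcontroller and logic": 1.5,
}
total_draw_w = sum(draws_w.values())

runtime_h = battery_energy_wh / total_draw_w
print(f"total average draw: {total_draw_w:.1f} W")
print(f"estimated runtime:  {runtime_h:.1f} h")  # ~22.9 h here, comfortably over a 12 h day
```

In practice the intermittency of motor use, battery ageing and charging logistics dominate, which is why this sort of budgeting has to be managed alongside mechanical reliability rather than calculated once on paper.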

Petit Mal, Affect and Embodiment

One of the conversations about Petit Mal, as persistent 20 years later as when it was first shown, centers on questions of empathy and the evocation of affect. It is constantly observed that people interacting with Petit Mal quickly develop an almost affectionate relationship with the device. While many interactive applications, even embodied systems (such as the Kinect), induce involvement or engagement, they seldom induce a sense of care or concern for characters, agents etc., even in the case of digital pets. My project Fugitive in this context offers a control for the experiment, because the behaviors of Petit Mal and of the agent in Fugitive are essentially very similar.5 Yet as engaged as users become with Fugitive, often exhausting themselves running about, they never, in my experience, develop affection of the order induced by Petit Mal.

One might also compare Petit Mal to the much more recent, dynamically and behaviorally sophisticated 3D agent ‘Sniff’ (Karolina Sobecka and James George, 2009).6 Sniff, a virtual pup, deploys persuasive dogginess in its modeling, animation and behaviors. In a sophisticated aesthetic choice, Sniff is presented in wireframe (Fig. 6 Sniff). This was probably a wise decision, as lifelike texture mapping would drag it into the ‘uncanny valley’ [12]. While naturalistic and beguiling, Sniff remains a screenal representation of a cute dog. One wonders what kinds of responses Sniff would induce if encountered in an embodied immersive environment like the CAVE.7 More germane to the comparison with Petit Mal, one might also ask how an audience might respond to Sniff’s behavioral repertoire grafted onto a stick figure, or a ball.
Fig. 6

Sniff. Karolina Sobecka and James George 2009. Photograph courtesy of the artists

What could it be about Petit Mal that induces empathy? The first and most obvious observation is that it is materially instantiated. As simple and self-evident as this fact is, in our obsessively screen- and image-oriented digital culture it seems necessary to remind ourselves of basic neuro-developmental realities: that for material creatures in the world, the significance of material realities is fundamental, both historically and perceptually preceding image and text, the representational cultural modalities. Things can hurt us, and we can exploit things to protect ourselves. Things can eat us and we can eat things. We distinguish between the living and the non-living, between the autonomously flying and the simply falling, instantaneously.

Petit Mal is not zoomorphic in its physical form. As noted, this was an explicit intention of the project. But its behaviors, its dynamics, are zoomorphic. Petit Mal performs liveliness. Were Petit Mal twice or half the size, different emotions would come into play. Physical size plays an important role. Petit Mal is child or pet-sized—probably not big enough to be dangerous, a quality reinforced by its spindliness. Its movements are hesitant and not intimidating. So although physical instantiation is fundamental to the inducing of empathy, the specific qualities of that embodiment, as expressed in physical form and dynamics, ensure it.

Computationalism and Embodiment

A generation after Dreyfus’s phenomenological exegesis in ‘What Computers Can’t Do’ [7] and the demise of Good Old Fashioned AI (GOFAI) [9], one still hears excited conversation regarding the purported ‘singularity’, when computational ‘intelligence’ exceeds human intelligence.8 The conception of intelligence which makes the notion of the singularity even possible is thoroughly dependent on the idea that the requirements for thinking, or intelligent action in the world, are satisfied by the Physical Symbol System Hypothesis. The circularity of reasoning which permits such a concept we might call the ‘Deep Blue fallacy’. In line with the commitment to symbolic reasoning in AI, chess playing had been taken as a test case of human intellectual achievement, so when Deep Blue beat chess grandmaster Kasparov, AI was deemed to have succeeded. But inasmuch as chess is a game which can be entirely described in a set of mutually consistent logical rules, with no necessity for disambiguating the world, it is isomorphic with AI itself.

Thus, the fact that a computer can play chess is unsurprising. Real-world tasks, such as perfecting a recipe for chocolate cake, are in fact much more demanding, possibly outside the capability of AI. The failure of GOFAI was rooted in the insurmountable difficulties of coordinating information systems with the real, lived physical world ‘out there’. In hindsight, it should not have been a surprise that an automation of Victorian mathematical logic was neither necessary nor sufficient to equip a synthetic organism to cope in the world, but such was the hubris of the field. In this history we see AI cast not so much as futuristic but as anachronistic.

According to the Sense Map Plan Act (SMPA) paradigm of conventional AI, robots operate in the world via a serial von Neumann process of input, processing and output. This construction owes more to mechanistic models such as the industrial production line than to biological, ecological or enactive models. Internally, according to this model, perception is separate from action, separated by information processing, in a linear one-way process. The sensor and effector ends of the process are referred to, significantly, as ‘peripherals’ and serve the function of transduction into and out of digital representations. This conception reproduces an Enlightenment individual autonomy, and eschews consideration of community, intersubjectivity, agency, feedback, adaptation, autopoiesis, or enactive conceptions of cognition.
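
In schematic form (a hypothetical Python sketch, not drawn from any particular system), the SMPA loop looks like this: each stage completes before the next begins, and no action is taken until an internal model has been built and a plan reasoned over it.

```python
def sense():
    # Gather raw readings from the 'peripheral' sensors.
    return {"range_scan": [2.1, 1.8, 0.6]}

def build_map(readings, world_model):
    # Fold the readings into an internal representation of the world.
    world_model["obstacles"] = [d for d in readings["range_scan"] if d < 1.0]
    return world_model

def plan(world_model, goal):
    # Reason over the representation to produce a sequence of actions.
    return ["turn_left", "move_forward"] if world_model["obstacles"] else ["move_forward"]

def act(steps):
    # Only now do the effectors move; the world may have changed in the meantime.
    for step in steps:
        print("executing:", step)

world_model = {}
for _ in range(3):  # three cycles of the strictly serial loop
    readings = sense()
    world_model = build_map(readings, world_model)
    act(plan(world_model, goal="doorway"))
```

Contrast this with the reactive sketch earlier in this essay, in which sensing is coupled to action on every cycle and no model mediates between them.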

It is important to recognize that however powerful localized or distributed digital computer systems are, they can only make meaningful interventions in the world by virtue of functional interfaces with the world. The negotiation of atoms into bits is by no means as facile as the notion of analog-to-digital conversion would imply. We must note that in the context of, say, music technology, this conversion is from voltages or waveforms to bits. As such, although it is continuous as opposed to discrete, the data already exists in a quasi-numerical form. The problem is of an entirely different order when the task is the discernment of salient features of a complex, heterogeneous and noisy electrophysical world. Not only might salience exist in differing electrophysical phenomena, varying by amplitude, frequency or any number of other more complex variables, but the task of building symbolic representations upon which computation can take place is potentially far more complex than the computation itself. And if ‘sensing’ requires intelligence, and is not a trivial matter of analog-to-digital conversion, then the von Neumann architecture is fallacious. As such, intelligence in a machine cannot be limited to its processor. To expand the vision further, the behavior of a machine—that is, its successful negotiation of tasks in an environment—demands a synchronisation of structural, electromechanical, sensing and computational elements. Thus its ‘intelligence’ is manifested in the interaction of digital reasoning, sensor functions, and material aspects. The ethos of ‘platform independence’ does not apply. There is always a sensitive interdependence between these aspects of the system. Code must be informed by and constrained by physical form and dynamics. Hence the ‘intelligence’ of any robot is in part in its non-computational embodiment.
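
A trivial illustration of the gap between transduction and salience (hypothetical Python with synthetic data): digitising a noisy signal is the easy part; deciding that some fluctuation in it amounts to ‘someone has arrived’ already requires smoothing, thresholds and hysteresis, that is, a small, assumption-laden theory of what matters.

```python
import random

# Synthetic 'raw' trace: ambient noise with a burst that might be a person passing a sensor.
raw = ([random.gauss(0.0, 0.05) for _ in range(50)]
       + [random.gauss(0.6, 0.1) for _ in range(20)]
       + [random.gauss(0.0, 0.05) for _ in range(30)])

def moving_average(signal, window=5):
    # Smoothing: the analog-to-digital conversion alone carries no salience.
    smoothed = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

def detect_presence(signal, on=0.4, off=0.2):
    # Threshold with hysteresis: a crude notion of 'someone is there'.
    present, events = False, []
    for i, value in enumerate(signal):
        if not present and value > on:
            present = True
            events.append(("appears", i))
        elif present and value < off:
            present = False
            events.append(("leaves", i))
    return events

print(detect_presence(moving_average(raw)))
```

Every constant in this sketch (window length, thresholds) encodes an assumption about what counts as a salient event, and it is precisely this layer of judgement that sits between the world and any symbolic representation of it.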

What Does It Mean to Do Robotic Art Now?

As robotic technologies become increasingly cheap, available and user-friendly, it is no surprise that we more commonly see artworks incorporating ‘robotic’ elements. Yet often, that robotic capability is deployed in fairly familiar and formulaic ways. It is something of an embarrassment to recognize that in robotic art and interactive art generally, interaction schemes have not advanced much since the pioneering work of Grey Walter, especially given the explosion in computational capability over the last half century.9 At this juncture, we can see robotic art bifurcating in the way that interactive art has bifurcated. On the one hand, we see modalities and genres of interaction stabilized to the point that they recede into the cognitive background and simply support the promulgation of ‘content’. Modalities of web interaction, of games, and of avatar spaces such as Second Life fall into this category. Other work continues to pursue a formal aesthetic inquiry into modalities of interaction, foregrounding the interaction itself. The same is true in robotic art. As robotic technologies increasingly become consumer commodities, the choice to deploy a robotic approach will be a design decision. On the other hand, there is plenty of room for work which reflexively interrogates the phenomenon of the quasi-biological machine.

The realms of social robotics and culturally intelligent agents offer expansive opportunities for such research. Utopian and dystopian visions of a robotic future remain a rich territory for exploration, as indicated by the uncanny eroticism of Jordan Wolfson’s sexy robot dancer “(female figure)”, shown at David Zwirner gallery, New York, 2014.10 While this work is uncanny and thought provoking, it is an animatronic puppet, not a robot in the sense we have been discussing. It straddles two cultural forms: the C18th automata and the various robotically enhanced sex dolls which are easy to find on the internet. As such it reminds us of how uncanny the automata of Jaquet-Droz must have been in their day (Fig. 7—Scribe).
Fig. 7

Scribe. Built by Pierre Jaquet-Droz, Henri-Louis Jaquet-Droz, and Jean-Frédéric Leschot between 1768 and 1774. Musée d’Art et d’Histoire of Neuchâtel, Switzerland

It is worth observing that those extraordinary machines were never accorded status as art—then or now—but remained novelties. Jack Burnham called kinetic sculpture ‘the unrequited art’ [5]. We can see a consistent conservatism in the art world which hews to the static work and the contemplative mode of consumption. Until recently, the art world has shied away from consideration of all kinds of dynamical new media practices, screen-based as well as robotic. This, I think, has to do with the radical ontological shift inherent in these forms, which are performative as opposed to representational [16].

But Wolfson’s work is transgressive on the plane of polite acceptability as well, standing as it does uncomfortably between art and the pornographic. Sex is endlessly interesting to humans of course, and thus it is a constant subject for art, including robotic art. A much older project which deals with much the same issues in a more handcrafted style is Them Fuckin’ Robots by Laura Kikauka and Norman White, of 1989. More recently, Sexed Robots by Paul Granjon, of 2005, adds genitalia and sexual behavior to devices very reminiscent of Grey Walter’s Turtles.11

Conclusion

Robotic art challenges art traditions in one way and new media art in another. The challenge to art is around questions of an aesthetics of behavior and the shift to a performative ontology. The challenge to digital art is to give up the implicit Cartesianism in the fictions of disembodied information, and to grapple with materiality and embodiment again. In order to make headway where previous agendas foundered, well-informed robotic art research must be cognizant of the collapse of computationalist constructs of AI which are predicated upon a fictitious division between mind and body, information and matter, software and hardware. By the same token, such research agendas must pay attention to the new theorisation rooted in artificial life and post-cognitivist cognitive science. That is, it must take questions of materiality and embodiment seriously.

Footnotes

  1. ‘Fast, Cheap and Out of Control’, Errol Morris, 1997, featured Australian roboticist Rodney Brooks, among others.

  2. He continued “but the bullet-headed many will never learn to trust their great artists.” Instigations of Ezra Pound (1967).

  3. Silent Barrage. https://vimeo.com/5620739, accessed 6 June 2014.

  4. As with any long-term project, there is a variety of milestone dates for Petit Mal. The project was designed and the aluminium frame constructed in 1989. The major sensor and electro-mechanical parts (sensor head, motor-wheel system) were built in the ensuing couple of years, along with simple solutions to control electronics. In 1993, the GCB (68hc11-based) microcontroller was introduced to the system and serious software development and testing ensued.

  5. simonpenny.net.

  6.

  7. The CAVE, a recursive acronym for Cave Automatic Virtual Environment, was an arrangement of (usually four) stereographic projection screens arranged as the sides of a cube surrounding the user, who wore shutter glasses and whose position and gaze orientation were tracked, usually with Polhemus magnetic sensors.

  8. The first use of the term “singularity” in this context was by mathematician John von Neumann, in 1958. Ray Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic The Computer and the Brain.

  9. British neuroscientist and cybernetician Grey Walter famously built two simple autonomous robots, Elmer and Elsie, in the late 1940s.

  10.

  11.

References

  1. Brooks R (1990) Elephants don’t play chess. Robot Auton Syst 6:3–15
  2. Brooks R (1985) A robust layered control system for a mobile robot. Massachusetts Institute of Technology, Artificial Intelligence Laboratory, A.I. Memo No. 864
  3. Brooks R (1991a) Intelligence without reason. Massachusetts Institute of Technology, Artificial Intelligence Laboratory, A.I. Memo No. 1293
  4. Brooks R (1991b) Intelligence without representation. Artif Intell J 47:139–159
  5. Burnham J (1968) Beyond modern sculpture: the effects of science and technology on the sculpture of this century. G. Braziller, New York
  6. Cariani P (1991) Emergence and artificial life. In: Langton CG, Taylor C, Farmer JD, Rasmussen S (eds) Artificial life II. Santa Fe Institute Studies in the Sciences of Complexity, vol X. Addison-Wesley, Reading, pp 775–798
  7. Dreyfus HL (1972) What computers can’t do: a critique of artificial reason. Harper & Row, New York
  8. Harnad S (1990) The symbol grounding problem. Phys D 42:335–346
  9. Haugeland J (1985) Artificial intelligence: the very idea. Bradford/MIT Press, Cambridge
  10. Hutchins E (1996) Cognition in the wild. MIT Press, Cambridge
  11. Malafouris L (2007) Before and beyond representation: towards an enactive conception of the palaeolithic image. In: Renfrew C, Morley I (eds) Image and imagination: a global prehistory of figurative representation. The McDonald Institute for Archaeological Research, Cambridge, pp 287–300
  12. Mori M (1970) The uncanny valley (trans: MacDorman KF, Minato T). Energy 7(4):33–35
  13. Newell A, Simon HA (1976) Computer science as empirical inquiry: symbols and search. Commun ACM 19(3):113–126
  14. Penny S (1987) Simulation, digitization, interaction: the impact of computing on the arts. Artlink 7(3/4), Art + Tech issue
  15. Penny S (1989a) Art practice in the age of the thinking machine. Performance 56/7, UK
  16. Penny S (1989b) Charlie Chaplin, Stelarc and the future of humanity. Artlink 9(1)
  17. Penny S (1995) Paradigms in collision: a tentative taxonomy of interactive art. In: Rötzer F (ed) Schöne Neue Welten. Boer, Germany
  18. Penny S (1999) Agents as artworks and agent design as artistic practice. In: Dautenhahn K (ed) Human cognition and social agent technology. John Benjamins Publishing Company
  19. Pickering A (2010) The cybernetic brain. University of Chicago Press, Chicago
  20. Searle J (1980) Minds, brains, and programs. Behav Brain Sci 3(3):417–457
  21. Suchman L (1987) Plans and situated actions: the problem of human-machine communication. Cambridge University Press, Cambridge/New York
  22. Varela FJ, Thompson E, Rosch E (1991) The embodied mind: cognitive science and human experience. MIT Press, Cambridge, MA
  23. Virilio P (1986) War and cinema: the logistics of perception. Verso
  24. Whitelaw M (2006) Metacreation: art and artificial life. The MIT Press

Copyright information

© Springer Science+Business Media Singapore 2016

Authors and Affiliations

  1. University of California, Irvine, USA
