Still and Useless: The Ultimate Automaton
Robots descend from the long genealogy of automata: machines with no practical purpose, essentially meant to simulate objects embedded with an anima. Our hypothesis is that the drive behind the creation of every robot is rooted in the primordial myth of infusing inanimate matter with the breath of life: the aim of any automaton is to become a living thing. The ultimate automaton does not need to move or to do anything: the essence of any robot lies in the desire to simulate life to the point where it actually becomes alive. This chapter presents the Aerostabile research-creation program, which progressively evolved from an architectural origin into a research platform for exploring the elements that maximize this deliberately created illusion. It goes through the origins and main methodologies of the program, then describes several artworks created along its evolution, focusing on the notions of behaviour and observed interactivity.
Automata and the Art of Life-Simulation
Etymologically speaking, “automaton” describes a machine that can not only move or work, but also think and will, three notions usually associated with beings infused with a mind: conscious living beings. The oldest known automata were made for purposes often quite far from what we expect of contemporary robots: during Egyptian, Greek and Roman Antiquity, as well as in the Japanese Edo era, they were created to simulate animated or living beings, to instil a sense of awe or mysticism, or simply for amusement. In most cases, their designers, or the people presenting them, declared that they were moved by some kind of spirit or deity.
Robots appeared in the XXth century as automata of a specific kind. As is well known, the word “robot” appeared for the first time in the 1920 R.U.R. theatre play by Czech writer Karel Čapek (though the word itself was coined by his brother Josef). It comes from the Czech word “robota”, meaning drudgery or forced labour, itself derived from a Slavic root that means “slave”. It conveys the status of robots as machines specifically designed to compensate for humans’ limited abilities in the execution of tedious, precise, dangerous, costly or heavy tasks. Such working automata began to develop at a large scale very late in history, at the beginning of the 60s. Before that, from the Renaissance on and especially during the XVIIIth century, automata were created mainly to simulate complex human or animal behaviours: playing music, writing letters, playing chess, eating, and even digesting and defecating, which resulted in some of the finest mechanical pieces of all time. The idea behind such attempts was to simulate life through its most complex manifestations: the precision of the simulation would reinforce the interpretation of the machine as a living organism. Smaller automata created for pleasure or amusement became very popular during the XIXth century.
Setting aside water clocks, whose origin is lost in the depths of time, the first automata specifically designed for practical purposes were most likely the early timepieces of the 13th century. All along their history, mechanical clocks remained intimately connected with the world of automata. Some of them, like elaborate Swiss cuckoo clocks or James Cox’s extraordinary Peacock Clock (now at the Hermitage Museum in Saint Petersburg), were associated with animated characters whose sophistication reveals their belonging to the realm of automata. Even today, complex clockworks, such as the Supercomplication watch made by Patek Philippe for Henry Graves, reach price tags of several million dollars, an amount completely disproportionate for a device whose sole function is to indicate time, but which begins to make (some) sense for an automaton artwork—a device that seems to be animated by a living process.
The first mentions of robots specifically made for the execution of tasks date from the 20s. Jacquard’s looms, at the beginning of the XIXth century, had several characteristics of automata, but they were powered by human beings. This was also the case for the first computing devices, such as Pascal’s Pascaline or Babbage’s machines, the latter directly inspired by Jacquard’s looms. Apart from the first computing machines such as Zuse’s Z3 (1941), the ENIAC or the Colossus, the first practical device that fully deserves the name “robot” seems to be General Motors’ “Unimate”, put to work in 1961. Computing machines also belong to the category of automata, but they are unable to implement any physical task; moreover, they have a unique feature that distinguishes them from all other automata: their ability to simulate themselves, and to simulate automata that replicate themselves. They can contain all the information required to produce a copy of themselves, as well as the information to produce the devices required to implement these copies. This property of self-representation/self-replication is unique and important enough to provide a precise definition of a computer; here again, it is usually associated with living organisms.
The long history of automata, combined with our fascination for self-animated machines, gives all of them a powerful mythical stance, which can be seen as the essential cause of their very existence and proliferation. Trying to communicate with objects made from inert materials can be seen as one of the manifestations of humans’ primordial will to relate to every element of the world, even the non-living ones, and to convince themselves that they are not strangers in the universe that surrounds them. The obsession with the imitation of living beings does not only appear through robots, but has long been the object of many forms of art, from sculpture and painting to architecture. Automata are the realm where our impulse for animation, which fundamentally means the process by which a soul (anima) can appear spontaneously in an artefact, expands to include movement and behaviour. It is directly related to a wealth of ancient legends in which animated beings are created from inert materials, from Prometheus to Adam, from Frankenstein to Pinocchio.
Some of these attempts may seem utterly naïve to us, but their role in the development of science and technology cannot be neglected.1 Most of our contemporary technologies are related to mythological obsessions that can be traced back to the earliest Antiquity: skyscrapers (a building like a mountain—the Babel tower), planes (flying—Daedalus and Icarus), rapid prototyping machines (fairy magic wands), the internet (ubiquity)… The myth of a machine that simulates a living being to the point where it can be infused with life, thus transforming its creator into a demiurge, is at the root of the genealogy of nearly all robots and automata. It is thus not surprising that humanoid robots remain so popular and remain the object of so much research and experimentation, despite the inadequacy of human morphology and abilities for most of the tasks we try to delegate to robots: rationally speaking, the design of a robot should be optimized to implement tasks that it can do better, or more efficiently, than we can. The humanoid shape is seldom optimal in that respect. This might be the best demonstration of the non-rationality of all attempts at creating artificial humanoids: such experiments reveal to what extent the field of automata, despite its new techno-scientific clothes, escapes rational logic in several respects, and remains deeply rooted in the fields of mythology, poetics, and the arts.
“Robotic arts” is the most common expression used to designate artistic practices in which robots are designed, implemented, or even hacked for the sole purpose of producing emotions, impressions and feelings, and of creating sense and signification through events that are originally senseless. In the vast majority of cases, such practices would be more appropriately named “arts of automata”, since very few pieces of robotic art are actually made to implement any kind of practical task. Arts of automata can be conceptually described as the process of eliminating all pragmatic or practical functionalities from a robot, in order to create a machine whose sole purpose is to trigger empathy, fear, amusement, compassion—the whole range of human feelings and emotions, including awe, just like the first religious automata. More than any other robotic technology, robotic arts are rooted in the most distant past through their similarities with such very early attempts.
Emerging Emotions, Induced Feelings
The question of which features of a robot actually trigger these feelings is central to our research programs. It is a vast and important topic, and the object of an increasing number of studies.2 Obviously, robots with humanoid features have an advantage: the interpretation of their movements and facial expressions is facilitated by our own acquired knowledge and culture. Even very approximate simulations provide enough clues for an observer to find out the meaning or message they try to convey. In the medical field, several experiments use human-face robots for therapeutic purposes, e.g. for helping people with conditions such as autism or Asperger’s syndrome3: the exact repetition and predictability of their reactions creates a safety perimeter within which these patients will take the risk of attempting a relation with them. It may however be easily observed that such features are not necessary for generating emotions. For instance, most of Bill Vorn’s machines have no face,4 and adopt a heavily industrial appearance. Every element of their morphology is inspired by working robots; their appearance is often more hostile than welcoming. Their expressive power is nonetheless undeniable.
In any case, the expressive power of an automaton depends not only on its morphology, but also on the number of configurations it can take. Each configuration (“state”), as well as every transition between these states, has the potential to trigger specific emotions in the observer. At first glance, this seemingly reductive statement appears to limit their expressivity: being mechanical devices, automata cannot compete with biological organisms in the number, variety, precision or subtlety of their movements. Strangely enough though, several robots with a very limited number of states reach an astonishingly high level of expressivity, despite the fact that the observer is fully aware of their artificial nature.
Such observations naturally raise the question of which features or reactions of an artificial device are at the origin of the emotions and feelings experienced by the people who interact with it. This topic has been the object of an exponentially growing number of studies in recent years, especially in the HRI community5; most of them point to the vast number of disciplines involved. One of the most famous and most quoted attempts at understanding the link between human emotions and robotic morphology is already old: Mori’s “Uncanny Valley” model links the nature of the feelings we experience while looking at a robot to the level of resemblance between that robot and a human being.6 Mori’s hypothesis is unfortunately plagued by blurred definitions and a strong level of empiricism, which prevents it from being really useful even for planning an experimental protocol. The feelings that are listed (“negative feelings”, “revulsion”) are too vaguely defined to even allow the possibility of a metric, mainly because the level and nature of feelings in front of any stimulus are not observer-independent: they are inextricably linked to people’s cultural origin and personal history. Numerous observations and examples show frequent cases where automata with no biological or anthropomorphic features whatsoever do trigger feelings of empathy that can be stronger than the ones triggered by human-shaped or animal-inspired artefacts. Our own observations confirmed that the behaviour and reactions of an artificial system, especially during interactive processes, are much more important than any particular morphological feature. We came to this conclusion through our own research, along the development of a research-creation program called Aerostabiles, derived from a former research program on Self-Assembling Intelligent Lighter-than-Air Structures (SAILS).
Before elaborating on this point though, some information should be given on the nature, history and evolution of one of our first robotic art projects, called Paradoxical Sleep.
An Architectural Origin
The purpose of the Aerostabile program is to design and implement automata that hover in mid-air and are able to generate flying architectures by self-assembling in flight. It was born from the desire to materialize another age-old myth, this time originating from the field of architecture: the myth of a heavy mass freed from the law of gravity.7 This idea can be found in several countries all along the history of architecture. Even today, a building like a castle or a palace flying in mid-air with its thousands of tons of stone or concrete is everywhere seen as the manifestation of a supernatural power. Some of the oldest mythological examples are the flying vimanas (chariots or palaces) mentioned in Ancient India; though their description in the literature is not rigorously attested, they are still the object of a lasting fascination, and some representations show them as seven-storey-high flying buildings.
Malevich believed that weightlessness constituted the highest aim of technology, and hoped that scientific advances would make free unpowered flight feasible, allowing cities to be placed as satellites floating in the cosmos.9
Without reaching such extremes, more familiar structures such as cantilever bridges or skyscrapers represent challenges to the limitations imposed by the physics of gravity and materials. Recent examples include buildings inspired by aeronautics, such as Jean Nouvel’s Guthrie Theater with its cantilevered awning, or Calatrava’s opera house in Valencia, where a huge, leaf-like structure seems to defy all laws of gravity and resistance of materials by seemingly hovering over an egg-shaped structure.
As a robot, the flying cube originally had no expected or intended practical use: it takes some effort to even imagine a possible application for it. It was just meant to float still, as if in a deep artificial meditation. This first suspended shape was christened “Aerostabile”, in reference to Calder’s “mobiles”, and gave its name to the whole research program. In such a work, technology becomes its own poetics. The flying automaton is only there as a being: no doing or making is involved, no action or role justifies its existence, as would be the case for a conventional robot. No arm, clamp, leg or protrusion is even there to suggest possible uses. The planned immobility further increases this impression of uselessness: building an automaton (etymologically: that which moves, wills and thinks autonomously) that does not even move contradicts the very idea of a robot in the same way as a cubic shape contradicts the very idea of flying.
From Architecture to Artificial Beings
Conceptually speaking, the flying cubes of the Aerostabile program are automatic machines from which everything that could contribute to identifying them as robots has been removed, to focus on what constitutes the symbolical essence of the automaton. No one builds a robot for the sole purpose of leaving it still: stillness is a trivial and uninteresting task for a ground robot. It is not considered its most desirable behaviour, and it is very easy to achieve: when unplugged and discharged, most robots will end up still and remain still forever. It is however quite a challenge for a hovering automaton.11,12 Even when it reaches aerostatic equilibrium, several forces and influences, such as micro-atmospheric movements, convection streams, ventilation and pressure variations, conspire to make it drift from its original position, to which it may never return. Keeping it still requires a complex combination of physics, mechatronics and software. In order to better manage and coordinate this work, we developed a workflow made of several parallel threads corresponding to the different areas of expertise required; considering the scope of disciplines involved, this workflow became by itself a specificity of the project,13 and led to the development of an international cooperation. Each aerostabile is equipped with up to fourteen distance sensors, as many light sensors, a compass, an inclinometer, eight or twelve ducted fans, a series of controllers and an onboard computer. In the simplest version, the distance to the nearest walls is measured at very short time intervals. Each departure from the prescribed position is immediately rectified by a thrust from the ducted fans, the strength, duration and acceleration curve of which are precisely determined by the computer. Such repositioning processes may occur up to one hundred times a second: for a hovering object, stillness is not a state, but a dynamic process.
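The principle of this dynamic stillness can be sketched as a simple correction loop. The following fragment is a deliberately naive, one-axis illustration: the gains, the noise level and the 100 Hz rate are our own assumptions for the sake of the sketch, not the Aerostabile firmware.

```python
# One-axis sketch of station-keeping: a hovering body drifts under small
# random perturbations, and a proportional-derivative correction pulls it
# back toward the prescribed position. All gains and noise levels are
# illustrative assumptions, not the project's actual control software.
import random

def pd_thrust(error, d_error, kp=2.0, kd=0.8):
    """Proportional-derivative thrust command along one axis."""
    return kp * error + kd * d_error

def hover(target=0.0, start_offset=0.5, steps=500, dt=0.01):
    rng = random.Random(0)                  # deterministic perturbations
    pos, vel = target + start_offset, 0.0
    prev_error = target - pos
    for _ in range(steps):                  # dt = 10 ms, i.e. ~100 Hz
        error = target - pos                # from the distance sensors
        thrust = pd_thrust(error, (error - prev_error) / dt)
        drift = rng.uniform(-0.05, 0.05)    # convection, micro-currents
        vel += (thrust + drift) * dt        # unit mass, Euler integration
        pos += vel * dt
        prev_error = error
    return abs(target - pos)                # residual positioning error
```

After a few simulated seconds the residual error has shrunk well below the initial half-metre offset; stop the loop, and the cube drifts away for good, which is exactly what "stillness as a process" means.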
The counterpart of the immobility of the automaton is thus a frantic agitation of electrons in all of its circuitry, making it extremely active in an invisible way. It is from this state that the name of the installation was derived: in humans, “paradoxical sleep” is the last sleeping phase of the night, during which the brain dreams. Though the body is totally relaxed, the brain is more active than during waking time, in a direct analogy with the state of a hovering aerostabile.
Like the vast majority of technological art projects, ours did not work quite as expected or planned. After one year of tests and experiments, we managed to reach a quasi-still state, but the constant repositioning of the cube created small, smooth oscillations that could easily be interpreted as a form of hesitation, conveying the mood of an uncertain or undecided mind rather than the appearance of pure levitation. The intermittent noise of the motors began to be interpreted as a kind of breathing. Despite all our intentions, and despite a morphology that is anything but biomorphic, the flying cube was explicitly seen by many visitors as a big, clumsy animal, immersed in a deep dream or meditation. It revealed in a rather radical way that no automaton can escape its interpretation as a living organism, and that any animated object can readily trigger such interpretations and meanings. It crystallized the essential symbolical ambition of any automaton, the mythological impulse without which no robot would ever have existed: to be assimilated to a living being.
The flying cubes, as well as people’s reactions to them, refute the basic claim of the Uncanny Valley hypothesis: though their morphology presents no similarity with humans or animals, even remotely, they usually elicit very positive feelings—more than several animal-like or human-like artefacts.14 The range of feelings mentioned by visitors includes empathy, tenderness, sympathy, amusement… whereas fear, uneasiness or weirdness are seldom mentioned. Curiously enough, toddlers are strongly attracted by the cubes, and demonstrate through their movements and facial expressions a strong desire to interact with them. After its first performances, the flying cubes project evolved from its architectural origin to give birth to a complex and dense art piece about relations and artificial emotions.
Towards Hybrid Choreographies
From these observations, the idea of exploring the potential of planned interactions with people quickly emerged. Several projects and works in that direction were developed in the following years, including performances in which dancers or actors developed hybrid, interactive choreographies with the cubes. Some of these were specifically conceived to maximize the expressive ability of the automata. They encouraged our team to undertake a detailed study to identify the elements of their behaviour that could best convey expressions or emotions.
We first thought that these elements would be very limited in number. The cubes have no limbs or moving protrusions that could generate emotions or feelings through movement: they can only communicate through displacements of their whole body, or through the sounds they produce. In terms of movements, they had, in their first version, only four degrees of freedom: three translations (back and forth, up and down, left and right) and one rotation (around the vertical axis). This seems very few at first glance: a human head whose expressivity was limited to its three degrees of freedom relative to the body would only be able to say “yes” (rotation around the left-right axis), “no” (rotation around the top-bottom axis) or “maybe” (rotation around the back-front axis). But every movement needs more than three parameters to be fully defined, and it appears more fruitful to characterize it through an analytic description, which physically corresponds to the position and its two time derivatives, speed and acceleration. To these, we add for our purposes another feature that corresponds to the acceleration curve: the oscillating rotation of a human head around a vertical axis will convey very different meanings if the oscillation rhythm is fast, slow, or if it stops after one half-rotation.15
The four initial degrees of freedom are thus replaced by a richer set of parameters: three scalars for position, three for orientation, six for displacement (three components for speed and three for acceleration) and six for rotation (three components for rotational speed and three for rotational acceleration). This gives a total of eighteen parameters to determine where the cube is and where it is heading. If we add that the acceleration curve of each acceleration vector can itself be controlled by an arbitrary number of parameters, it is easy to see that the number of expressions that can be conveyed by a single floating cube becomes much greater than what the minimalism of its shape seems to imply.
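To make the counting concrete, the eighteen scalars can be grouped into a small state structure. This is a schematic illustration only; the grouping and names are ours, not those of the project's software.

```python
# Schematic 18-scalar kinematic state of a flying cube: position and
# orientation, plus the first two time derivatives of each. Names and
# grouping are illustrative, not taken from the Aerostabile codebase.
from dataclasses import dataclass, fields

Vec3 = tuple  # (x, y, z) components

@dataclass
class CubeState:
    position: Vec3 = (0.0, 0.0, 0.0)       # m
    orientation: Vec3 = (0.0, 0.0, 0.0)    # deg, around the X/Y/Z axes
    linear_speed: Vec3 = (0.0, 0.0, 0.0)   # m/s
    linear_accel: Vec3 = (0.0, 0.0, 0.0)   # m/s²
    angular_speed: Vec3 = (0.0, 0.0, 0.0)  # deg/s
    angular_accel: Vec3 = (0.0, 0.0, 0.0)  # deg/s²

    def scalar_count(self) -> int:
        """Total number of scalar parameters in the state."""
        return sum(len(getattr(self, f.name)) for f in fields(self))
```

Six three-component groups give the eighteen parameters counted above; the acceleration-curve shaping mentioned in the text would add further parameters on top of these.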
Several research-creation experiments, as well as experimental protocols, were designed to identify more precisely some of the mechanisms and displacements through which the cube’s expressive potential could be expanded. They were mainly implemented on our largest cubes, 225-cm-edge aerostabiles christened the “Tryphons”.16 They all call for sequences of movements whose dynamics (amplitude, speed and acceleration) play a key role in the visitor’s interpretation of their inner mood. For instance, a soft 2-m X translation (along the back-front axis, towards the visitor) does not carry the same meaning as a brisk 4-m one: the first may look like a manifestation of interest or curiosity, whereas the second can convey a threatening behaviour. A single, slow 45° oscillation around the left-right axis (horizontal and perpendicular to the visitor) may look like a greeting movement, whereas a series of short 30° oscillations around the same axis may convey clear approbation, like a head nodding “yes”. A cube lying on the ground and slowly rising to about 1 m when a visitor approaches may look friendly, interested, and ready for interaction; if it rises quickly to 3 or 4 m, it may look frightened. The slow movements of a cube adjusting its position in the Paradoxical Sleep installation give the image of a big, sleepy animal, lost in a contemplative dream; when shorter and faster, the same movements look like a feverish tremor, conveying a very nervous attitude.
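These readings can be caricatured as a lookup over movement primitives. The axes follow the examples above, but the speed threshold and the mood labels below are invented for illustration; they are not the result of our experimental protocols.

```python
# Toy mapping from a whole-body movement primitive to its likely reading
# by a visitor. The 1 m/s threshold and the mood labels are illustrative
# assumptions, not measured results.
def read_gesture(axis: str, amplitude_m: float, duration_s: float) -> str:
    mean_speed = amplitude_m / duration_s
    if axis == "X":  # translation towards the visitor
        return "curious" if mean_speed < 1.0 else "threatening"
    if axis == "Z":  # rising from the ground
        return "friendly" if mean_speed < 1.0 else "frightened"
    return "ambiguous"

# The soft 2-m approach reads as curiosity, the brisk 4-m one as a threat.
print(read_gesture("X", 2.0, 4.0))  # curious
print(read_gesture("X", 4.0, 1.0))  # threatening
```

The point of such a caricature is that the same displacement, with the same amplitude, flips meaning entirely once its dynamics change.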
The Geometry of Expressions
Basic elements of the expressive vocabulary of a flying cube:
- Position along the X (longitudinal; back-to-front), Y (transverse; left-to-right) and Z (vertical; bottom-to-top) axes, in m
- X-, Y- and Z-axis speed, in m/s
- X-, Y- and Z-axis acceleration, in m/s²
- X-, Y- and Z-axis rotation, in deg
- X-, Y- and Z-axis rotational speed, in deg/s
- X-, Y- and Z-axis rotational acceleration, in deg/s²
Practical considerations, however, limit this potential. First, the geometrical precision of this vocabulary can only convey the desired meaning if the cube is able to precisely follow a prescribed sequence of instructions. But a large flying cube, with its inefficient aerodynamics and its large inertia, cannot be controlled as easily as a ground object, or as a flying object with a flight-adapted geometry; its ranges of acceleration and speed are limited. Certain sequences of opposite displacements or rotations are forbidden, because of their negative impact on the stabilization and equilibrium of the automaton. Full rotations around arbitrary axes are difficult to control, since all references to external objects vary continuously. Second, the expressional or emotional interpretation of displacements and rotations is anything but an exact science. It strongly depends on the cultural background of the visitor17: the rotation of the head around the back-to-front axis is interpreted as “not too sure” in the Western world, and as “yes” in the Indian subcontinent. Moreover, as for all interaction processes, the attitude of a visitor or performer interacting with the cube can deeply influence the interpretation of the cube’s moods by other visitors or by an audience. One of the ways we chose to explore the impact of this “cultural dialogue effect” is the implementation of a software module that allows the control of the cubes by the human voice, through short three-note melodies sung by a performer, or by anyone with minimal singing skills. The expressive potential of the human voice, combined with the general mood of each of these melodies (major, minor, 7th…), establishes an initial atmosphere in which the reactions of the cubes take on different meanings than in full silence.
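The voice-control idea can be sketched as a classifier over the intervals of the sung melody. The fragment below is a hypothetical reconstruction, not the project's actual module; the interval patterns and the mood labels are our assumptions.

```python
# Hypothetical classifier for a sung three-note melody: the two intervals
# (in semitones) between consecutive notes determine the general mood that
# frames the cubes' reactions. Patterns and labels are illustrative only.
def melody_mood(midi_notes):
    first, second, third = midi_notes
    intervals = (second - first, third - second)
    if intervals == (4, 3):    # ascending major triad, e.g. C-E-G
        return "bright"
    if intervals == (3, 4):    # ascending minor triad, e.g. C-Eb-G
        return "somber"
    if intervals == (4, 6):    # outlining a dominant 7th, e.g. C-E-Bb
        return "unresolved"
    return "neutral"
```

Singing C-E-G (MIDI notes 60, 64, 67) would open the exchange on a bright mood, C-Eb-G (60, 63, 67) on a somber one; the cubes' subsequent reactions would then be read against that initial atmosphere.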
Because of the number of relevant variables, the conditions of a given performance are not repeatable. To reach valid and useful conclusions, the exploration of the expressive potential of artificial beings requires a methodology that differs from what is commonly encountered in applied or fundamental sciences. From the beginning of the project, we decided to develop our research around intensive work periods called “research-creation residencies”, lasting from a few days to a few weeks, during which engineers, scientists and artists from several disciplines would work together towards the elaboration and implementation of public human-automata interactive performances. The results and conclusions of such events oriented the technological and artistic developments for the following months, up to the next residency where they could be evaluated and finalized.
The reactions of the audience were extremely diverse, ranging from amusement to anger. To our surprise, however, several people tried for quite a long time to interact and speak with the cubes. Among the most intriguing moments, we saw a man who tried to teach a poem to a cube, as if he were hoping to counteract—or maybe heal—its dry, algorithmic and monosemic language through poetry; as opposed to computer code, poetry is the form of language that is open to the largest number of potential interpretations. An old woman came several times during the three weeks of the exhibition and began to confide in the cubes, complaining for instance that she felt very alone because her children never visited her. It is hard to explain why the installation, with its high-tech aesthetics, triggered behaviours usually associated with confidence or intimacy. We hypothesized that the artificial nature of the automata, associated with the almost complete predictability of their answers and their obvious inability to interpret, judge or criticize, created an atmosphere in which some people could feel secure enough to enter into a more intimate mode of discussion.
The adaptive video projections nonetheless worked fairly well. They showed sequences from the previous evening’s dance shows, transformed live by Montreal VJs during after-hours performances. We realized, however, that the expressive potential of the cubes themselves was strongly diminished by these projections: the content of the projected sequences overwhelmed the artistic impact of the cubes, which almost disappeared as automata to become mere floating screens. Instead of being artworks in themselves, they became supports for an unrelated artwork.
As with any technological arts installation, unpredictable events occurred during these weeks. At some point, the three cubes found themselves in the same corner of the flight area. They tried desperately to avoid each other, but they were so close to the lights that none of them could manage to do so: each of their displacements sent them towards the other cubes, or towards the spotlights. The resulting collisions, combined with the roaring and grunting of the motors frantically reversing their rotation direction every few seconds, gave the impression of a fight. The cubes managed to resolve the situation by themselves when one of them, through a particular interaction, was abruptly ejected from the group. It went so fast that it managed to overcome the virtual barrier of the spotlights and to fly over the audience towards the exit of the exhibition hall, as if it were fed up with the situation and wanted to leave.
Here again, obviously, the reading of the cubes’ physical behaviour as resulting from intentions or emotions is our own interpretation of strictly physical, meaningless events. What deserves to be noted is the wide gap between the simplicity of the programmed behaviour and the complexity of the interpreted one: getting involved in a fight, being fed up, running away out of exasperation, are by no means simple behaviours. The experiment revealed to what extent our brains try to make sense of everything that surrounds us and to project onto inanimate objects sets of interpretations that actually correspond to a part of ourselves.21 It shows how promptly we believe in the autonomy of animated artefacts, and how enthusiastically we surrender to this voluntary deception. Another anecdote is revealing in that respect: a psychiatry student came twice to see the Geometric Butterfly installation, and shared with us at length her “analysis” of the personality of the cubes: one was more extroverted, and acted as a leader; the second had a more reserved and quiet personality, and tended to remain in the background; the third acted as a mediator that tried in its own way to reconcile the other two. What makes this analysis all the more interesting is that the three cubes were perfectly identical and identically programmed: they behaved essentially the same way.
The Floating Head Experiment
Stelarc’s talking head was not only disembodied: it was also dematerialized, and the idea of projecting it onto a flying cube was partially triggered by the idea of reconnecting it to a physical body.23 As a matter of fact, after decades of progressive dematerialization, the current state of automata evolution seems to imply that any machine meant to learn and evolve in the real world should be aware of the state of its environment at any moment, and should learn not only from its internal processes, but also from that environment. To do so, it cannot limit itself to being a virtual entity, communicating with the material world only through flows of information. Physical information coming from a perceptive body appears to be a primordial component of learning processes, and of adaptation to a changing physical world.
By projecting the Talking Head onto an aerostabile, it became possible to increase its expressivity through the movements of the cube itself. An “attention model”, a clever piece of software developed by scientist Christian Kroos and engineer Damith Herath, allowed the cube to rotate towards a specific visitor while Stelarc’s face oriented its eyes towards them, so as to increase its level of interaction. Though rather elementary, this synergy between the two projects resulted in a haunting, strange installation, where the cube and its projected face, hovering in a dark space, looked like a levitating oracle, pronouncing prophetic sentences and answering questions about the future of intelligence, awareness and consciousness in a world where the distinction between artefacts and biological organisms is becoming more and more blurred.
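The core mechanism of such an attention model can be sketched in a few lines. The sketch below is a minimal illustration, not the actual software by Kroos and Herath: it assumes the cube knows the planar positions of the visitors around it, picks the closest one as the focus of attention, and produces a proportional yaw command to turn and face them. All function names and the saliency criterion are hypothetical.

```python
import math

def select_target(visitors):
    """Pick the most salient visitor; here saliency is simply closeness.

    visitors: list of (x, y) positions in metres, relative to the cube.
    """
    return min(visitors, key=lambda p: math.hypot(p[0], p[1]))

def yaw_error(cube_heading_rad, target):
    """Signed angle the cube must rotate to face the target, wrapped to (-pi, pi]."""
    bearing = math.atan2(target[1], target[0])
    err = bearing - cube_heading_rad
    # Wrap so the cube always turns the short way around.
    return (err + math.pi) % (2 * math.pi) - math.pi

def yaw_command(cube_heading_rad, visitors, gain=0.5):
    """Proportional rotation command towards the chosen visitor."""
    target = select_target(visitors)
    return gain * yaw_error(cube_heading_rad, target)
```

In a real installation the saliency function would weigh far richer cues (motion, face detection, persistence), and the command would feed the aerostabile’s rotor controller rather than a bare proportional gain; the sketch only shows the select-then-orient loop the text describes.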
Balades: A Major Art-Science-Technology Event
From the simple architectural image of a heavy mass freed from the law of gravity, the aerostabiles have developed into a full research program that generated a series of art and technological projects, some of them now on the verge of producing transferable applications for the theatrical stage, museology, education, space studies and robotics.24 But none of them may be more surprising than its passage from an art piece, an automaton that does not move and whose only skill is immobility, to a mechatronic being able and willing to interact with humans through the definition of a series of artificial emotions. The first Paradoxical Sleep installation puts technology to work; but technology here does not do anything practical, and does not create anything material. It tries to eliminate everything it is expected to do for the sake of generating a representation of itself, or rather of its own mythical or symbolical load. From a deep, lonely meditation to an active relation with humans in which the development of hybrid choreographies becomes possible, the flying cubes exemplify those situations where technology not only enriches the potential poetics of a project, but becomes itself a poetics and an imaginary, not through what it is or what it can do, but through what it represents. Combining a rigorously calculated morphology and a radically technological geometry with the hesitations and wanderings of an errant being, the aerostabiles translate the implicit fact that any automaton wishes, more than anything else, to become a living being. Indeed, a lucid attitude would lead us to say that no automaton ever wished anything, and that the wish actually comes out of our own minds: we project it onto inanimate artefacts. But this wish transfer is precisely at the core of every attempt at creating automata, as well as an example where lucidity may not be the most fertile attitude.
For artists as well as engineers and scientists, the deliberately accepted illusion of the automaton as a living being opens territories of exploration infinitely wider than those permitted by a too strict, objective interpretation of the machine as a mere assemblage of inanimate components.
The authors want to thank:
The School of Design at University of Quebec in Montreal
The Hexagram Institute for Research and Creation in Media Arts and Technology
The Canada Council for the Arts
The Quebec Council for Arts and Letters
The Quebec Research Fund for Nature and Technologies
The Quebec Research Fund for Society and Culture
The Natural Science and Engineering Research Council of Canada
See Bedini for a historical account of the intersection between automata, life simulation and technology.
See for instance Pioggia et al.
See for instance the Mega Hysterical Machine at http://billvorn.concordia.ca/menuall.html.
Destephe et al.
Mori, M., The Uncanny Valley, Energy 7(4), pp. 33–35, 1970.
Reeves et al.
Science/Technology: Laboratoire d’éthologie animale (G. Théraulaz, U. Paul Sabatier, Toulouse, France); Intelligent Autonomous System Lab (A. Winfield, U. of the West of England, UK); Collective Robotics Lab (now DISAL, A. Martinoli, EPFL, Lausanne, Switzerland); 3DVision (S. Roy, U. of Montreal, Canada). Arts: Society for Arts and Technology (SAT, L. Courchesne, Montreal, Canada); Hexagram (N. Reeves, Montreal, Canada). Researchers: P. Giguere, Laval University, Quebec, Canada; I. Sharf, G. Dudek, I. Rekleitis, U. McGill, Montreal, Canada.
Van der Zwaan et al.
We could compare the reaction of the audience to several kinds of automata with various morphologies, including ours, in specific robotic arts events, such as the Moscow “Science as Suspense” event.
“Tryphon” comes from the first name of the famous absent-minded scientist Tryphon Tournesol, in Hergé’s Adventures of Tintin. He is known as Cuthbert Calculus in the English translation.
See for instance Joosse et al.
A more detailed description of this work appears in Reeves.
Quebec City actresses Véronique Daudelin, Maryse Lapierre and Klervi Thienpont were alternately the cube’s eyes and voice.
Ghislaine Doté from Montreal Sinha Dance company.
This concept of self-extension on inanimate things is explored through an interesting experiment by Kiesler et al.
Kroos et al.
Some applications are described in St-Onge et al.
1. Bedini SA (1964) The role of automata in the history of technology. Technology and Culture, vol 5, no 1, pp 24–42
2. Bruce A, Nourbakhsh I, Simmons R (2002) The role of expressiveness and attention in human-robot interaction. In: Proceedings of the IEEE international conference on robotics and automation, pp 4138–4142
4. Pioggia G, Igliozzi R, Ferro M, Ahluwalia A, Muratori F, De Rossi D (2005) An android for enhancing social skills and emotion recognition in people with autism. Neural Systems and Rehabilitation Engineering, vol 13, no 4, pp 507–515
5. Destephe M, Maruyama T, Zecca M, Hashimoto K, Takanishi A (2013) Improving the human-robot interaction through emotive movements, a special case: walking. HRI 2013:115–116
6. Reeves N, Nembrini J, Poncet E et al (2005) Mascarillons: flying swarm intelligence for architectural research. IEEE Swarm Intell Symp 2005:225–232
7. Cooke C et al (1990) Architectural drawings of the Russian avant-garde (see in particular Krutikov’s flying cities). Editions of The Museum of Modern Art, New York
8. Bunge E (2003) Jealousy: modern architecture and flight. In: Cabinet, issue 11, Flight, Summer 2003, New York
9. Van der Zwaan S, Bernardino A, Santos-Victor J (2000) Vision based station keeping and docking for an aerial blimp. IROS 2000:614–619
10. Lozano R (2007) Objets volants miniatures: modélisation et commande embarquée (ch. 2). Lavoisier, Cachan, France
11. St-Onge D, Gosselin C, Reeves N (2011) Voiles|Sails: a modular architecture for a fast parallel development in an international multidisciplinary project. In: Proceedings of IEEE ICAR 2011, Tallinn, Estonia, pp 482–488
12. St-Onge D, Reeves N, Herath D, Kroos C, Hanafi M, Stelarc S (2011) The floating head experiment. In: Proceedings of HRI 2011, Lausanne, Switzerland, pp 395–396
14. Joosse M, Lohse M, Evers V (2014) Lost in proxemics: spatial behavior for cross-cultural HRI. HRI 2014:184–185
15. Kiesler T, Kiesler S (2004) My pet rock and me: an experimental exploration of the self-extension concept. Adv Consum Res 32:365–370
17. St-Onge D, Reeves N, Persson M, Sharf I (2014) Development of aerobots for satellite emulation, architecture and art. In: Proceedings of the 13th international symposium on experimental robotics, Quebec, Canada, pp 167–181