Soft architectures for everyday life
Conrad, E. AI & Soc (2011) 26: 123. doi:10.1007/s00146-010-0291-5
Technologies not only change “external reality” but also change our internal consciousness, shaping the way we experience the world. As the reality of intelligent environments is upon us—ushered along with the age of ubiquitous computing—we must be careful that the ideology these technologies embody is not blindly incorporated into the environment. As disciplines, engineering and computer science make implicit assumptions about the world that conflict with traditional modes of cultural production. For example, space is commonly understood to be the void left behind when no objects are present. Unfortunately, once we see space in this way, we are unable to understand the role it plays in our everyday experience. In order to make computationally enhanced spaces that are meaningful at the level of the everyday, we must exorcise the notion of intelligence from their design and replace it with life. Henri Lefebvre’s discussions of the space of everyday life provide a framework to help conceive this transition.
Keywords: Everyday space · Representational space · Ubiquitous computing · Henri Lefebvre
1 Main discussion
1.1 Transforming consciousness
From Mark Weiser’s ‘Sal’ (Weiser 1991) to J. Storrs Hall’s ‘utility fog’ (Hall 2006), the past two decades of ubiquitous computing research have been marked by the impending reality of dramatically computationally enhanced space. However, even with a growing research community and steep increases in computing power with corresponding decreases in cost, this future has yet to arrive, and it does not appear to be just around the corner. One could point to the seeming youth of ubiquitous computing as a research area to explain its unfulfilled dreams. However, work on intelligent environments predates Weiser’s seminal 1991 essay, “The Computer for the 21st Century”, by at least 25 years, dating back to experiments in architecture and cybernetics in the 1960s and 1970s.
In the early days of computing, the belief that advancements in ‘thinking machines’ would unfold as quickly as other recent and ongoing technological achievements was widely held. In 1948, Alan Turing believed that a “sure” way of producing a thinking machine would be to substitute each part of a human with the corresponding electronic machinery—television cameras for eyes, microphones for ears, etc.—and allow it to “roam the countryside” so that it “should have a chance of finding things out for itself” (Turing 1948). The promise of intelligent machines led to speculation among architects about the exciting prospects of designing intelligent environments. This enthusiasm dwindled as the potential of artificial intelligence proved overstated, due to misconceptions about (1) what it is possible to achieve with computers and (2) the nature of intelligence itself. Curiously, even though ideas about what a computer is and what a computer can do are very different from what they were over 40 years ago, current notions of intelligent environments bear a striking resemblance to the earliest speculations. Although the computing machinery now available is orders of magnitude more powerful than that of the initial dreams and experiments, current applications lack the magic inherent in those earliest proposals. Projects such as Superstudio’s Supersurface—a physically ubiquitous information/utilities grid spread across the landscape that would sustain a contemporary nomadism—or Archigram’s Walking City—literally a city that could sprout legs and walk about—reflected a utopian faith in new technologies, or at least the excitement of possibilities limited only by the imagination.
1.2 Toward the age of ubiquitous computing
Early development in information technology followed the legacy of industrial interface design. In the early twentieth century, as automated machines replaced humans in the workplace, the design goal was to eliminate participation wherever possible. In this industrial context, machines were generally dangerous to human bodies, and minimizing interaction with them was a matter of safety. The momentum of this design strategy has proven difficult to overcome, as consideration for the user has lagged behind the growing need for people to interact with computers. As information technology becomes part of the social infrastructure, it demands design consideration from a broad range of disciplines. It can be argued that appropriateness now surpasses performance in importance in technological design. “Appropriateness is almost always a matter of context. We understand our better contexts as places, as architecture” (McCullough 2004).
Somewhat appropriately, ‘context’ is a popular topic in current ubiquitous or pervasive computing research (Abowd and Mynatt 2000). Most early papers, and even some recent ones, make a point of saying that context is more than just location. The information included as context varies from researcher to researcher, but typical variables include time, identity, and the identity of others. Location is often described as a set of Euclidean coordinates (e.g. <x,y,z>), a room, a building or, increasingly, a latitude and longitude if using GPS. The overwhelming majority of these research environments are designed for work settings and are focused on questions such as “How can we tell when a meeting is taking place?”, presumably so it can be recorded. Context here is the information that can be extracted from the environment with current sensor technology, combined with the computational system’s ability to match that information to a pattern.
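The reductive notion of context criticized here can be made concrete with a minimal sketch: context as a bundle of sensor-extractable variables, matched against a fixed pattern. All of the names, thresholds, and the "meeting" heuristic below are invented for illustration; no particular research system is being quoted.

```python
from dataclasses import dataclass, field
from datetime import datetime, time

# Hypothetical sketch of "context" as typically operationalized:
# location, time, identity, identity of others -- nothing lived, only extractable.
@dataclass
class Context:
    location: str                 # e.g. a room identifier, or <x,y,z> coordinates
    timestamp: datetime
    identity: str                 # who the system believes is present
    others: list = field(default_factory=list)  # identities of co-present people

def looks_like_meeting(ctx: Context) -> bool:
    """Naive pattern match: several people co-located in a meeting room
    during working hours 'must be' a meeting. The rule is an assumption
    made up for this sketch, not a real system's heuristic."""
    in_meeting_room = "meeting" in ctx.location
    working_hours = time(9) <= ctx.timestamp.time() <= time(17)
    return in_meeting_room and working_hours and len(ctx.others) >= 2

ctx = Context("meeting-room-2", datetime(2010, 5, 3, 10, 30),
              "alice", others=["bob", "carol"])
print(looks_like_meeting(ctx))  # the pattern fires, regardless of what is actually happening
```

The brittleness is the point: whatever the sensors cannot extract and the pattern cannot express simply does not exist for the system.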
How does the computer participate in the world it represents (Dourish 2001)? This question illustrates the design challenge that results from the conflict between the “(quintessential) product of engineering” (Penny 1997) and all of the “spaces” that it inhabits. Computation is a fundamentally representational medium, and as the ways in which we interact with computers expand, so does the importance of attention paid to the duality of representation and participation. The focus of this attention, and the place where this conflict is potentially best solved, is at the interface, the point or area in which the person and computer come into contact.
Although computing’s realization of the importance of the social space in which computers participate is a step forward, it is primarily focused on work environments. Social and spatial interactions as they relate to the production of capital are important, not the implications of technology on the everyday. However, computing has become part of the ambient, social, and local provisions for everyday life, and as such it becomes important to look at the larger impact of computation on culture. Computing has revolutionized almost every discipline and is continually increasing its presence in day-to-day life. However, it reifies an ideology that subordinates the experience of the physical.
In order to enact the power of computational systems, one must first recast the problem into a format that it is able to parse. Computers process information. In order to work with anything that would normally fall outside of our commonsense notion of information, it must undergo an informatic conversion. Some problems translate well, but others that we are less able to articulate do not.
1.3 Serial thinking
These research agendas make assumptions about what it is possible for a computer to know and do in the world, reflecting a tacit belief about what it is “to know”. Dreyfus expounded the inherent philosophical problems regarding the possibility of machine intelligence as early as 1965, and AI research has since done an excellent job of pointing out what intelligence is not. Proving a mathematical theorem and playing an excellent game of chess were at times thought to be activities representing the pinnacle of human intelligence. Researchers were unsurprisingly optimistic when these tasks turned out to be well within the means of the technology. By the 1980s, however, it was realized that tasks human beings take for granted as simple, like recognizing a face or crossing a street, are computationally very difficult. Ultimately, it proved easy to implement systems that reflect adult-level reasoning skills, while difficult to impossible to implement the perceptual-motor skills of even a cockroach. Dreyfus (1992) articulates that the difference between “knowledge-that” and “knowledge-how” is non-trivial in determining whether or not computers are capable of human intelligence. “Knowledge-that”, or propositional knowledge, can be implemented by a logical, rule-based system. Yet a system can contain all of the facts of biology and the rules of the physics of light and still not know how to see. “Knowledge-how” is derived from experience and can be difficult to impossible to communicate with words alone.
The nuts and bolts of making complex intelligent machines represent only half of the philosophical problem underlying the design of intelligent environments. The other half is that if intelligence is information processing, then humans are just more complicated information-processing machines. In the development of ubiquitous computing systems, what we take intelligence to mean expresses itself in whom, or what, we imagine the space is being designed for.
The conventional computer interface—display(s), keyboard, mouse—illustrates the underlying assumptions about the world that are present in its design. These assumptions are in conflict with what we commonly understand as everyday experience. First, the computer interface only engages the body in a very limited way. More importantly, the underlying metaphor for interaction here is vaguely that of a dialogue. The “user” and “computer” are engaged in an input/output dialogue. Ignoring for the moment that this metaphor is potentially not very useful when applied to a space rather than an object, the style of ‘dialogue’ as enacted by the computer interface is a rather limited one. It is serial in nature, and information can go only one way at a time. The serial nature of the interface is exemplified by the fact that in the model of the user in the interaction, the focus can only be on one exact location at any one point in time—the location of the mouse. This seemingly inconsequential interface detail is emblematic of the contradiction between the world as it is represented by traditional computer hardware and the world as lived. It is also emblematic of a world in which there is no possibility of parallel interaction. The implicit serial model of interaction is revealed to be lacking when applied to more complex problems such as vision. Modeling human vision with a computer “brain”, video camera “eye”, and serial logic proved much more difficult than originally expected. Part of the problem was the belief that the human brain operated like a computer, and that visual perception was a serial process. The model was as follows: (1) photons leave light source, (2) they are reflected off of an object, (3) they enter the eye through the lens and strike the retina, (4) the brain creates a mental image based on this sensory data, and (5) it matches patterns in the image to patterns in the memory to identify objects. 
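The five-step serial model of vision just described can be caricatured as a strictly sequential pipeline, each stage consuming the complete output of the previous one, with no parallelism and no feedback path. The functions and the tiny pattern "memory" below are toy constructions for illustration, not a real vision system.

```python
# Toy, strictly serial "vision" pipeline: sense -> build image -> match.
# Each stage runs to completion before the next begins; nothing flows
# in parallel and no feedback from later stages to earlier ones exists.

def sense(scene):
    # Steps (1)-(3) collapsed: light reflected from the scene reaches a
    # "retina", here just a flat list of intensity values.
    return [pixel for row in scene for pixel in row]

def build_image(retina, width):
    # Step (4): the "brain" reconstructs a 2-D image from the raw signal.
    return [retina[i:i + width] for i in range(0, len(retina), width)]

def match(image, memory):
    # Step (5): identify the scene by exact pattern match against memory.
    for name, pattern in memory.items():
        if pattern == image:
            return name
    return "unknown"

memory = {"cross": [[0, 1, 0], [1, 1, 1], [0, 1, 0]]}
scene = [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
print(match(build_image(sense(scene), 3), memory))  # -> cross
```

Even as a caricature, the sketch shows where the model breaks: nothing outside the "retina" (posture, eye-muscle signals) can enter the pipeline, and no stage can feed back into an earlier one.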
The problems with this simplified model are many, but for our purposes, the most important is that the signals in our visual system do not operate in such a simple manner. The signals from sensory stimuli that represent visual information are actually massively parallel in nature, with origins much more diverse than the retina alone. For example, information from the muscles that position the eyeball, as well as other posture information, greatly contributes to what we experience as the visual field. Additionally, there are numerous feedback and feedforward effects present in the visual system, for which a serial model is unable to account. This model of interaction does not reflect the reality of experience or biology between human and environment.
In many intelligent environment or ubiquitous computing systems, this model changes, albeit slightly. One configuration is essentially identical, with the exception that the room, or building, is the interface and the user is inside it. Like the desktop computer, it relies on the system being fast enough to provide the illusion of carrying out multiple processes simultaneously. Another model consists of many small, serial machines embedded in the environment that may or may not be connected via a network. These architectures are still founded on the serial processing of a world understood as information, and may be incompatible with the reality of thinking and living in the complex world of relationships in which we are embedded.
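The first configuration, one fast serial machine sustaining an illusion of simultaneity, can be sketched as a single loop that services many embedded devices strictly in turn. The device names and handlers are invented for illustration; the point is only the interleaving.

```python
# Sketch of the "single fast serial machine" configuration: one loop
# polls every device in the room, one at a time, in a fixed order.
# Run fast enough, the interleaving feels simultaneous to an occupant,
# but nothing here ever actually happens at the same moment as anything else.

def run_room(devices, cycles):
    log = []
    for cycle in range(cycles):
        for name, handler in devices.items():  # strictly one device at a time
            log.append((cycle, name, handler()))
    return log

devices = {
    "lights":     lambda: "level adjusted",
    "thermostat": lambda: "temperature read",
    "door":       lambda: "lock polled",
}

log = run_room(devices, cycles=2)
print(log[0], log[3])  # the first service of "lights" in each cycle
```

A network of small embedded machines changes the topology but not the premise: each node still runs a loop like this one, serially converting its slice of the world into information.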
1.4 The everyday environment
I propose that despite a firm grasp of what computers can do, approaches to computationally enhanced environments are still fraught with the misconception that their goal should be to create intelligence. The problem here is grafting a computing problem onto a spatial problem. Designing spaces is a problem of creating experience, not intelligence. Experience, much like space, is thick; it is not merely a void when not filled with prefabricated or preexisting objects of attention. To thoughtfully (critically) embed computational media into the environment, we need an understanding of the environment that does not reduce it to a meaningless void (or to the information that a computer can extract from it).
A useful way to approach these problems is through Lefebvre’s notion of the everyday. The everyday is the glue that holds what we normally consider our experience together. If the memorable events of one’s life are blades of grass, then the everyday is the dirt that holds them in place (Sha 2005). Through Lefebvre, the everyday becomes a spatial condition, one that breaks down nicely into categories that allow us to dissect how computing fits into space and, ultimately, how its troublesome ideological foundations can be reconciled with our experience of everyday space.
The problem of designing intelligent environments is in some ways an old problem of overlapping the abstract and the concrete. Instead of context, a person’s environment can be understood as their everyday spatial condition. Through the lens of the everyday, we can see that the solution is not intelligence, but life. Life becomes the goal of design, life as both experience and the creation of experience.
Henri Lefebvre’s dissection and discussion of space provide a rich framework for contemplating the intersection and overlap of architectural and computer-mediated spaces. Lefebvre believes that any attempt to understand the contemporary world that ignores spatial considerations is necessarily incomplete. The meanings that we attribute to space are inextricably bound up with our understanding of the world in which we live. Our basic understanding of the world originates in the sensory, spatial relationship between our body and the world. Conversely, the computer is a product of “a nineteenth and early twentieth century scientized approach to the world: that mind is separable from body; that it is possible to understand a system by reducing it to its components and studying these components in isolation (that the whole is no more than the sum of its parts); that the behavior of complex systems can be predicted” (Lefebvre 1974).
While useful, I would like to suggest that this is not necessarily the world in which we would like to live. If we combine the field of computing with a different set of underlying assumptions, we may be able to create a world that is richer.
Our understanding of space is directly related to our understanding of the space of our body, which has long been sundered in Western culture by Cartesian duality. If we do not accept this separation, what is the resultant space? This new understanding can change the ways in which we live and imagine the present, including how we can use computational media as a ‘tool for thinking’ in the space that results.
1.5 A new sense of space for computing
Lefebvre confronts considerations of space that reside “comfortably enough within the terms of mental (and therefore neo-Kantian or neo-Cartesian) space”. His central claim, that space is a social product, directly challenges the predominant “idea that empty space is prior to whatever ends up filling it”. Lefebvre’s re-conceptualization of space is, at least partially, related to his conception of the body and its place in Western culture: “Western philosophy has betrayed the body; it has actively participated in the great process of metaphorization that has abandoned the body; and it has denied the body” (ibid.).
Lefebvre describes the body, as he does many things, in the form of a triad: perceived—conceived—lived. Introducing a third term into the equation already destabilizes any notions of Cartesian duality. The body, as simultaneous subject and object, “cannot tolerate such conceptual division” (Lefebvre 1974) and can be liberated through a production of space. This occurs, in part, through the distinction between physical, social, and mental space. Lefebvre states: “Social space will be revealed in its particularity to the extent that it ceases to be indistinguishable from mental space (as defined by philosophers and mathematicians) on the one hand, and physical space (as defined by practico-sensory activity and the perception of ‘nature’) on the other” (Lefebvre 1974).
All interactions with computer systems are at some level a social activity. Computation can be both a tool of and structuring force behind the relationships between people, institutions and practice. Even if one uses a computer in isolation, there is a social interaction present between the user of the system and the designer of the system. A user only knows how to use a computer system through a shared set of social expectations. Empty space thickens when mixed with information, making space itself an interface, and thus part of social space.
Lefebvre distinguishes three moments of social space:

1. Spatial practice, which embraces production and reproduction, and the particular locations and spatial sets characteristic of each social formation. Spatial practice ensures continuity and some degree of cohesion. In terms of social space, and of each member of a given society’s relationship to that space, this cohesion implies a guaranteed level of competence and a specific level of performance.

2. Representations of space, which are tied to the relations of production and to the ‘order’ which those relations impose, and hence to knowledge, to signs, to codes, and to ‘frontal’ relations.

3. Representational spaces, embodying complex symbolisms, sometimes coded, sometimes not, linked to the clandestine or underground side of social life, as also to art (which may come eventually to be defined less as a code of space than as a code of representational spaces) (Lefebvre 1974).
Spatial practice is closely related to perceived space. It is the space secreted by society, which it recursively reifies. The perceived can be thought of as falling between that which is sensed, the raw experience, and that which we believe, the conceived. Beliefs and expectations alter or condition perception, which we then take as given. This notion is exemplified in experiments on inattentional blindness: a simple predisposition, such as trying to count the number of passes a group makes with a basketball, can make one completely oblivious to a blatant dynamic event, such as a person in a gorilla suit walking through the players (Simons and Chabris 1999). For Lefebvre, the perceived of everyday space falls between daily routine and the infrastructure that allows it, the actual routes and networks that organize the daily routine. Ultimately, it is in spatial practice that the effects of ubiquitous or pervasive computing design will be felt and internalized. Computing is part of the infrastructure that organizes daily life.
Representations of space refer to conceived space. It is the space of scientists, architects, urban planners and all who privilege the cognitive over the perceptual or lived. It is the dominant space in our society, and it is the space of contemporary visual and computing cultures. It is a mental space separated from physical space, or abstract space imposed on concrete space.
Representational space corresponds to lived space. This is where meaning resides. It is “directly lived through its associated images and symbols”. It is the passively experienced space that overlays physical space, which the imagination is able to change and appropriate. Representational spaces “tend toward more or less coherent systems of non-verbal symbols and signs”. In trying to infuse spaces with life through computational media, the goal is to move the design of computing systems from representations of space to representational space, from conceived to lived space.
These spaces are not always clearly differentiable; they overlap and intermingle in varying intensities. Lefebvre states that in order to understand these three moments of social space, one can map them to the body. The spatial terms (spatial practice, representations of space, representational space) are analogous to the bodily triad of perceived—conceived—lived.
How do we use computational media to make meaningful spaces? The tacit philosophy of the machine may be ill-equipped for this task. The problem is one of perceived divides between mind and body, body and environment, and environment and mind. If we choose to keep these categories, Lefebvre’s articulation of the everyday can provide a framework for their reconciliation. Ultimately, it is at the level of the everyday, the lived, that we wish to make a difference.
Lefebvre dissects the everyday spatially, into physical space, mental space, and social space. Social space is special in this breakdown, since it operates between and overlaps with both the physical and the mental. Social space is necessarily physical in that it is composed of individuals. It is also necessarily mental, or abstract, in that part of its manifestation is non-physical. Lefebvre further breaks social space down into yet another triad that somewhat reflects his original structure of the everyday. Social space can be dissected into the triad of spatial practice, representations of space, and representational space, representational space being the space of meaning.
Lefebvre describes the body in the triad of perceived, conceived, and lived. Here, the lived is the special term, reconciling that which is perceived with that which is conceived. By aligning Lefebvre’s various triads, a pattern emerges where everyday life, social space, and meaning correspond to the lived body.
In this alignment, the chasm between physical reality and abstract thought is filled with life. In the pursuit of creating environments that are meaningful at the level of everyday life, an emphasis solely on intelligence results in an imbalance. With this methodology, the pursuit of design is not intelligent environments but a space alive.