Introduction

Change and adaptability have long been architectural concerns. How might spaces and buildings relate to their physical context, as well as to changing conditions in both the environment and in their human occupants? In his influential 1964 monograph “Notes on the Synthesis of Form”, Cambridge architect and mathematician Christopher Alexander reflects on architecture’s relationship to its surroundings, and writes that “[t]he rightness of the form depends […] on the degree to which it fits the rest of the ensemble” (Alexander 1964: 17, emphasis added). Deriving insight from the work of biologist and mathematician D’Arcy Thompson, Alexander proposed that the context surrounding a building, as well as its functional requirements, could be described rationally as a diagram of forces, and suggested that these forces could be quantified and analyzed to derive a unique and “fitting” design solution. Alexander’s mathematical view of design had a strong influence on early approaches to design and computation research (March and Steadman 1971) and can be seen to foreshadow a modern emphasis on simulations and “performance-based” architectural design. Crucially, in his view, design can be understood as a problem that can be solved given adequate symbolic representations at the right level of abstraction.

The two design projects presented in this paper also explore the notion of adaptability and fit but, in contrast to Alexander in “Notes”, they do not concern themselves with finding a single fitting design solution to a given design problem, nor with the formalization of design problems in symbolic languages. Instead, they are premised on an understanding of design as a fundamentally open-ended process involving multiple contingencies—crucially those inscribed in the computational instruments employed, and those linked to the embodied and interpretive capacities of the (multiple) human actors involved. We seek a departure from the concern with mental pictures, symbolic representation and optimization that Alexander’s “Notes” helped crystallize (Fig. 1), toward a concern with open-ended embodied interaction. Thus, rather than purporting to deliver optimal design solutions, these experiments use computation to explore new types of playful and conversational engagement with built forms. Each experiment is introduced with a brief background, an account of its development, and initial results.

Fig. 1

In “Notes” architect and mathematician Christopher Alexander imagines design as a series of transactions between mental images and the “actual” world mediated by formalized descriptions

The first and second authors conducted the experiments as part of their B.Arch thesis research at an architecture studio at Carnegie Mellon University, directed by the third author. The projects were framed conceptually within a reflection on technological agency and responsiveness in design and architecture, and developed through different methods of design inquiry, including precedent analysis, analytical writing, diagramming, and prototyping through simulations and open-source electronics. During two semesters, the concepts evolved from an initial interest in customization, automation and responsiveness toward a focus on human–machine collaboration and biometric responsiveness. The projects were enriched by the first and second authors’ mixed background in architecture and human–computer interaction (HCI), which provided a repertoire of techniques including basic computer programming, interactive prototyping, user testing, anthropometrics, model-making and 3-D rendering. The work received crucial support from a broader group of faculty advisors and collaborators, credited in the Acknowledgements section.

As learning experiences, the experiments presented here offer a practical example of a hybrid design pedagogy combining architectural and computational methods in ways that elicit innovative learning and design exploration.

First Experiment: Multi-Modal Design Interactors

The first experiment focuses on the development of an interactive system for chair design. Architects have a long-standing fascination with chairs, which offer opportunities for exploring material, form and ergonomics. The system comprises a tangible interface we call an “interactor”, through which a user can shape a parametric model of a chair and automatically produce information for production. This section situates the experiment within a lineage of experiments in user participation in design, with precedents from architecture, computational design and interaction design; it presents the initial concepts and the final prototype of the system, and summarizes users’ reactions as they produced different chair designs.

User-Customization in Design: A Brief Background

Mass-produced goods have historically relied on standardization to be manufactured economically at scale. This was famously illustrated by Henry Ford when he said of his Model T that “a customer can have his car painted any color he wants as long as it’s black”. However, advocates of “mass-customization” point to recent advances in technology to announce an era of highly personalized manufacturing (Woodward 2005; Gershenfeld 2007).

The fields of architecture and product design have embraced the question of customization, albeit from different perspectives. For example, architects have sought to use automation in efforts to de-stabilize conventional hierarchies of the architect–client relationship, and to “democratize” design. Illustrating this view, early work by Nicholas Negroponte (1970) speculated about computational tools capable of replacing the roles of architects and planners in the production of the built environment. Negroponte interrogated what he, along with others at the time, conceived as a rigid and outmoded dynamic between designers and users (for historical and critical perspectives see Cardoso Llach 2011, 2015; Vardouli 2012; Scott 2013; Steenson 2014). Product designers have also sought to expand users’ influence in design, albeit under the premise of expanding market footprints among increasingly technologically literate consumer bases. This is illustrated by examples such as Motorola’s Moto Maker (Motorola 2013) and Adidas’s Futurecraft project (Adidas 2015). However, as Tim Crayton notes, “considering its huge significance, there has been little consideration of the implications [of mass customization] for design” (Crayton 2001).

Recent projects in the fields of product and interaction design offer insight into an expanding landscape of user-driven customization in design. For example, Greg Saul (2011) developed “SketchChair”, a CAD-like interface allowing novice users to doodle a chair and use simulation to test its functionality. Cheng et al. (2012) created MIT’s Jamming User Interface, a tactile display technology reminiscent of the speculative interface presented by Ivan Sutherland in “The Ultimate Display” (Sutherland 1965), which enabled users to receive tactile feedback directly from a pneumatically enhanced display. Interactive Fabrication, a series of conceptual prototypes created by Karl D. D. Willis, Cheng Xu, Kuan-Ju Wu, Golan Levin and Mark D. Gross, presents speculative interfaces that enable a more direct connection between creators and mechanisms of fabrication (Gross et al. 2011: 69–72). Other projects explore related aspects of tangible interaction design (Llamas et al. 2003; Sheng et al. 2006; Smith et al. 2008). These projects, like the work in this paper, explore new interfaces and interactions to mediate the relationship between designers and users, as well as between people and machines.

Concept Development

A series of conceptual “hybrid interfaces” were developed to explore multimodal interactions combining different sensorial inputs. A premise of this stage is that such interactions might promote playful user engagement in collaborative creation processes, especially when users have mixed backgrounds and skill levels. The first of these explorations was a projection-based interface employing computer vision methods to passively detect user measurements and proportions, as well as user specified manipulations (Fig. 2).

Fig. 2

Illustration of projection interaction

Here users would interact with a fiducial artifact, a projected digital “skeleton” of a chair—a spline curve manipulated through its control points using bodily gestures captured with a depth camera. This conceptual prototype was refined through sketches, renderings and use-case scenarios, but was not implemented. While promising, the system seemed limiting due to the over-simplification of the chair to a single line, the lack of engagement with the materiality of the final chair, and the relatively reduced “design space” enabled by the interface itself.
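To make the intended interaction concrete, the sketch below illustrates the kind of mapping this concept implies: a hand position from a depth camera’s skeleton tracker “grabs” and moves the nearest control point of the chair’s spline skeleton. Since the concept was never implemented, the function names, dimensions and thresholds are illustrative assumptions rather than project code.

```python
# Illustrative sketch of the (unimplemented) projection-interface concept:
# a tracked hand position grabs and moves the nearest control point of the
# chair's spline skeleton. All values here are assumptions.
import numpy as np

GRAB_RADIUS = 0.05  # metres within which a hand can grab a point (assumed)

def update_skeleton(control_points, hand_position):
    """Move the control point nearest to the tracked hand, if close enough.

    control_points: (n, 3) array of spline control points in world space
    hand_position:  (3,) array from the depth camera's skeleton tracker
    """
    distances = np.linalg.norm(control_points - hand_position, axis=1)
    nearest = int(np.argmin(distances))
    if distances[nearest] < GRAB_RADIUS:
        control_points[nearest] = hand_position  # direct manipulation
    return control_points

# Example: a five-point chair "skeleton" (seat and back profile) and one hand
skeleton = np.array([[0.0, 0.0, 0.0], [0.0, 0.45, 0.0], [0.0, 0.5, 0.05],
                     [0.0, 0.55, 0.45], [0.0, 0.6, 1.0]])
skeleton = update_skeleton(skeleton, np.array([0.0, 0.52, 0.04]))
```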

A second conceptual prototype consisted of an instrumented armature allowing users to directly manipulate a sensor-enabled chair prototype in order to explore design alternatives. This concept enabled an enriched physical engagement with the design process. Yet, it also resulted in a very limited design space due to the simple mapping between the physical armature and the digital model. Thus, this concept did not evolve past initial small-scale prototypes (Fig. 3).

Fig. 3

Test rig for proxy tool (left) and accompanying digital visualization (right)

Prototype: Multi-Modal Interactors

The third and final concept sought to expand the users’ design space through a series of “interactors” capable of driving the geometry of the chairs in different directions, and at different scales—from chair parts, or assemblies of parts, to the entire chair. A “press interactor” and a “bend interactor” (Fig. 4) work by mapping user interactions onto a virtual model in the 3-D modeling software Rhinoceros through the Firefly and Grasshopper plugins. They were developed in a series of iterations, employing pressure and flex sensors arrayed in grids and embedded within silicone castings, beginning with a single sensor in each and adding sensors in subsequent iterations. While the first interactors facilitated quick design manipulations, these were limited by having only a single sensor in each. To enable richer interactions—such as twisting and torquing in the case of the bend interactor, and surface deformations with more than a single control point in the case of the press interactor—more complex sensor networks needed to be developed. Thus, the interactors were redesigned with a new geometry and a greater sensor density, enabling more complex interactions between users and tools (Fig. 5). These prototypes were then tested, and users produced a range of different outcomes.
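The mapping at the core of the interactors can be summarized in a few lines. The built prototypes implemented it with the Firefly and Grasshopper plugins inside Rhinoceros; the standalone Python sketch below only illustrates the logic of the multi-sensor press interactor, and its grid size and scaling factor are assumptions.

```python
# Sketch of the press-interactor mapping: a grid of normalized pressure
# readings displaces the matching grid of surface control points. The
# actual system ran through Firefly/Grasshopper; values here are assumed.
import numpy as np

DEFLECTION_SCALE = 0.02  # metres of displacement per unit pressure (assumed)

def press_to_displacement(sensor_grid, control_grid_z):
    """Map pressure readings in [0, 1] onto control-point z-offsets."""
    return control_grid_z - DEFLECTION_SCALE * sensor_grid  # pressing pushes the surface down

# Example: a 3x3 sensor array deforming an initially flat seat surface
readings = np.array([[0.0, 0.2, 0.0],
                     [0.1, 0.9, 0.1],
                     [0.0, 0.2, 0.0]])
seat_z = press_to_displacement(readings, np.zeros((3, 3)))
```

In this arrangement, adding sensors to the grid directly enlarges the design space: each new reading becomes another degree of freedom in the parametric model.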

Fig. 4

Version 1 Press interactor (left), version 1 Bend interactor (right)

Fig. 5

Exploded diagrams of press interactor (left) and bend interactor (right) showing inclusion and placement of sensors

We developed two case study chairs, each with similar base geometries, to be manipulated differently through user interactions (Fig. 6). The first, in blue, was constructed from two simple surfaces, perpendicular to each other in space, each defined by a set of four curves and extruded perpendicular to the plane of the surface. In this example, four simple tube legs were extruded along the vertical edges of the seat plane. The second case study, in red, was also constructed from two simple extruded planes, perpendicular to each other in space. In this example, however, the initial surfaces were constructed from points rather than curves, and a more complex system of legs was defined parametrically. In both examples, the base geometry was created to facilitate different modes of interaction and subsequent deformation by users.
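As a rough indication of how such a base geometry can be set up, the following rhinoscriptsyntax sketch (to be run inside Rhino) builds a simplified version of the first, blue case study: one seat curve and one back curve, each extruded perpendicular to its plane, plus four tube legs. All dimensions are illustrative, not the project’s actual values.

```python
# Simplified sketch of the blue case-study chair's base geometry, written
# for Rhino's rhinoscriptsyntax. Dimensions and curve counts are assumptions
# (the project used four curves per surface, not one).
import rhinoscriptsyntax as rs

SEAT_WIDTH = 0.45  # metres (assumed)

# Seat profile in the xz-plane, back profile in the yz-plane
seat_curve = rs.AddInterpCurve([(0, 0, 0.45), (0.15, 0, 0.44),
                                (0.3, 0, 0.45), (0.45, 0, 0.46)])
back_curve = rs.AddInterpCurve([(0, 0, 0.45), (0, 0.02, 0.6),
                                (0, 0.05, 0.8), (0, 0.04, 1.0)])

# Extrude each profile perpendicular to its plane to obtain the two surfaces
seat = rs.ExtrudeCurveStraight(seat_curve, (0, 0, 0), (0, SEAT_WIDTH, 0))
back = rs.ExtrudeCurveStraight(back_curve, (0, 0, 0), (SEAT_WIDTH, 0, 0))

# Four simple tube legs near the corners of the seat plane
for x in (0.02, 0.43):
    for y in (0.02, 0.43):
        rs.AddCylinder((x, y, 0), 0.44, 0.015)
```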

Fig. 6

Images of user-generated outcomes from both bend interactors (left) and press interactors (right)

Preliminary Results

The interactors were robust enough to withstand the initial tests, and allowed for meaningful (if simple) geometric manipulations. Users successfully learned the interactions through use, and developed compelling results. Besides interesting design outcomes, user comments about engagement with the prototypes were also insightful: “this is so much fun!”, “I could play with this all day”, “I wonder what else I could make with this”. Not only were the prototypes deemed usable, they were also compelling to use, easily learned, and un-intimidating to casual users (Fig. 7). Future steps include addressing issues with processing speed, which tends to decrease as the geometric operations become more complex.

Fig. 7

Interacting with the final press interactor prototype at the CMU School of Architecture thesis exhibition

Second Experiment: Biometrically-Responsive Architecture

The second experiment uses biometric data from human bodies to dynamically transform the thermal, visual, acoustic and olfactory experience of architectural space. Combining methods from architecture and interaction design, the project aims to enable new kinds of body-space interactions triggered by biometric data. By documenting the development and testing of a series of prototypes—a petal structure that compresses and expands a room, an array of conic elements that modulates light and, finally, a personal enclosure with changing sensory properties—the paper suggests an area of design research and practice we term “biometrically-responsive architecture”, linking architectural spaces and the human body in novel ways.

Unlike product and interaction design projects, such as phones or software interfaces, buildings are rarely designed with an individual “user” in mind. Rather, they are designed as stages for collective experiences. By proposing biometrics as an arena for architectural exploration, this paper outlines a human-centered approach to architectural experience that probes, through prototypes, the role of microenvironments in collective spaces and the way individual biometric data may elicit architectural responses.

This work expands beyond the traditional role of architects as designers of static forms, and suggests new approaches to imagining and experiencing the built environment. Consequently, we position biometric approaches within the context of responsive architecture research, and document a series of conceptual prototypes culminating in a full-scale installation. However, instead of reacting to environmental data, as many responsive environments do, this work uses biometric data to dynamically transform the sensory experience of a space.

While technical limitations have largely prevented technologists and architects from embracing affect as a design variable (MIT Media Lab: Affective Computing Group 2012), recent advances make it possible to incorporate biometric data as a generative space-making tool. Biometric data sensors fall into two categories: emotion-specific sensors and binary stimulation monitors. Sensors such as facial expression tracking, voice recognition and EEG brain mapping associate raw data with a specific emotion, while monitors such as pulse and galvanic skin response (GSR) provide binary data about the user’s stimulation. Although emotion-specific sensors provide more comprehensive data, this work demonstrates that stimulation-level data is satisfactory for initial testing. The decision to prioritize the system’s output over the input also influenced the sensor selection process.
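As a minimal sketch of how such stimulation-level data can be prepared for spatial use, the snippet below smooths raw readings from a monitor such as a pulse or GSR sensor and normalizes them against a per-user resting baseline. The class name, window size and calibration scheme are assumptions for illustration, not the project’s actual code.

```python
# Minimal sketch (assumed names and thresholds): reduce a raw stimulation
# monitor signal to a single level in [0, 1] by smoothing and normalizing
# against a per-user resting baseline.
from collections import deque

class StimulationMonitor:
    def __init__(self, baseline, window=32):
        self.baseline = float(baseline)  # resting reading, from calibration (assumed)
        self.samples = deque(maxlen=window)

    def update(self, raw_reading):
        """Add a raw sensor reading; return a stimulation level in [0, 1]."""
        self.samples.append(raw_reading)
        smoothed = sum(self.samples) / len(self.samples)
        level = (smoothed - self.baseline) / self.baseline
        return max(0.0, min(1.0, level))

monitor = StimulationMonitor(baseline=400)  # e.g., raw ADC counts at rest
print(monitor.update(460))                  # a mildly stimulated reading
```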

The initial prototypes use pulse data to motivate dynamic spatial change. The first prototype explores how an overhead petal structure can compress or expand a space, changing occupants’ perception of intimacy in response to heart rate. A second prototype examines how animated fabric cones that control light can be biometrically modulated, also in response to the user’s pulse. A final experiment is a four-foot-tall personal enclosure embedded with soft architectural capabilities. Soft architecture, characterized as the non-physical manipulation of space through environmental modalities such as light, sound and temperature, can create environments through sensory shifts. A companion wearable device, consisting of a GSR sensor, is also designed to collect and send physiological data to the enclosure in real time.

Responsive Architectures: A Brief Background

Architects and technologists have long used computational systems to make environments more responsive, interactive and ‘humane’ (Negroponte 1970: 17). Some argued for technology to be integrated with architectural design, seeking participatory design strategies and “better performing, rational buildings” (Negroponte 1975: 33). Others, particularly computer scientists, imagined responsive environments equipped with “intelligent” objects capable of sending and receiving data—a vision now referred to as the “Internet of Things” (Weiser 1991). In the last twenty years, architectural systems design has focused on optimizing environmental and social efficiency; in many of these projects, architectural and spatial sensitivity are absent.

A different perspective is illustrated by the work of architects and artists such as Philip Beesley and Michael Fox, whose projects explore the experiential aspects of responsive architecture (Beesley et al. 2010; Fox and Kemp 2009). Yet architects have rarely explored soft architecture based on personal data. The Blur Building, by Diller and Scofidio, for example, uses both occupant and environmental data to drive a system of water nozzles that creates a dynamic fog cloud. The cloud is a soft form that is a critical part of the architectural design even though it is not structural or permanent. The architects also designed “braincoats”, networked raincoats equipped with sensors that interact through glowing lights (Diller and Scofidio 2002). This smart “wearable” adds to the experience of the structure by providing a user-controlled artifact that acts as a wayfinding tool in the fog cloud. The Blur Building offers a provocative example of how architecture and interaction design might overlap to create a layered spatial experience and influence human behavior. Another relevant project is the Convective Museum by Philippe Rahm, which uses heat to create a variable thermodynamic scape (Philip Rahm International 2008). Two poles—one hot and the other cold—create microclimates and flows within the museum, subdividing the larger public space into smaller private zones. In this way, visitors experience a dynamic space that does not require moving parts.

These works are part of an alternate tradition exploring responsive architectures as embodied experiences rather than as instruments of optimization. Seeking alignments with, and expanding this counter-tradition, the prototypes presented here integrate biometric data and spatial response to enhance the way spaces are experienced and shaped.

Concept Development

As discussed, prototypes exploring the relationship between biometric data and dynamic architectural response were produced at various scales and with different intents. The first two prototypes display the collective mood of a public space by averaging the biometric data of its inhabitants. While both prototypes interpret sensor data similarly, they explore two different tectonic approaches to creating dynamic spatial change. In the first prototype, petals unfold from the ceiling in reaction to the collective mood of the users underneath, creating an intimate microclimate. The organic movement of the petals gives the space a sympathetic personality. Although this was unexpected, the affective impact of animation became a design guideline in the following prototypes (Fig. 8).
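A sketch of this collective mapping, under assumed angle ranges and easing values, is shown below: the mean stimulation level of the occupants sets a target petal angle, and the petals ease toward it over successive updates, which is what lends the motion its slow, organic quality.

```python
# Sketch of the petal prototype's collective mapping (angles, direction of
# response and easing factor are assumptions): the average stimulation of
# the occupants drives how far the ceiling petals unfold.
PETAL_CLOSED, PETAL_OPEN = 0.0, 90.0  # servo angles in degrees (assumed)
EASING = 0.1                          # fraction of the gap closed per update

def petal_target(stimulation_levels):
    """Collective mood (mean of per-user levels in [0, 1]) -> target angle."""
    if not stimulation_levels:
        return PETAL_CLOSED
    mean = sum(stimulation_levels) / len(stimulation_levels)
    return PETAL_CLOSED + mean * (PETAL_OPEN - PETAL_CLOSED)

def ease_toward(current, target):
    """Move a fraction of the way each tick, for slow, organic motion."""
    return current + EASING * (target - current)

angle = 0.0
angle = ease_toward(angle, petal_target([0.7, 0.5, 0.9]))  # three occupants
```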

Fig. 8

Digital renderings spatializing physical prototypes

The next prototype subdivides space using twisting fabric cones. The rate at which the cones twist and modulate the light is related to the emotional activity of the space, as measured by the inhabitants’ collective pulse rate. A matrix of these fabric “apertures”, tuned to the space’s schedule and layout, helps transition occupants from one activity to another, while also suggesting new areas that are open for exploration.

Because the first two prototypes rely on averaging multiple users’ biometric data, they conceal the relationship between an individual user’s emotional state and the spatial output. For this reason, we shifted the scale of our final proof-of-concept to a personal enclosure—an architecture intentionally designed for one body. In this way, the final proof-of-concept considers the direct relationship between a user and their sensory experience. This scale was inspired by initial exploratory research into the history of wearables in architecture, namely Suitaloon by Archigram and Flyhead by Haus-Rucker-Co. The scalar reduction also challenges what it means to inhabit architecture and to be solitary.

Prototype

The third prototype consists of a four-foot-tall crystalline pod structure hanging at eye level from the ceiling (Fig. 8). It contains four systems: thermal, visual, acoustic and olfactory. The thermal and visual systems consist of fans and incandescent light bulbs that regulate the temperature and brightness of the pod. The acoustic system uses real-time audio recording and playback to create an echo that varies in loudness. Finally, the olfactory system delivers four types of scents: tea tree, clove, lemongrass and cinnamon. These particular scents were chosen based on an olfactory classification system which suggests that each category of scent stimulates a different brain region, impacting the user’s emotional state (Kaye 2001).

The companion wearable associated with this prototype is a galvanic skin response (GSR) sensor. The GSR sensor measures the micro-beads of sweat on a user’s skin to quantify stimulation. Because oxygen levels are correlated with the sensor’s reading, the sensory experiences of the prototype are related to the speed and depth of the user’s breath. Shallow, short breaths will cause the pod to brighten, rise in temperature and volume, and release stimulating scents, while longer, deeper breaths will cause the pod to darken and cool down, releasing calming scents.
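The feedback loop described above (and diagrammed in Fig. 10) can be summarized as a single mapping from the stimulation level to the setpoints of the four sensory systems. The sketch below is a hedged illustration: the actuator names, PWM ranges and scent pairing are assumptions, not the installation’s actual code.

```python
# Hedged sketch of the pod's feedback loop: one stimulation level drives
# all four sensory systems. Ranges and the scent pairing are assumptions.
def pod_response(stimulation):
    """Map a stimulation level in [0, 1] to actuator setpoints.

    High stimulation (shallow, rapid breathing) brightens and warms the
    pod, raises the echo volume and selects a stimulating scent; low
    stimulation does the opposite, as described in the text.
    """
    return {
        "light_pwm": int(255 * stimulation),        # incandescent bulbs
        "fan_pwm": int(255 * (1 - stimulation)),    # fans cool the calm state
        "echo_volume": stimulation,                 # playback loudness
        "scent": "lemongrass" if stimulation > 0.5 else "tea tree",  # assumed pairing
    }

print(pod_response(0.8))  # a stimulated user: bright, warm, loud, lemongrass
```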

The associated hardware for each sensory change has been strategically placed in a specific region of the pod, based on an analytical study of the human head and the regions of stimulation for each of our senses. For example, our visual range is 65° above and 70° below eye level, so the lights have been placed exclusively in visible areas. The final geometry was subdivided into sensory regions (Fig. 9).

Fig. 9

Final full-scale personal pod, exterior and interior

The pod was designed to reference a crystal-like structure that encapsulates the user’s head and torso. It is formed with tessellated triangular frames connected with unique L-shaped joints. The joint system emphasizes each seam of the structure. The bulge in the pod shape and the ribbed interior panels emphasize a vertical perspective for the user (Fig. 10).

Fig. 10

Diagram of the feedback loop in the biometrically-responsive architectural prototype

Preliminary Results

Based on the design, fabrication and testing of our three responsive prototypes, we propose four design guidelines for biometric architecture: sympathy, softness, enclosure and multiplicity. We argue that, in order to produce meaningful interactions, a biometrically-responsive architectural system must interact with its users through a sympathetic dialogue, rather than just an optimized feedback loop with the user’s physiological response. Playful and unpredictable responses to users, rather than optimized and predetermined ones, help imbue the system with something resembling a personality. Both user and architecture must have agency over the responsive space, with agencies that are linked but independent.

Softness, which is the second guideline, calls for creating a dynamic experience through soft architecture, rather than through moving parts, which are often too expensive and unreliable. Emphasizing softness, this work also critiques a view of architecture as unquestionably physical or reliant on form.

Perhaps in opposition to the previous guideline, this proposal also argues for a static physical enclosure, which affords users the capacity to make an explicit decision to opt into the interaction. By requiring users to crawl up into the pod, instead of walking straight in, and to physically don the GSR sensor, the enclosure successfully creates a threshold that separates the microclimate from its wider context (Fig. 11). The gaps between the panels and the datum at eye level create a visual passage between exterior viewers and the interior user. The physical form reinforces the relationship between exterior and interior.

Fig. 11

Sequence of entering the hanging pod

It is critical that biometric architecture is able to exist in both individual and collective spaces. By subdividing larger public spaces into microclimates, a biometric space affords multiplicity: the potential to create a layered scalar experience that affords many programs of use.

Conclusions

These two experiments demonstrate that integrating computational and architectural methods can open avenues for designing playful architectural interactions, while offering clues about trans-disciplinary project-based design pedagogies. On the one hand, the “multi-modal interactors” helped us investigate new design workflows emerging from the combination of tangible design interfaces and knowledge-rich digital models. As we saw, in this type of system the focus shifts from designing a one-off artifact towards defining a “design space” of possible solutions. Designers in this context become users—or perhaps ‘players’—of an interactive system where the traditional boundaries between the agency of designers and toolmakers are productively blurred. On the other hand, the “biometrically-responsive” architectural installation made visible four different conceptual threads enabled by biometric technologies embedded in space: first, a collective space that responds to a collective signal, aggregated from a plurality of individual actors; second, a personal enclosure within a public space; third, a personal enclosure within a private space, and finally a space that uses hidden delivery mechanisms to create sensory microenvironments. Testing each scale might yield both creative and critical insights. For example, speculative and critical uses of such technologies might offer alternatives to the surveillance and control infrastructures with which they are conventionally associated. The installation, which confronts a single user with a type of solitude, can elicit different types of experience in occupants—ranging from peace and meditativeness to the anxiety of imprisonment. In a museum context, such as the one proposed here, these can enact a discursive role as devices of commentary and critique.

Combined, these two projects offer new perspectives on design interaction and architectural responsiveness, as well as hybrid design pedagogy. As technologies for sensing, data collection and actuation become increasingly pervasive, new questions about the relationship between our bodies and the spaces they occupy emerge. These affect both the physical artifacts of our designs as well as the planning and production processes that lead to them. How may we approach, and shape, this landscape of technological possibility in ways that recognize both its aesthetic opportunities and its critical challenges? While confronting these challenges certainly demands further work, the speculative concepts and prototypes in this paper outline a possible future where multi-modal interactors and biometric architectures provide tangible ways of interacting with design information, as well as soft, sympathetic enclosures for individuals and collectives—ambient interfaces for human expression and performance.