The term simulation encompasses the imitation of a process or thing, the act of pretending, and the creation of a computer model, often in order to study an object or a situation (Oxford Dictionary 2019). With the advent of computation, simulation, used in the natural sciences to record, collect, organise, and visualise data, has become a vital tool in the design process (Gleiniger & Vrachliotis 2008). In architecture, ‘simulation’ refers to a process of imitative representation in the visualisation of buildings. These visualisations serve as a surrogate for the built work, reflecting the separation between design and construction. In fact, as architectural historian Alberto Pérez-Gómez posits, since the beginning of Western history architects have not actually ‘made’ buildings but constructed the mediating artefacts that make buildings possible (2005). These artefacts range from texts, inscriptions, and drawings, to models and full-scale mock-ups, the nature of which has changed throughout history (ibid.: 218). However, the term simulation itself is not often used in the context of design and construction. Architects speak instead of ‘modelling a building’ to describe the way they handle multiple representations and coordinate design across a multidisciplinary team (Loukissas 2009: 154). There is a projective element to modelling a building through graphic representations like plan, section, and elevation, in the sense that this process gives form to the unknown or not-yet conceptualised. Graphics and three-dimensional models are thus a key part of the form-finding process, acting as a bridge between the germinal concept and the realised work (Evans 1995).

Although the design process has always involved a negotiation between different media, whether drawings, models, or computer-aided design programs, a certain distance persists between the architect and the drawing, as well as between drawing and building as practices (ibid.). To approximate a form, it is vital to mediate between different methodologies, because doing so accommodates fundamental ambiguities and allows a design to evolve according to the ‘agency’ of each medium, substantiating the design further (Wittmann 2018). With the increased shift towards computational design at the turn of the century, it is precisely this projective distance that some advocates of digital modelling want to collapse (Keller 2012: 119–120). Building Information Modeling (BIM), a method for the networked planning and construction of buildings using software, has heralded a significant change in design thinking and building by furnishing the entire design and construction team with the ability to coordinate the building digitally prior to construction (Garber 2014). BIM technology uses a computational model to bring together all the data necessary to efficiently design, build, and manage a building. It is both a simulation and a process that facilitates the informational exchange and interoperability required for building (Eastman et al. 2011). As Keller remarks, “a fantasy (perhaps a nightmare) of the total computational model, one that would possess all of the geometric, material, thermo-dynamic, lighting, acoustic, legal, and economic data for a project and its context” has existed since software tools made their way into architectural practice in the 1960s (2012: 119).Footnote 1 The idea that computational modelling is a type of simulation can therefore be historically contextualised in relation to technological conditions, in particular, the development of computer-aided design (CAD) and computation in the post-war period. Yet as architectural historian Mario Carpo has noted, there is no uniform or linear history of BIM (Carpo 2017: 213), nor a consensus about its definition; rather, several overlapping factors have contributed to the progression of BIM technologies.

This article will situate the development of BIM in a historical context. By bringing together historical sources, oral testimonies, and case studies of actual buildings, I hope to shed light on some of the circumstances and discourses that contributed to the expansion of building modelling. Looking at the role of computer simulation and modelling as an epistemic tool, I will examine the extent to which BIM technologies are part of transdisciplinary discourses, and have impacted, and been impacted by, architectural thinking and practice. The argument opens with some background on architectural drawing and projective techniques and how they relate to BIM technologies. The first section then traces drafting traditions in architecture and discusses how they were questioned by pre-BIM explorations, for example Charles Eastman’s proposal for a ‘Building Description System,’ and Robert Aish’s work on the concept of Building Modelling and his innovation of RUCAPS (the Really Universal Computer Aided Production System)—one of the first Building Modelling applications. The second section considers the history of CAD during the post-war period, with a particular focus on the development of graphical user interfaces, curve design, and data structures by researchers working in Europe and the US. The third and final section demonstrates how the architectural firm Gehry Partners has advanced BIM through their use of the software CATIA, and will address more generally the possibilities and limitations of using BIM technologies for the realisation of both standard and iconic building projects.

Historic Drawing and Projective Techniques

Writing about the evolution of architectural visualisation, architect and historian Robin Evans points to the function of drawings as a medium of design that allows the architect to think spatially, while also communicating information about a building’s construction. The Renaissance-era architect was seen chiefly as an artistic creator whose duties centred on the production of architectural drawings (Evans 1995). These include sketches, presentation drawings, and working drawings, which are studied during the design process and guide construction (Linfert 1931). The architectural drawing and its modes of display function as a system for mediating and representing three-dimensional structures on a two-dimensional surface, for instance through techniques of plan, elevation, and section. Crucial to this process is the capacity to connect these three types of drawing through parallel projection. One of the earliest examples of traditional drawing methods for representing a three-dimensional object in two dimensions comes from a correspondence between the painter Raphael and Pope Leo X (Evans 1995: 113). In Raphael’s letter, the Renaissance master praises the ability of parallel projection to provide an undistorted representation of a building on a reduced scale by connecting the corresponding parts of the drawing with parallel lines that extend from the ground plan to the elevation.Footnote 2 Through parallel projection, information captured in two dimensions can be assembled and then transformed into three-dimensional space. The lines of parallel projection, which guarantee true scale, convey the passage of the space outside the drawing surface into the drawing (Evans 1995).

Another technique that similarly facilitates entrance into three-dimensional space is perspective drawing. Erwin Panofsky, in his essay “Die Perspektive als ‘symbolische Form’”, references Albrecht Dürer, who explains that “‘Perspective’ is a Latin word, meaning ‘looking through’” while “creating a unity of space” (1996: 27). Panofsky, adopting Dürer’s view, states that “perspective” enables a viewing of space (Raumanschauung) where not only are individual objects shortened but the whole picture is transformed into a “window” through which we have the feeling of looking into a space (Panofsky ibid.).Footnote 3 Unlike parallel projection, this method does not accurately display measured distances and their relationship to each other. That is why, Raphael reasons in his letter, perspective representation is suited to painting and parallel projection to architecture. As Evans describes, “orthographic projections are more commonly encountered on the way to buildings, while perspectives are more commonly encountered coming from buildings” (Evans 1989: 21, emphasis in the original). Projection techniques are essential for the imagination and construction of three-dimensional spaces, playing a role both in the design process and the actual construction of buildings, because drawings act as storers and transmitters of data. Drawings not only carry information about a spatial composition, however, but also about material and other specifications, added in numerical or tabular format. Projective techniques are inextricably linked to modes of architectural thought and production (Evans 1995). Because drawing methods shape the act of design itself, the selected drawing medium is not just a means of representation, but a design epistemology, informing what is designed and vice versa.
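The contrast between the two techniques can be stated compactly. In parallel (orthographic) projection, plan and elevation record complementary coordinates of the same point, so its three-dimensional position can be recovered exactly; in perspective projection, the division by depth is not invertible without further information. A minimal formulation (the focal length f is introduced here purely for illustration):

\[
\text{plan: } (x, y, z) \mapsto (x, y), \qquad
\text{elevation: } (x, y, z) \mapsto (x, z), \qquad
\text{perspective: } (x, y, z) \mapsto \Bigl(\tfrac{f\,x}{z},\ \tfrac{f\,y}{z}\Bigr).
\]

Combining plan and elevation restores (x, y, z) at true scale, which is why orthographic drawings can serve as reliable carriers of measurable data, whereas the perspective image discards the depth z and cannot.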

The search for techniques of representation has been a constant within architectural history, and occurs in tandem with changing modes of architectural design thinking, as well as cultural and technological shifts. Spurred on by advancing digital technologies, software developers and architects in the post-war period began to search for a descriptive system that would make the visualisation of a building and its parts more tangible, while bringing design and construction, and indeed the entire building industry, into closer alignment. This led some to suggest that computational systems would replace the hand-drafted drawings that had been the dominant medium for communicating design between architect and builder since the Renaissance.

Drafting Traditions and BIM

Early Explorations of Building Description Systems

A key figure in the development of a computational Building Description System was architect Chuck Eastman, whose research in the 1970s attempted to bridge the divide between architecture and software.Footnote 4 Though acknowledging the essential role of drawing as a decision-making tool within the design process, and for coordinating and communicating with clients and contractors, Eastman voiced his dissatisfaction with the status-quo system of representation through plan, section, and elevation. Eastman valued three-dimensional models for building and construction over two-dimensional working drawings, which the user then has to assemble in their mind in order to arrive at an understanding of the three-dimensional space (with physical models on hand to facilitate an understanding of the spatiality of the design). Pointing to the disadvantages of drawings relative to models, Eastman questioned their prevalence in architectural practice but listed the ease of updating drawings during the design process, as well as the possibility of reproducing drawings, as reasons for their historic primacy (Eastman 1975).

Eastman’s concept for a Building Description System was therefore driven by a desire to replace drawings as the main carrier of information about buildings, and to introduce instead computational systems that could store, manipulate, and visualise the information related to a building (Eastman et al. 1974). Based on the assumption that “A building can be conceived […] as a collection of three-dimensional elements arranged in space,” Eastman and his team worked to establish a prototype for the computer-based description of a building backed by a database (Eastman 1975: 46). The database of this Building Description System enabled geometric and spatial inscriptions of a large number of components to be arranged spatially while remaining linked to one another, comparable to an actual building. Eastman championed software programs that could facilitate “the description of a very large number of elements—on the order of hundreds of thousands” (ibid.: 47),Footnote 5 so that a computer could create a detailed representation of a building with three-dimensional elements collected, stored, and arranged in space. Designing then consists of interactively defining elements according to their shape and other properties and assembling them, much as if working with a wooden model. Additionally, the Building Description System would make numerical information about the composition available for future analysis (ibid.). Born out of something analogue (the physical model), a system was thus created to enable parameters and interdependencies to be calculated digitally.
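To make the shape of such a system concrete, the sketch below (in Python, with hypothetical names; Eastman’s actual implementation predates the language and is not reproduced here) shows the core idea of a building description: each element carries its own geometry and attribute data, elements are placed in space and can be linked to one another, and both graphic views and numerical reports become derivable from the same dataset.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """One three-dimensional building component with shape and attribute data."""
    name: str                    # e.g. "wall-001"
    shape: str                   # placeholder for a solid/geometry description
    position: tuple              # (x, y, z) placement within the building
    properties: dict = field(default_factory=dict)  # material, cost, U-value, ...

class BuildingDescription:
    """A building conceived as a linked collection of spatially arranged elements."""
    def __init__(self):
        self.elements = {}       # name -> Element
        self.links = []          # (name_a, name_b) connection/adjacency records

    def add(self, element):
        self.elements[element.name] = element

    def connect(self, a, b):
        self.links.append((a, b))

    def report(self, prop):
        """Derive a numerical report (e.g. total cost) from the same dataset
        that a drawing or 3D view would be generated from."""
        return sum(e.properties.get(prop, 0) for e in self.elements.values())

# usage: one description yields both geometry for display and figures for analysis
bd = BuildingDescription()
bd.add(Element("wall-001", "box 4.0 x 0.3 x 2.8", (0, 0, 0), {"cost": 1200}))
bd.add(Element("wall-002", "box 4.0 x 0.3 x 2.8", (4, 0, 0), {"cost": 1200}))
bd.connect("wall-001", "wall-002")
print(bd.report("cost"))         # 2400
```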

Data-structures and BIM in the 1980s

Tackling questions of how to design a computational-based system within architecture, in the 1980s designer and engineer Robert Aish was involved in the creation of one of the first Building Modelling applications, a ‘proto-BIM’ called RUCAPS, which had been in development since the late 1970s at Gollins Melvin Ward Architects in London.Footnote 6 Picking up on Eastman’s Building Description System, with RUCAPS a building could be modelled as an assemblage of components, linked through associative or relational means.Footnote 7 This integrated CAD system was capable of handling multiple representations, which correlated to a 3D model (Aish 1986). In fact, Aish’s description of ‘building modelling’, published in a paper in 1986, was “the first documented use of the term Building Modelling in the English language”, and “set out the arguments for what we now call BIM and the technology to implement it” (Laiserin 2008: xii). Prerequisite for modelling with RUCAPS was the construction of a 3D model of the building, from which 2D drawings could be extracted. The software, programmed for architects, was based on a clear separation between the data that determines the fundamental structure of a building and reports that can be extracted from the digital model. One constraint of this model, though, is that once drawings are derived from it, they cannot be edited. Changes have instead to be performed on the model, and drawings subsequently re-done to ensure consistency. Nevertheless, the components in the defining 3D building model are time-stamped so that technicians can model different sequences in the evolution of the design. The temporal phasing of building is a key feature RUCAPS introduced. Another notable characteristic of this technology is that the 3D building model can be shared among multiple people, enabling real time multi-user collaboration. Additionally, the building model can be interfaced to relevant databases (for example, to convey building material properties or climatic conditions that will impact the building over time). All these factors turned the building model into an infrastructure through which various design participants could not only exchange information about a design but also evaluate its interdependencies.
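Two of the features described above, the single authoritative model from which drawings are only ever derived, and the time-stamping of components to capture phases of the design, can be sketched roughly as follows (illustrative names only; RUCAPS was a proprietary system and none of its code is reproduced here):

```python
from datetime import date

class BuildingModel:
    """Single source of truth: components carry a time-stamp for design phasing."""
    def __init__(self):
        self.components = []     # each entry: (name, geometry, added_on)

    def add(self, name, geometry, added_on):
        self.components.append((name, geometry, added_on))

    def derive_plan(self, as_of):
        """Extract a read-only 2D view of the model at a given design phase.
        Drawings are regenerated from the model, never edited directly."""
        return tuple(
            (name, geometry)
            for name, geometry, added_on in self.components
            if added_on <= as_of
        )

model = BuildingModel()
model.add("column-A1", "circle r=0.3 at (0, 0)", date(1986, 3, 1))
model.add("column-A2", "circle r=0.3 at (6, 0)", date(1986, 5, 1))

early_phase = model.derive_plan(date(1986, 4, 1))   # only column-A1 exists yet
final_phase = model.derive_plan(date(1986, 6, 1))   # both columns present
print(len(early_phase), len(final_phase))           # 1 2
```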

Recognition of RUCAPS’ capabilities was slow to reach mainstream practice, however. When Heathrow Airport’s Terminal 5 opened in 2008, RUCAPS, used by designers Richard Rogers Partnership and engineers Mott MacDonald and Arup for planning and construction, had finally become a popular choice; yet in fact, as Aish recounts, RUCAPS had already been employed during the refurbishment of the airport’s Terminal 3 twenty years earlier. The delayed uptake of Building Modelling applications can be linked to hardware economics: only with the significant decrease in computer hardware costs and increase in performance at the turn of the millennium did such systems become widely affordable. In the 1980s, the cost of the hardware needed to run RUCAPS was significantly greater than that needed to operate 2D drafting systems. By the time Revit, a building information program acquired by the software corporation Autodesk in 2002, was available, the cost/performance ratio of hardware had fallen so drastically that there was hardly any difference between the cost of running a 2D drafting application or a BIM one. In addition, thanks to policy mandates, developers have been convinced that construction and operating costs could be lower if buildings are represented as information models that include not only 3D geometry but also attribute data (Aish 2017: 68). In general though, as architect Richard Garber stresses, the construction industry has been “slow to adapt to change, […] and the changes they allowed in building construction, are still novel” (2014: 23). Consequently, the popularisation of BIM as a site for simulation in the architecture and building industry is still a contentious issue, one that raises questions about the nature of production itself.

Among the criticisms of BIM is that it requires a digital model to be designed upfront, during the initial conceptualisation process, to study the behaviour of the building. Early on, parameters are set that in turn determine future project developments. Aerospace engineer Rick Smith is one sceptic of BIM technologies and the need for upfront planning, describing BIM as a fiddly approach that can be constraining because of the need to follow programming logics, debugging and testing multiple possible scenarios before perhaps finding “that you need to adjust the program or begin programming all over again because you have taken the wrong approach” (2007: 2). Furthermore, the implementation of BIM technologies is driven by the standardisation of design and production chains: planning procedures are now based on ready-made information, and standards that used to be fixed in drawings or in writing are integrated into software code, in the hope that bundling this information into one model can increase efficiency. Some fear, however, that this will also accelerate standardisation in architecture itself. Measures to resist this are ongoing, as illustrated by the Industry Foundation Classes file format (IFC)—an attempt to create an open industry standard that itself originated in a project initiated by the International Organization for Standardization in 1985: the Standards for the Exchange of Product Data (Cardoso-Llach 2017: 32). As Aish argues, it should be architects who test the limits of tools, not the other way around (2011: 27). If the tools available constrain architectural practice, new ones must be created.

Conceiving of the architectural design process as an information flow system that should make the different steps of the design process and its transformation of data visible, Aish, alongside others in the Smart Geometry group,Footnote 8 developed the parametric form-generating software GenerativeComponents in 2003. The platform provides an editable description of the design process, permitting the user to return to selected steps and modify them over time. Through GenerativeComponents, architects’ actions are archived and the record of a design is no longer hidden from the user. Components can be re-worked, with the alteration or input of one parameter then influencing the configuration of the whole design. Aish compares the modelling process to a “live graph-based system” that enables an exploration of numerous design options, “which would be practically impossible with manual modelling techniques associated with conventional BIM applications” (2017: 69). This signalled a marked shift in the conceptual approach to design, underpinned by a mathematical model capable of producing a much more nuanced simulation of a building. Accordingly, building modelling systems are characterised by a strong focus on process and the logic of the process—which the architect must identify, along with any dependencies between input parameters and associated information from real-time databases. Importantly, computational design tools provide a way for the architect to externalise design rules and relationships as a script or graph, and manage the execution of that script in order to generate a building description.
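The graph-based behaviour Aish describes can be approximated in a few lines: each component is a node whose value is a function of its inputs, and editing one upstream parameter re-evaluates everything that depends on it. The names below are purely illustrative and do not reflect GenerativeComponents’ actual API.

```python
class Node:
    """A parametric component: value = rule(inputs), re-evaluated on demand."""
    def __init__(self, rule, *inputs):
        self.rule = rule
        self.inputs = list(inputs)

    def value(self):
        return self.rule(*(i.value() for i in self.inputs))

class Param(Node):
    """An input parameter the designer edits; edits propagate to dependants."""
    def __init__(self, v):
        self.v = v

    def value(self):
        return self.v

# a toy design graph: bay spacing and bay count drive beam length and column positions
spacing = Param(6.0)
bays = Param(4)
length = Node(lambda s, n: s * n, spacing, bays)
columns = Node(lambda s, n: [i * s for i in range(n + 1)], spacing, bays)

print(length.value(), columns.value())   # 24.0 [0.0, 6.0, 12.0, 18.0, 24.0]
spacing.v = 7.5                          # edit one parameter ...
print(length.value(), columns.value())   # ... and all dependent geometry updates
```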

According to Aish, this approach then demands a different knowledge set to that expected in the past. Architects must be proficient in writing scripts and creating graphs, something that requires some affinity for algorithmic thinking. Instead of using a set of representational systems (sketches, drawings, models) to visualise the forecasted building, the architect must distil the design idea into a sequence of discrete operations (Aish 2017). There is a tendency within software design, however, to package fundamental geometric and computational operations into intuitive commands such as move, copy, and scale, hiding them from users in a benevolent but perhaps misguided effort to protect them from complex mathematics. Aish insists that architects nonetheless must acquire the fundamental knowledge of algebra and geometry on which programming and computer graphics are based. The importance of weighing up how a selected digital tool might work with a design hints at the fact that the design process involves an assemblage of tools that possess their own kind of agency, an agency that works in tandem with the designer’s. Users must therefore consider whether it makes sense to use computational design on a project: it is only practicable where underlying design rules are detectable and worth abstracting into a script or graph, and where the effort of creating that script or graph is repaid by executing it multiple times to explore design options (Aish 2017).

Parametricism and Building

Within the field of parametricism—a system in which all elements of a design are interlinked and an outside influence that changes one alters the others—BIM is part of an effort to merge software and construction protocols and establish computational-based visualisation and modelling techniques across architecture and associated industries (Poole & Shvartzberg 2015).Footnote 9 Parametric models were pioneered in the shipbuilding industry, with Intergraph Corporation, a software developer founded in 1969, a key player in the sector.Footnote 10 Aish was part of Intergraph’s onsite team at the Gdansk shipyard in Poland, helping naval architects to adopt the system. He recalls that when he and his colleagues arrived in Gdansk they were handed the shipyard’s parametric design manual, which made it clear that “the naval architects had already been conceptualizing their design and fabrication process as a rule-based parametric system long before the arrival of parametric software” (Aish 2017: 70). What we can glean from this anecdote is that parametric principles coded into software are also inscribed within traditional techniques, and that material approaches from various eras and industries can converge to create strategies that are innovative but not entirely unprecedented.

As parametric methods were integrated into software development, the simulation of building processes using parametric software inspired a rediscovery and reinterpretation of historic architecture through the perspective of parametricism as well. While Aish was working with Intergraph’s software team in Paris, he was visited by architect Mark Burry, who since 1979 had been attempting to rediscover the design principles in Gaudi’s surviving plaster models of the Sagrada Familia in order to complete the unfinished building. Following discussions with Aish, Burry chose to reimagine Gaudi’s underlying design principles as a series of parametric and geometric relationships.Footnote 11 For example, examining the geometry inscribed in one of the plaster models of the cupola—through a parametric system—helped Burry create the construction geometry for the yet-to-be-built cupolas and towers (Burry 2011: 109).

Parametric principles are further evident in a number of prominent projects in the late twentieth and early twenty-first centuries, including the roof designed by Grimshaw Architects for Waterloo International Railway station in 1993, whose underlying geometric relationship was later recovered, or reverse-engineered, from a series of arched models. The roof of the Waterloo building features a number of arches constructed as banana trusses, which share a set of design rules based on tangential arches. All the arches possess a certain commonality, while being different from one another (Aish 2017). However, the parametric basis for the design was conceived by architects without parametric computation and prior to the availability of this technology. Only afterwards was the design reinterpreted using parametric software. As Aish explains, “with the underlying geometric relationship recovered, it was then possible to rebuild the station roof as the architects had originally conceived it: as variations on a common geometric theme” (2017). What is interesting in this case is that the vision for the parametric building came from architects, not from commercial software developers responsible for creating architectural applications. What unites the aforementioned examples—the design and fabrication process at the Gdansk shipyard, the Sagrada Familia, and the roof at Waterloo International—is that they made use of underlying geometric relationships and parametric principles before the availability of parametric software. However, the use of parametric techniques on these projects was grounded not only in technical factors but cultural ones as well.
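A toy version of the Waterloo principle, variations on a common geometric theme, is sketched below: a single rule generates every arch and only the span varies from truss to truss. The rule shown is a generic circular arc, chosen for brevity; it is not Grimshaw’s actual tangential-arch geometry.

```python
import math

def arch(span, rise_ratio=0.18, segments=8):
    """One design rule shared by the whole family: a circular arc fixed by its span.
    Because rise_ratio and segments are common, every arch is a variation of one theme."""
    rise = rise_ratio * span
    # radius of the arc passing through (0, 0), (span / 2, rise) and (span, 0)
    r = ((span / 2) ** 2 + rise ** 2) / (2 * rise)
    cx, cy = span / 2, rise - r
    half_angle = math.asin((span / 2) / r)
    steps = [(-half_angle + t * 2 * half_angle / segments) for t in range(segments + 1)]
    return [(cx + r * math.sin(a), cy + r * math.cos(a)) for a in steps]

# related but non-identical arches, as in a roof whose trusses share one rule
for span in (35.0, 42.5, 48.0):
    crown = max(y for _, y in arch(span))
    print(f"span {span:5.1f} m -> rise {crown:4.1f} m")
```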

CAD and the Development of Graphic Interfaces and Surface Modelling in the Post-war Period

Interface Technologies

Examining a building with Building Modelling technology requires a designer to use sophisticated computer graphics tools (Garber 2014: 14). Key to the development of Building Modelling Systems was therefore the question of how to interact with a computational model and its underlying mathematical structure. Ivan Sutherland’s research into depicting the shape of objects in visually realistic terms was foundational to the progress of computer graphics capabilities. In 1963, Sutherland, a student of Claude Shannon, Marvin Minsky and Steven Coons at the Massachusetts Institute of Technology (MIT), developed the Sketchpad system, one of the first software programs to feature a graphical user interface that let users interact with the design on a computer screen. When Sutherland presented Sketchpad at the Spring Joint Computer Conference in the same year, he screened a video to demonstrate the capabilities of the software, showing the audience how using a light-pen on the computer screen, and a keyboard to type in commands, enabled an object to be sketched, moved, enlarged, and rotated (Bruegmann 1989: 140). For the user, then, drawing was connected to information input, no longer entailing marks on paper but rather entries on a keyboard. What distinguishes digital visualisations from analogue ones, therefore, is their focus on an interface that establishes a relationship between the user and the underlying database. Through the interface, the database can generate different visualisations and user experiences. In this vein, computer artist Frieder Nake speaks of a surface and a subface, beneath which information is stored, ready to be translated into various visualisations. Nake surmises that the digital image can only be understood if we consider it to be a coupling between a visible surface and a manipulable subface (2008). Concepts of surface and subface were not only central precursors to contemporary practices of BIM technologies but also provided the conceptual foundation for the epistemology of the image in the age of computer simulation, heralding a change in modelling processes (Cardoso-Llach 2017).

As research into the manipulation of shapes and their fabrication became increasingly important in the field of software development, so too did questions about human-machine interaction, the graphics displayed on the screen, and the mathematics used to calculate them. During the post-war period, scholars at MIT were already studying computation and form modelling in the context of military research. Steven Coons, a mechanical engineer and early pioneer of computational methods who was working on the concept of a CAD system in MIT’s Computer Applications Group, speculated about a more direct way to use the computer from the conception of a design through to production (1963). These considerations were based on the idea that humans and machines could be joined in a “cooperative complex, a combination that would use the creative and imaginative powers of the man and the analytical and computational powers of the machine each with the greatest possible economy and efficiency” (Coons 1963: 230). Subsequently, Coons envisaged interactive computer graphics as a tool that would aid the designer and also create an intimate relationship between humans and machines: the computer and the architect were conceived of as a complementary and collaborative unit with the computer taking over the analytical and computational steps of the design, freeing the architect to focus on creative tasks (1966: 9). Coons conceived of design as resulting from a combination of intuitive drafts done by the designer and rationalised computer-generated drawings; the computer helped a hand-drawn sketch that expressed a “nebulous” concept to become more concrete (ibid.). Coons stressed the potential of this feedback loop between the designer and computer as computational analyses and evaluations triggered changes in the concept and led, through various iterations, to a final design (ibid.). While the architect would sketch on the computer screen, the computer would refine the sketch into a digital drawing, simultaneously performing various numerical analyses, such as structural strength, clearances of adjacent elements, and other tests (Coons 1963: 300). These optimisation processes were carried out automatically by the computer, and the designer could respond if need be (ibid.).

In creating a computer-aided design system, Coons thought it necessary that information about form be presented graphically, so that shapes or elements of a drawing could be modified in new ways; the graphic faculties would extend the drawing system “with a freedom and precision far surpassing what is possible with pencil and paper” (ibid.: 301). The system his team worked on therefore incorporated input devices that became new architectural tools, such as light pens, tablets, and screens. These tools and graphic interfaces had to be paired with a computer that could perform all of the mathematical analyses necessary for the design process, and store a catalogue of information about standard parts, materials, and processes (ibid.). Coons pictured two different approaches for achieving these aims. One would embed in the computer a large set of special-purpose processes, each designed to perform a specific task. This approach would reach its limit, however, when a design went beyond the system, requiring a capability not embedded in the software. So Coons proposed a second strategy, “in which the large population of special purpose routines is replaced by a few (perhaps indeed only one) routines” that would let the designer modify the system by employing individual language forms, including the graphical form (ibid.: 302). However, there were many obstacles to establishing the CAD system, which relied on a combination of general and specific techniques. This anticipated future, and ongoing, challenges posed by software development in general and BIM software in particular, namely how much leeway should be given to the architect or designer to program their own language, and thus create a hybrid between a ‘personalised’ and ‘standardised’ building modelling system. Coons’ research would prove further integral to generating a visual style based on computational and mathematical curvilinearity through surface modelling, forms that came to characterise iconic architecture at the end of the twentieth century.
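Coons’ two strategies, a catalogue of fixed special-purpose routines versus a small general mechanism the designer can extend, anticipate the later tension between ‘standardised’ and ‘personalised’ modelling systems. A loose, hypothetical analogy in contemporary terms (not Coons’ own formulation): instead of hard-coding every command, the system exposes a registry into which the user adds operations of their own.

```python
# Strategy 1 (special-purpose): every capability is a fixed, built-in routine.
def move(shape, dx, dy):
    return {**shape, "x": shape["x"] + dx, "y": shape["y"] + dy}

def scale(shape, factor):
    return {**shape, "size": shape["size"] * factor}

# Strategy 2 (general-purpose): one small mechanism, extensible by the designer.
operations = {"move": move, "scale": scale}

def register(name, fn):
    """Let the designer add a routine the system's authors never anticipated."""
    operations[name] = fn

def apply(name, shape, *args):
    return operations[name](shape, *args)

# a user-defined operation, expressed in the same 'language' as the built-ins
register("mirror_x", lambda shape, axis: {**shape, "x": 2 * axis - shape["x"]})

s = {"x": 3.0, "y": 1.0, "size": 2.0}
print(apply("mirror_x", apply("move", s, 1.0, 0.0), 0.0))
# {'x': -4.0, 'y': 1.0, 'size': 2.0}
```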

Surface Modelling

Interactive curve design for numerically controlled machinery was first put to use in the aircraft and missile industries. Fields that benefited from military-funded research in the Cold War years, including aviation, shipbuilding, and automotive design, became benchmarks for software developers and architects studying visualisation and calculation methods because of their work with surface modelling. Before moving to MIT’s Electronic Systems Laboratory, Coons was employed at the Chance Vought Aircraft Company, exploring curved surface elements for aircraft design during World War II. He furthered the idea of surface elements at MIT, where his research into a surface modelling method that would interpolate between any kind of curve (provided the curves intersected at the corners) resulted in what became known as the ‘Coons Patch’. Coons’ findings enabled geometric malleability within computer graphics and the simulation of complex geometries, and ultimately formed the basis for surface descriptions still used today such as B-spline and NURBS surfaces.Footnote 12
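The construction behind the patch is compact enough to state. Given four boundary curves c0(u), c1(u) (bottom and top) and d0(v), d1(v) (left and right), with u, v in [0, 1] and the curves meeting at the corner points Pij, the bilinearly blended Coons patch interpolates all four boundaries (notation mine):

\[
S(u,v) = (1-v)\,c_0(u) + v\,c_1(u) + (1-u)\,d_0(v) + u\,d_1(v)
- \bigl[(1-u)(1-v)\,P_{00} + u(1-v)\,P_{10} + (1-u)\,v\,P_{01} + u\,v\,P_{11}\bigr].
\]

The first four terms are two ruled surfaces spanning the opposite boundaries; the bracketed bilinear term subtracts the corner contributions that would otherwise be counted twice.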

In France, Pierre Bézier, an engineer with a background in geometric and physical modelling, and mathematician Paul de Casteljau, working at Renault and Citroën respectively, were testing graphic methods for curve design within the automotive sector. De Casteljau and Bézier developed parametric notations for designing continuous curves, efforts that ultimately converged in what came to be called ‘Bézier curves’ (Carpo 2017: 58–59). Bézier and de Casteljau’s research was initially driven by the aspiration to translate free-form curves into equations, thereby eliminating manual approximation from the design process. Eventually Bézier discovered a way to link design and manufacturing through mathematics and computing tools, spurring the creation of CAD and computer-aided manufacturing (CAM) techniques. Bézier’s research underpinned the CAD/CAM system Unisurf, launched by Renault, but also competing proprietary technologies developed by other software companies like Dassault Systèmes, the software arm of the aircraft maker Dassault, which were adopted in architecture in the 1990s (Bézier 1971; Carpo 2017: 58–62). In these years, Coons and Bézier witnessed the transition from manual to computational design across industries.Footnote 13 Both were part of an international interdisciplinary network of researchers advancing computational design theories and technologies, and crafting techniques for surface representation using CAD/CAM systems. Stressing the interrelated nature of new technological concepts and aesthetic and intellectual outcomes, Jeremy Douglass and Lev Manovich note that prior to the 1990s, “the only practical techniques for representing 3D objects in a computer was to model them through collections of flat polygons” (2011: 316). However, by the start of that decade, thanks to faster processing speeds and increased computer memory, architects could viably represent spatial form by drawing on Coons and others’ work from the 1960s, notably NURBS modelling methods (ibid.). Back in the 1960s and 1970s, another scholar was pivotal to investigations into curve modelling and how information is processed across industries, laying the ground for future BIM explorations.

At the University of Cambridge, Robin Forrest, a Visiting Research Fellow on the CAD and MAC projects at MIT in 1967 alongside Coons and Douglas Ross, contributed to the mathematics of shape representation in his role as a founding member of the UK-based CAD Group.Footnote 14 Coining the term ‘computational geometry’ in 1971, Forrest pioneered design techniques at the interstices of engineering, mathematics, and the burgeoning field of computer science (Forrest 1971, 2017). In the 1960s, only a handful of people were researching the development of curved surfaces within computer-aided design, primarily in the UK and the US. Information was openly available, however, through technical reports; “papers and views were readily exchanged and commercial confidentiality did not intrude” (Forrest 2001: 13). Forrest therefore had access to reports about early work on interpolatory parametric spline surfaces and the combination of cubics and rational conics at Boeing, and the rational parametric formulations of conic sections at MIT (ibid.). In addition, after visiting General Motors Research Laboratories in 1969, where he could observe how Renault designed surfaces, he recast “the Bézier definition of curves and surfaces in terms of polygon vertices rather than polygon sides” (ibid.: 13–14). The resulting formulation would go on to be used in software codes for curve modelling, fostering a computational approach in the automotive and aerospace industries that privileged smooth and complex forms, which engendered a recognisable architectural style during the 1990s.
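The control-polygon (vertex) formulation that Forrest favoured is the one evaluated by de Casteljau’s algorithm: a point on the curve is obtained by repeatedly interpolating linearly between the polygon’s vertices. A minimal sketch:

```python
def de_casteljau(vertices, t):
    """Evaluate a Bézier curve at parameter t (0..1) by repeated linear
    interpolation of the control polygon's vertices."""
    pts = [tuple(p) for p in vertices]
    while len(pts) > 1:
        pts = [
            tuple((1 - t) * a + t * b for a, b in zip(p, q))
            for p, q in zip(pts, pts[1:])
        ]
    return pts[0]

# a cubic curve defined by four polygon vertices, sampled at its midpoint
polygon = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(de_casteljau(polygon, 0.5))   # (2.0, 1.5)
```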

As Forrest recounts, the study of computational systems at Cambridge, which grew out of collaboration between mathematicians and engineers, was not without difficulties, because each group had differing approaches to solving geometric problems. The divergence, according to Forrest, stemmed from the fact that engineers focused chiefly on design, material, and manufacturing constraints, whereas mathematicians were more interested in abstraction. Other disciplinary hurdles concerned graphics, which were not as commonly used in the computer sciences as they were in engineering departments. While many engineers may not have grasped the importance of computational data structures, mathematicians and computer scientists had to warm to graphics as a useful mechanism for conveying information (Forrest 2017).

The Evolution of Modelling Techniques in Architecture: Gehry Partners

During the 1970s and 1980s various software was developed to provide modelling solutions within the field of automotive and aircraft design and manufacturing, but its adoption within architecture was slow. Even though Boeing, McDonnell Douglas, and Lockheed each had their own modelling systems at the time,Footnote 15 CATIA—software developed by Dassault Systèmes and used by Frank Gehry—eventually became a dominant player in the field.

The progression of digital models and their utility for architecture and building practice has been shaped decisively by Gehry Partners since the 1990s.Footnote 16 In 2002, software explorations in the AEC (Architecture, Engineering & Construction) industry led to the founding of the software company Gehry Technologies, acquired by Trimble in 2014.Footnote 17 The successful use of digital models at Gehry Partners is related to a series of projects that the office worked on in the late 1980s and early 1990s. Among the first ventures to feature digital models in the construction process were the Fish sculpture in Barcelona and Bilbao’s Guggenheim Museum. Each project utilised digital models as decisive design tools.

To illustrate the specificity of how practitioners adapted and applied digital methods in situ, I will briefly describe the firm’s process. At the conceptualisation stage, sketches drawn by Gehry are translated into physical models built with paper and cardboard, the results of which are then scanned, rationalised, optimised, and documented using software—notably, a version of CATIA. This output in turn provides feedback to explore the design through physical models. While the designs themselves are not generated by a computer, they are recorded and refined through the interaction of digital and analogue processes. The need for timely and cost-effective implementation led Gehry Partners to explore methods of digital representation, editing, and fabrication in depth. For the Fish project, the office relied on cross-industry expertise in calculating and generating curved surfaces, provided by Bill Mitchell, head of the Architecture and Urban Design Lab at MIT, and Mark Burry, whose application of digital models to the Sagrada Familia I mentioned earlier. Staffers created a constructional system from the digital model of the Fish, which was used to provide data for cutting and installing the sculpture’s parts on the ground. This process was notable because everyone involved in the design during the planning and construction phase focused their communications through the CATIA model: engineers progressed the design through it, and fabricators cut and assembled the components according to the digital data. Problems that arose during assembly could be solved in the presence of all participants on site. The Fish was, as a result, realised without design drawings; that is, it was an almost paperless project—an outcome that was achieved through a specific mix of local craftsmanship, ambitious deadlines, and favourable investment conditions. Similarly, albeit on a different scale, the execution of the Guggenheim Museum depended on a practice in which the digital model could be applied alongside other building techniques. An understanding of the capabilities of digital tools gained in Barcelona facilitated new applications of digital technologies in Bilbao, where the task at hand was more complex. Through the digital model, nuanced analyses of the design could be carried out, providing a problem-solving infrastructure between design and building participants. The open (and less detailed) CATIA model enabled each company working on the project to complement data from the model with other forms of representation, for example, 2D drawings, thereby fitting in with local working methods.

The Fish and the Guggenheim Museum demonstrate particularly clearly that the innovative effect of the digital model could only unfold through its application and its specific use in a design context. This in turn depended on the willingness of the actors involved in design and implementation to engage in a form of communication based on the digital model. As Jim Glymph, a senior partner at Gehry Partners, points out, these conditions were the result of a convergence of technical and social practices, which created a site where the function of the digital model could take various forms (Glymph 2005). With the application of digital models at Gehry Partners, a reflection on the procedural nature of architecture became more crucial to the field as a whole. The use of the communication-enabling digital infrastructure required stepping outside existing rules, however. The actors in the building process depended on social, legal, and economic agreements that bypassed both industry regulations and conventions of visualisation. For example, in Barcelona, by shifting liability concerns, the developer put the means of control in the hands of architects and fabricators. This meant that the architects no longer needed to provide physical documentation. Branko Kolarevic summarises the transformations associated with the CATIA model thus: “Software made for […] airplanes was used to develop and construct a built structure, [3D models] were used in the design development, for structural analysis, and as sources of construction information, in a radical departure from the normative practices of the profession” (2005: 31). Gehry Partners disregarded projective drawing techniques and the representation of structures through section, plan, and elevation in favour of versatile digital models that facilitated information exchange with other participants in the building process. The computational model extended the capacities of parallel projection so that, by offering a descriptive system that stores, manipulates, and visualises information in three dimensions, data about design, construction, and operational analysis could be inputted and interdependencies calculated. At the same time, these modelling processes were adapted and networked across various disciplines as needed.

Another architectural office that embraced digital fabrication was SHoP Architects, who early on leveraged digital tools to coordinate and integrate building systems virtually. Specialising in large-scale development works, SHoP notably accepts equity rather than traditional payment.Footnote 18 Because of this operational model, the firm is also responsible for construction, and so decided to use digital tools as a way of delivering projects on time and at cost (Garber 2014: 46). What is clear here is that BIM techniques are prone to a trend Reinhold Martin has identified, of digital visualisation techniques shifting from tools for modelling to instruments of production (2017). Finding different ways to do construction in the late 1990s spurred some firms to adopt business models “that replicated the entrepreneurial dynamism of the neoliberal political economy” (ibid.: 79). For instance, to respond to issues such as time constraints (in terms of how long it takes to complete certain tasks) and the exchange of data across different locations and sectors, some architectural offices developed their own operational models, consolidating knowledge from the AEC industry in one place in order to promulgate a practice that integrates finance, technology, aesthetics, and culture.

Conclusion: Outlook on Simulation Through Building Information Modeling in Architecture

Even though today proponents of BIM often make its role in design and building practice seem like a universal certainty, as the case studies and oral histories introduced in this article suggest, its uptake is neither seamless, universal, nor without historical precedent. What has occurred is that developments within computation and CAD have been integrated into BIM technologies, fostering a design practice geared towards creating a model that can generate various solutions, which can be optimised according to selected parameters. Digital models—which are dependent on the processing of data, enable the simulation of planning and building processes, and transmit data for fabrication purposes—strive to create a direct link between design and building. Subsequently, with the computer acting as a universal machine, an increased interdependency between the analysis and synthesis of design parameters has been established in architecture. In concert with these developments, new standards are emerging, such as the Model View Definitions (MVD) format—which determines how a BIM model is represented—along with universal data formats like the IFC. As standardisation becomes a prerequisite for collaboration between different actors and interoperability between the various software used on a project, there is a risk of creating reductive or overly simplified designs.

The ways in which this logic has impacted design at the turn of the millennium are manifold. Carpo has noted that BIM applications, reliant as they are on consensus between all parties and driven by the need to save costs and time, can lead to plain and utilitarian built work (2017). Simultaneously, BIM technologies have contributed to the realisation of the 1990s’ most visually spectacular architectural projects, like the undulating surfaces of the Guggenheim Museum (Carpo 2017: 142). This apparent paradox is crucial to evaluating BIM as a simulation mechanism: at one extreme, it produces nondescript buildings built to minimum standards; at the other, an iconic visual style that pushes the boundaries of architectural visualisation and fabrication.

When it comes to applying BIM technologies on a project, accessibility is vital. As Carpo observes, “Participation in BIM-based design is by invitation only, […] even though one could imagine a variety of other parties interested in the development of a building, including end users, communities, and even citizens” (2017: 141). In the post-war period, information modelling software was not as closed as it is now. Software was typically sold alongside hardware at no additional cost and, though not open source, it was often possible to modify programs. The universities that adopted this technology wanted to create flexibility and dynamism in its usages and to foster a spirit of intellectual sharing amongst a community of users. This ideal was evidenced by the variety of software available on the market in the 1970s and 1980s. The current software environment, however, is characterised by fewer software providers covering more applications. Consequently, we see greater market homogeneity with fewer providers attaining commercial dominance. The popularity of cloud computing is further altering the landscape. However, no single application has all the functionality required for a particular project, so teams turn to many different applications in order to achieve their design aims. When many separate applications, each one tailored to a specific task, are available, the designer is required to move data between them, which in turn prompts a rethinking of potential problems in terms of those applications and their uses (Bernstein & Deamer 2010). Given that software innovation is always accompanied by questions about the terms of social participation, accessibility, scope, and the actual application of any technological tool, regulated or restricted access has concerning implications for architecture as a field.

On the subject of whether BIM technologies are able to close the gaps in drafting and fabrication operations, I hope to have demonstrated that the design process is embedded in a variety of complex and situational technical, practical, and economic conditions, and must always include an assessment of the appropriate media and software for a particular project. Even though the contemporary use of computational models like BIM is connected to the goal of reducing the exchange between various media, the ideal building representation system remains bound to the challenge, formulated by Evans, of how to enter three-dimensional space. Contrary to the idea that design and building are one continuous process, the design process is in fact constituted by different media, and although standards are essential for exchanging data in digital formats, the impetus to challenge those standards prevails amongst architects. Nevertheless, as the early concepts developed by Eastman (who devised a manageable data structure for describing buildings) and Aish (who introduced a temporal factor, and the ability to model alterations over time) indicate, a rethink of the notion of a building—based on information exchange and the annotation of data—went hand in hand with the introduction of digital technologies and a changing design environment. Rather than conceiving of a building as a fixed result, architects increasingly view buildings, as well as the design process itself, as an ensemble of possible solutions. Because architects can control and orchestrate design processes on a more detailed level, a continuous negotiation of the rules and goals of each project has become commonplace. In turn, a different knowledge set is required to challenge existing formats and standards, and forge one’s own tools. As the example of Gehry and others implies, aligning design and building remains a messy process, driven by economic, political, and social factors that impact software itself and are part of its use. Furthermore, there are some aspects of a design that cannot yet be processed by a computer, namely the interplay between automation, social factors, and idiosyncratic elements. A serious appraisal of the turn to computational architecture must account for these contingencies. While some BIM advocates argue for a design process without gaps, gaps in the form of critical distance remain essential to design thinking.