1 Introduction

The Deep House project (Fig. 1) was commissioned by a private client in Austria interested in a design that would encapsulate the features of modern residential architecture and transform them into something different. The emphasis is placed on the term different, not new. It is fitting that the client is a neuroscientist who wanted his new home to act as a mirror of his profession; he was very keen to see how we would design his home using methods that combine insights from neuroscience with computational design methodologies. In our conversations, the client pointed out the intrinsically difficult nature of the 'new,' as our brains are not well equipped to identify the features of the new; the brain, he noted, operates like a giant remixer (Benedek et al., 2018; Maruzani, 2022). Thus, when we think of categories such as the new, more often than not we are discussing the recombination of known features into results that are sometimes surprising, something that is occasionally described as creative thinking (Boden, 1993).

Fig. 1

Exterior view of the Deep House project, SPAN Baukunst, del Campo, Manninger, 2022

There is always the question of whether something truly new can emerge from a dataset of existing designs, as is the case in the Deep House project presented in this essay. In his book Architectures of Time: Toward a Theory of the Event in Modernist Culture (Kwinter, 2001), Sanford Kwinter, quite naively, assumes that the novel or the new can be described as follows: '...novelty is simply a modality, a vehicle, by or through which something new appears in the world.' This opaque way of describing the new certainly does not provide a solid definition of what new and different actually mean, especially for a conversation within the realm of Neural Architecture (del Campo, 2022). (See also the glossary at the end of the paper.) Taking this into account, I suggest clarifying this position for work created with the aid of artificial intelligence (AI), machine learning (ML), and deep learning (DL). AI can be considered an umbrella term for both ML and DL.

The problem of the new is, of course, deeply entangled with the debate on creativity, which has been part of the discussion on AI and architecture over the last couple of years. Once the question 'can AI be creative?' is put on the table, the question 'can AI create something new?' emerges on the spot. Considering the deductive logic of these questions, it might be helpful to interrogate the arguments made for or against AI as a creative entity. The Grande Dame of AI and creativity research, Margaret Boden, argued that creativity can be divided into a series of processes that constitute three types of creativity, all of which aim to create new ideas (Boden speaks here of novel ideas): combinational, exploratory, and transformational creativity. Combinational creativity consists of the ability to combine existing ideas in improbable ways to arrive at novel results. The architecture discipline is full of examples of combinational creativity, perhaps most visibly in the mashups of the postmodern era, such as Hans Hollein's Aircraft Carrier City in Landscape (Fig. 2), Philip Johnson's AT&T 'Chippendale' building, or Frank Gehry and Claes Oldenburg's Chiat/Day building. An argument could be made that a technique such as StyleGAN, which interpolates results from existing datasets, is in some way related to aspects of combinational creativity (Fig. 3).

Fig. 2

Hans Hollein, Aircraft Carrier City in Landscape, project, exterior perspective, 1964, an example of combinational creativity as described by Margaret Boden, Philip Johnson Fund, MoMA, © 2022 Hans Hollein

Fig. 3

My Family. This series of portraits results from a deep learning process based on the portrait collection of the UMMA in Ann Arbor, Michigan. The uncanny nature of the portraits encapsulates some of the arguments made in this article about estrangement and defamiliarization as a valid category of artistic inquiry. Image: SPAN Baukunst, del Campo, Manninger 2021

They show a close relationship with techniques used in poetic imagery and with aspects of analogy as a literary technique. Unfortunately, the problem is a little more complex due to the nature of curve fitting and the ability of NNs to induce aspects of defamiliarization into the results, but let us leave it at that for the time being. In addition, Margaret Boden described two more creativity concepts: exploratory and transformational (Boden, 1998a). Exploratory creativity involves the interrogation of structured conceptual spaces, which results in ideas that are not only novel but also surprising in their novelty. Exploring this 'idea space' resembles the nature of latent walks through the results of a dataset interpolation. In a Generative Adversarial Network (GAN), the generative model learns to map points in the latent space to generated images. The latent space does not carry any semantic information except the meaning applied through the generative model. Despite this, the structure of the latent space can be explored via interpolation between data points, and robust vector arithmetic can be performed within it to target specific effects on the generated images (Brownlee, 2022). Lastly, Boden talks about transformational creativity. This concept involves the transformation of one or more dimensions of the conceptual space so that new structures can emerge which could not be achieved previously. The more fundamental the dimension concerned and the more powerful the transformation, the more surprising the new ideas will be (Boden, 1998b). The last two concepts of creativity, exploratory and transformational, cross-pollinate each other. Consider, for example, how architects interrogate a design space. There are constraints on the design based on program, budget, building code, structural loads, environmental conditions, cultural preference, etc. However, architects are able to navigate and explore the design space to find surprisingly ingenious solutions to the spatial problem at hand. Occasionally, an architect creates a transformational idea; think of Alberti's paradigm (Carpo, 2008) or Adolf Loos's Raumplan (Jara, 1995) (Fig. 4).
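
To make the notion of a latent walk more concrete, the following minimal sketch in Python shows what interpolation and vector arithmetic in a latent space look like in code. The generator G here is a hypothetical stand-in (a fixed random linear map), not the StyleGAN model used in the project; only the latent-space operations are the point of the example.

```python
import numpy as np

# Minimal sketch of a latent walk. A hypothetical generator G maps latent
# vectors z (here 512-dimensional, as in StyleGAN) to images. For the sake of
# a runnable example, G is stubbed out as a fixed random linear map.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 64 * 64))          # stand-in for a trained generator

def G(z):
    """Map a latent vector to a placeholder 64x64 'image'."""
    return np.tanh(z @ W).reshape(64, 64)

z_a, z_b = rng.standard_normal(512), rng.standard_normal(512)

# Interpolation: points on the line between two known latent codes yield a
# smooth morph between the two corresponding images (a 'latent walk').
frames = [G((1 - t) * z_a + t * z_b) for t in np.linspace(0.0, 1.0, 10)]

# Vector arithmetic: offsets in latent space can be added to steer a feature,
# e.g. pushing the result slightly toward z_b.
z_c = z_a + 0.3 * (z_b - z_a)
steered = G(z_c)
print(len(frames), steered.shape)
```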

Fig. 4

Adolf Loos, Villa Müller, Prague 1930: "I do not draw plans, facades or sections... For me, the ground floor does not exist... There are only interconnected, continual spaces, rooms, halls, terraces..." Shorthand record of a conversation in Pilsen, 1930. (Image: Matias del Campo 2022)

To round up Boden's interrogation of creativity, it might be necessary to mention her differentiation between H-creativity and P-creativity. H-creativity (historical creativity) provides ideas that are relevant and innovative on a historical scale and that no one has had before, a testament to groundbreaking inventions and discoveries such as Newton's laws of motion, Einstein's theory of relativity, or Le Corbusier's Maison Dom-ino. P-creativity (psychological creativity) refers to the individual mind and the ideas it generates, regardless of whether someone else has already had the same idea (Boden, 1993) - a typical disposition in architecture, where the habit of maintaining the cliché of the lone genius results in the wheel getting reinvented repeatedly. On the other hand, Demis Hassabis, the CEO of DeepMind, postulates that creativity can be divided into three camps: interpolative, extrapolative, and inventive. In his lecture for the Royal Academy of Arts (Hassabis, 2018), Hassabis pointed out that neither artificial neural networks (ANNs, the basis of most AI applications today) nor biological neural networks (human brains) are particularly good in the category of invention. (Our neuroscientist client confirmed that.)

However, humans still have an edge over machines in the category of extrapolation. Perhaps a quick rundown of the definitions of these terms is in order. For the term extrapolation, we will rely on a definition lifted from statistics, which describes extrapolation as a process of constructing unknown data points outside a discrete set of known data points (Zhan et al., 2012). Interpolation, on the other hand, constructs unknown points between known points of a given dataset. Machines are generally much faster than humans at interpolation; however, the process can produce results that are less meaningful and occasionally prone to greater uncertainty. In the case of neural architecture, this is not always a negative outcome, as aspects of uncertainty can produce surprising and 'different' results (del Campo, 2022). Do these processes indeed produce something 'new'? I would argue that no, they don't. Interpolations and extrapolations are intrinsically related to the material used to interpolate or extrapolate. The results do not appear out of thin air; there is always some sort of existing dataset, whether it is a collection of images and data for a machine or the memories in our minds for humans. As Parmenides of Elea put it, nothing comes from nothing - "Ex nihilo nihil fit" (Pruss, 2002) - or, to use a more modern expression: "There is no such thing as a free lunch" (Adam et al., 2019).
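
The difference between the two operations is easy to see in a few lines of Python; the data below are purely illustrative, and the linear fit merely stands in for whatever model is doing the extrapolating.

```python
import numpy as np

# Known data points (a toy 'dataset').
x_known = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_known = x_known ** 2          # the underlying relation, unknown to the model

# Interpolation: estimating values *between* known points.
y_interp = np.interp(2.5, x_known, y_known)      # -> 6.5 (true value is 6.25)

# Extrapolation: estimating values *outside* the known range, here by fitting
# a straight line to the data and evaluating it beyond the last known point.
coeffs = np.polyfit(x_known, y_known, deg=1)
y_extrap = np.polyval(coeffs, 6.0)               # -> 22.0 (true value is 36.0)

print(y_interp, y_extrap)   # extrapolation drifts much further from the truth
```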

After this short excursion into some of the current concepts of creativity and artificial intelligence, it is possible, with some certainty, to deduce that Boden's and Hassabis's definitions of creativity are fairly speculative and lack the neuroscientific foundation needed to understand how creative processes are performed in our brains. Instead, I would like to rely here on the work of Anna Abraham, a neuroscientist focused on creativity, who has postulated a series of exciting concepts that expand our understanding of creativity as a neural process (Abraham, 2018). In particular, she gives insight into the semantic and cognitive neural networks at play in creative processes, specifically regarding the generation of original ideas. This might bring an end to the debate on whether AI can be creative. As the research shows, it might be a moot point to discuss whether AI can be creative if we lack a solid understanding of what creativity is and how to encapsulate it in a mathematical algorithm that we can apply in an artificial neural network. The same holds for the problem of the 'new.' The more interesting question for me is why the results created by artificial neural networks are so different. Strange? Provocative? Not only with respect to their aesthetic properties; there are instances where results created by NNs profoundly question standard design conventions, whether in the form of urban designs or, as in the present case, the planning of a house.

2 An excursion into estrangement and defamiliarization

As demonstrated in the previous sections, it might be a moot exercise to debate the creativity of AI. Much more interesting is the epistemological diagnosis of the results generated by NNs. Why they are so strange can be explained by the technical properties of curve fitting; in this section of my examination, however, I instead attempt to reveal the cultural signifiers present in the results. Here, the concept of estrangement comes into play. Introduced by the Russian formalist Viktor Shklovsky in 1917 in his famous article Art as Technique (Shklovsky, 1997), the concept's main characteristic consists of taking everyday objects or situations and inducing enough strange qualities into them to increase the attention of the observer. This is similar to the behavior of attentional neural networks (AttNNs), which allow attention-driven, multistage refinement for fine-grained text-to-image generation. Attention in neural networks mimics how humans can concentrate on particular aspects of their sensory input and blend out the rest around them (del Campo, 2021). The currently popular text-to-image algorithms, such as Disco Diffusion (developed by designer/developer/artist Maxwell Ingham, 2022) and Midjourney (developed by the research lab of the same name, founded by David Holz, 2022), can be considered milestones in the development of successful text-to-image applications. In the right hands, the result is a piece of art that is familiar enough to be recognized as a person, a car, an animal, or a building, but strange enough to evoke our attention.
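
For readers unfamiliar with the mechanism, the following sketch shows the generic scaled dot-product form of attention that underlies such networks. It is a didactic illustration in plain NumPy, not the specific attentional text-to-image architecture referenced above, and all dimensions and inputs are arbitrary placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Generic attention: each query attends to all keys and returns a
    weighted mix of the values; the weights show what the model 'focuses' on."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # similarity of queries and keys
    weights = softmax(scores, axis=-1)       # attention distribution (rows sum to 1)
    return weights @ V, weights

rng = np.random.default_rng(1)
# e.g. 4 word embeddings attending over 6 image-region features, both 8-dim.
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((6, 8))
V = rng.standard_normal((6, 8))
context, attn = scaled_dot_product_attention(Q, K, V)
print(context.shape, attn.shape)             # (4, 8) (4, 6)
```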

The method of estrangement was picked up by the genre-transforming German theater director Bertolt Brecht (Fig. 5), who was directly inspired by Viktor Shklovsky. Brecht, however, described it as the 'Verfremdungseffekt' (the effect of making strange); it is also referred to as the A-Effekt (Féral & Bermingham, 1987), the distanzierende Wirkung (Féral & Bermingham, 1987), or the V-Effekt (The Rivals and Epic Theatre: How Verfremdungseffekt Creates Satire of Sentimentalism, 2022; UKEssays, 2018), a central concept in the works of Brecht, as demonstrated in plays such as Mutter Courage (Brecht, 1898-1956), The Life of Galileo (Brecht & Willett, 1986), and The Good Person of Szechwan (Brecht & Willett, 1965). It involves applying techniques that allow the audience to distance themselves from participating in the play on an emotional level by reminding them of the artificiality of the theater performance. These techniques include subtitles or illustrations projected on screens and actors breaking through the fourth wall to deliver exposition, summarize events, or sing a song. In addition, they include abstract stage designs that do not depict particular locations but rather reveal the artificiality of the play by intentionally showcasing stage machinery such as lights and ropes - methods to control the audience's identification with the figures on stage and thus enhance the attention given to the reality reflected by the drama. The Verfremdungseffekt was not intended as a purely aesthetic technique but as a political mission. To that extent, it would be interesting to understand Neural Architecture's political imprint. Does it encapsulate the political dimension of the respective datasets? If I compile a dataset of Albert Speer's architecture, does an NN produce fascist architecture? This is highly questionable. If anything, I would rather take the position that labeling a dataset is a political act. Brecht was inspired by thinkers such as Hegel, Marx, and Shklovsky - in particular, by Shklovsky's concept of ostranenie (Emerson, 2005). Brecht's goal was to allow the audience of his plays to understand complex historical and societal developments through the abstraction provided by the Verfremdungseffekt (Brecht & Willett, 1964). To this end, Brecht gave the audience an active role in the play's staging. The unusual and strange stage effects force the audience to consider the artificiality of the performance and the part each object plays in actual events. In doing so, the audience is allowed to maintain an emotional distance from the problems in order to interrogate them rationally and intellectually (Britannica, 2020). What does all that have to do with architecture, you ask? Well, according to Katja Hogenboom (Hogenboom, 2014), estrangement and defamiliarization allow architecture to regain its role in society by emancipating itself and engaging in a new social commitment. In challenging the existing clichés of architecture and actively liberating opportunities for architectural progress, estrangement allows breaking through the conventions of the architectural status quo.

Fig. 5

Brecht's techniques for estrangement included abstract stage design, the projection of captions, actors breaking the fourth wall, exposed stage machinery (ropes, pulleys), and songs - all intended to encourage the audience to engage with the play intellectually instead of emotionally, exposing the artificiality of the play. Premiere of Brecht's musical The Threepenny Opera at the Theater am Schiffbauerdamm, Berlin, 1928 (image: British Library, London, UK)

The main argument in this article is that the results created by artificial neural networks (ANNs), whether in the form of GANs, CNNs, or other networks, fall into the category of estranged objects. However, it is certainly not sufficient for a building merely to have a strange appearance in order to provoke the necessary break with current architectural conventions. Take, for example, Frank Gehry's Guggenheim Museum in Bilbao (Fig. 6).

Fig. 6

The estrangement of Frank Gehry's Guggenheim Museum in Bilbao, Spain. Despite its strange appearance, it does not fall into the category of an Estranged Object because it does not question the status quo of, in this case, the program of a museum. On the contrary, it remains a self-referential spectacle in steel, glass and titanium. (© image Matias del Campo 2022)

Regardless of its status as an icon of a strange new form, it does not question the status quo of the museum program; it remains a self-referential spectacle in steel, glass, and titanium. It wraps a complex form around a conventional museum program that celebrates consumerism and musealization. Of course, using methods such as estrangement, subversion, reflexivity, the absurd, and similar techniques as a means to activate the spectator (or the user of architecture, for that matter) and provoke emancipation is not entirely novel. As laid out in the previous section, estrangement and defamiliarization are actively present in the works of G.W.F. Hegel, Karl Marx, Viktor Shklovsky, and Bertolt Brecht. Of course, a thorough interrogation of aspects of estrangement and defamiliarization cannot be complete without at least mentioning Sigmund Freud's essay 'Das Unheimliche' (The Uncanny) (Brewster, 2002). Freud defines the uncanny as deeply rooted in what is known to individuals as common or familiar. Deviations from the familiar - defamiliarizing aspects of life - result in emotional responses akin to fear and curiosity. In his essay, Freud demonstrates psychoanalytically why this is the case. This is more than just an exploration of a particular emotional response to estranged stimuli; it is the basis for Freud's theory of unconscious mental activity, which in turn forms a core part of psychoanalysis to this very day. In the context of artificial intelligence, discussing psychoanalysis is a massive undertaking that cannot be fairly elaborated in a short essay such as this one; however, the idea alone that the uncanny forms a major part of understanding unconscious mental activity might be an interesting area of exploration regarding the use of neural networks in the creative industries, and thus in architecture. Perhaps as a means to better understand our clients? Or as a basis for the interrogation of consciousness in general? A more recent author who has explored uncanny aspects in his written work is Graham Harman. His book Weird Realism: Lovecraft and Philosophy (Harman, 2012) makes a valiant effort to establish estrangement and defamiliarization as a valid category of aesthetic interrogation and uses H.P. Lovecraft's writings as a guiding principle. (Whether this elevates H.P. Lovecraft from pulp to high literature is an entirely different story altogether.) After this potpourri of thoughts demonstrating the intellectual tradition of exploring estrangement as a valid methodology, especially in theater and literature, let us circle back to its implications for architecture, especially in light of the application of AI. Of course, there is also a tradition of applying estrangement techniques in architecture. From the strange figures and monsters in the Sacro Bosco of Bomarzo, to Eisenman (who actually wrote about estrangement) (Eisenman, 1976), to Roland Snooks's strange tectonics (Snooks & Harper, 2020), there has always been a cohort in the architecture discipline interested in the innovative provocations made possible by a technique such as estrangement. To understand the ambition of neural architecture, and within the theoretical framework established by discussing the ability of estrangement to explain the phenomena observed when working with NNs in architectural design, I would like to offer a possible definition of what architecture on this plateau of thinking represents and how it differs from previous attempts to use estrangement as a design method (Young, 2015).
In discussing the effect of NNs on architecture, it quickly becomes clear that architecture is not an inanimate object but rather an animate object in constant transformation while being populated and gazed upon. Architecture is not a pragmatic reflection of its function; instead, it can be considered activated matter driven by an agile approach to information, behavior, and perception over time. The result is a material entity with aesthetic, organizational, programmatic, social, and cultural properties (Hogenboom, 2014). Estrangement in this frame of thinking not only constitutes an interesting novel aesthetic - it would not do it justice to describe it in such limited terms - but also offers an opportunity to mobilize, provoke, and install emancipating alternatives, or, as Katja Hogenboom puts it, 'situated freedoms' (Hogenboom, 2014) in complex conditions. The societal potential includes the interrogation of transformational micropolitics, with the potential for a renewed concept of private and public space through methods of estrangement that result in an architecture that projects novel forms of emancipation. Explaining what these emancipations entail in their entirety would be a longer project, so I will leave it at that for now. The Deep House project, presented in this article, is an attempt to use estrangement as a method to emancipate the design of a single-family house from a canonical approach and move it toward a progressive one. Of course, estrangement can only be achieved when the result maintains enough familiar features to be recognized as a specific object, in this case a common modern house design. In the case of the Deep House, the owner desired to experiment with the feature-recognition abilities of neural networks. SPAN, the practice run by Sandra Manninger and myself, has done similar experiments before, such as the Robot Garden for the University of Michigan or the Generali Center design for the Mariahilferstraße in Vienna, Austria (Fig. 7), which was based on a dataset of brutalist building facades. To be clear, this is not an attempt to imitate an existing architectural style. Heck, it might not be about style at all! It is rather an attempt to inform an algorithm, with the aid of a specific dataset, about the organizational features of masses and volumes. Once it learns these features, a deep learning approach is utilized in a generative role to produce models that respond to fitness criteria such as volumetric proportions, daylight diffusion, visual balance, and organizational properties.

Fig. 7

Generali Center, Vienna, Austria, SPAN 2021

As in the example of the Generali Center, the Deep House project uses a pixel projection technique to fold the two-dimensional information present in the images resulting from a latent walk into three-dimensional space. The caveat of this method is that information gets lost every time matter is folded into two-dimensional space and back to 3D; the multiple foldings in this project mean that a large amount of information is lost. At the same time, that lack of information might help in generating surprising results in the StyleGAN2 process. The pixel projection consists of three planes (x, y, z) used as the basis for three alpha-channel images containing two facades and a plan resulting from the latent walks (Fig. 8). The datasets behind these latent walks consist of a midcentury modern house facade dataset and the respective plans. Adding a subdivision and displacement modifier allows the projections of the alpha images to intersect in the center of the XYZ cube, and a boolean intersection modifier limits the results to the boundaries of the alphas. By adding a remesh and Laplacian smoothing algorithm, it is possible to control whether the results are 'soft' or 'hard'. For the Deep House, we opted for a relatively coarse remeshing of the resulting voxels, which partially explains the strictly rectilinear nature of the design.
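
The geometric core of this operation can be sketched in a few lines of Python. The sketch below only illustrates the boolean intersection of three extruded binary alpha images into a voxel volume; the actual workflow relied on subdivision, displacement, boolean, and remesh modifiers in a 3D modeling package, and the random masks here merely stand in for the facade and plan images selected from the latent walks.

```python
import numpy as np

# Schematic sketch of the pixel-projection idea: three binary alpha images
# (two facades and a plan) are each extruded along one axis of a cube and
# intersected, keeping only the voxels supported by all three images.
N = 64
rng = np.random.default_rng(2)
facade_south = rng.random((N, N)) > 0.4      # stand-ins for the alpha images
facade_east  = rng.random((N, N)) > 0.4
plan         = rng.random((N, N)) > 0.4

# Broadcast each 2D mask through the axis along which it is projected.
vol_south = facade_south[:, np.newaxis, :]
vol_east  = facade_east[np.newaxis, :, :]
vol_plan  = plan[:, :, np.newaxis]

# The boolean intersection yields the raw voxel volume that a remesh and
# smoothing step would then turn into a usable surface.
voxels = vol_south & vol_east & vol_plan     # shape (N, N, N)
print(voxels.shape, int(voxels.sum()), "solid voxels")
```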

Fig. 8

Concept of Pixel Projection to create the model of the house

3 Deep learning and the Verfremdungseffekt - the Deep House as an imprint of modern data points

The Deep House is named after one of the core techniques used in its design: deep learning. As laid out at the beginning, this article touches on three terms: artificial intelligence (AI), machine learning (ML), and deep learning (DL). DL is a subcategory of machine learning and can be described as a neural network (NN) with three or more layers. Such networks are loosely based on processes in the human mind that we think we understand and can therefore encapsulate in algorithms applied in neural networks. To this extent, NNs attempt to simulate aspects of human brain behavior, allowing them to 'learn' from large amounts of data. However, NNs still cannot match human abilities in certain areas, whereas in others, such as object detection and text recognition, they have already reached superhuman levels. Deep learning can benefit immensely from adding more hidden layers to the NN, thus improving and optimizing the accuracy of the result. Deep learning drives a large number of AI applications and services that we encounter daily, such as fraud detection, handwriting recognition, voice-enabled commands, and self-driving cars. Automation of physical tasks without human intervention (think of the robots in Amazon warehouses) and of analytical tasks without human assistance is becoming increasingly common. So how are machine learning and deep learning different? The main difference lies in the type of data each relies on to learn; in this sense, deep learning can be considered a distinct approach within machine learning. Classical machine learning algorithms depend on feature engineering and structured (labeled) data in order to make predictions. This means that the input data for the model are organized into tables; specific features are defined through the labeling process or some other form of preprocessing that organizes the data into a structured format (Fig. 9). This does not mean that machine learning is a purely supervised method; it can be supervised or unsupervised, but it needs structured data.
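
As a minimal, purely illustrative sketch of the 'three or more layers' idea, the following Python snippet (using PyTorch, which is assumed here and was not necessarily the toolkit used in the project) defines a small network with three hidden layers. Unlike a classical ML model fed hand-engineered tabular features, such a network learns its own intermediate features layer by layer from raw, flattened image inputs.

```python
import torch
from torch import nn

# A small fully connected network with three hidden layers - the minimal
# 'deep' configuration described above. All sizes are arbitrary.
model = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.ReLU(),   # hidden layer 1
    nn.Linear(256, 128), nn.ReLU(),       # hidden layer 2
    nn.Linear(128, 64), nn.ReLU(),        # hidden layer 3
    nn.Linear(64, 10),                    # output, e.g. 10 hypothetical classes
)

x = torch.randn(8, 64 * 64)               # a batch of 8 flattened 64x64 images
logits = model(x)
print(logits.shape)                       # torch.Size([8, 10])
```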

Fig. 9

Results achieved using StyleGAN 2 based on two distinct datasets: a dataset of facades of modern dwellings and the respective plan dataset. The training on these datasets was intentionally cut short in order to avoid realistic versions and instead achieve estrangement effects, shying away from perfect curve fitting

I would like to finish this short excursion into the technicalities of machine learning and deep learning by briefly explaining the three ways in which ML and DL models can learn: supervised, unsupervised, and reinforcement learning. DL algorithms can learn which features are the most critical for distinguishing buildings from each other, for example windows, bell towers, pylons, etc. Using gradient descent and backpropagation, a DL algorithm fine-tunes its parameters to achieve higher accuracy, thus augmenting its capability to classify a picture of a building with improved accuracy. Labeled datasets are commonly used in supervised learning processes; these allow, for example, categorization and prediction tasks, and require human intervention in the form of labeling. In contrast, unsupervised learning does not rely on labels but instead detects patterns in the data and interrogates them for characteristics that allow the NN to cluster the data. In reinforcement learning, models learn to improve their accuracy by performing actions in an environment and receiving feedback, thus maximizing a reward.
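
To make gradient descent and backpropagation tangible, the following sketch (again in PyTorch, under the same assumption as above) runs a miniature supervised training loop in which the loss is pushed down step by step; the data and labels are random placeholders, and only the optimization mechanics matter here.

```python
import torch
from torch import nn

# Minimal illustration of gradient descent with backpropagation on a tiny
# supervised model. Data, labels, and sizes are arbitrary placeholders.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 16)                  # 64 labeled samples, 16 features each
y = torch.randint(0, 3, (64,))           # 3 hypothetical classes

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)          # forward pass
    loss.backward()                      # backpropagation: compute gradients
    optimizer.step()                     # gradient descent: update the weights
    if step % 25 == 0:
        print(step, float(loss))         # the loss decreases over the steps
```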

4 Conclusions - the Deep House as an example of neural architecture

In the previous sections, I attempted to interrogate the ontology of the Deep House from two distinct directions: the cultural framework around aspects of estrangement and defamiliarization, and the technical framework that allows those results to be introduced into the design process. I now want to venture into the epistemological examination of the results achieved by a deep learning process trained on two modern datasets (Fig. 9).

The model was trained with StyleGAN 2 on two specific datasets: a modern facade dataset and a plan dataset. I have already touched upon the problem of curve fitting. In general, deep learning aims to achieve a robust fit, comparable, for example, to applying a polynomial regression (Todorovski et al., 2004). An example that I like to use to demonstrate this ability is the website 'This Person Does Not Exist' (machine-learning-articles/this-person-does-not-exist-how-does-it-work.md, 2022), which wows its audience with the ability to create photorealistic, convincing images of people who, well, do not exist. This is probably the most boring application for such an exciting algorithm in a creative context. Artists such as Mario Klingemann and Sofia Crespo have demonstrated that shying away from perfect curve fitting creates the uncanny, estranged, and defamiliarized results discussed in the previous section.
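
The contrast between a robust fit and a deliberately excessive one can be illustrated with a toy polynomial regression in Python; the data below are synthetic and only stand in for the general phenomenon of over- versus well-fitted curves.

```python
import numpy as np

# The same noisy data fitted with a low-degree and a very high-degree
# polynomial. The high-degree fit hugs the training points (overfitting) and
# behaves erratically between and beyond them - the kind of deliberate
# departure from a 'robust' fit exploited in the Deep House.
rng = np.random.default_rng(3)
x = np.linspace(0, 1, 12)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.15, x.size)   # noisy observations

robust_fit = np.polyfit(x, y, deg=3)      # reasonable, generalizing fit
overfit    = np.polyfit(x, y, deg=9)      # chases the noise in the samples

print(np.polyval(robust_fit, 0.5), np.polyval(overfit, 0.5))
```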

The Deep House was designed using the StyleGAN 2 algorithm against the grain. The goal was not to achieve a robust curve fit but to challenge the algorithm through intentional overfitting and data scarcity. However, as the final design shows, the algorithm was still able to learn crucial features of modern domestic design, such as proportions, the subdivision of blocks, the use of large open glass surfaces, the open plan interrupted by closed blocks, a lack of columns, intimate nooks and poches, programmed walls, etc., and to implement them in the final result (Nelson & Wright, 1945). Of course, it is also a question of how long the network is trained, the weights used, and the quality of the dataset.

Perhaps a forensic examination would be a good starting point to explore the epistemology of this design. On first impression, the house appears as a flat box that cantilevers over a hillside.

The house's silhouette is straightforward; its proportions, at 6.5 × 6.5 × 1, are similar to those of the human body. A terrace (A) with an embedded swimming pool (B) cantilevers precariously over the steep part of the site (Fig. 10). The pool penetrates the interior of the house through the southern glass façade (C). The terrace and the box are subdivided into various rectangular sections, with distinct joints accentuating the division between the panels. Some of the corners of the house feature irregular crevasses and clefts (D - Fig. 11), strange remnants of the deep learning process. The exploded axonometric reveals the complex interior relationships.

Fig. 10

Deep House, Axonometric view, SPAN 2022

Fig. 11

Corners of the house feature irregular crevasses and clefts, SPAN 2022

What at first glance appears to be a simple flat box reveals an intricate relationship between interior and exterior and a well-balanced relationship between the volumes in the interior. The center of the house is defined by an ample open space that cuts through the entire north-south axis of the building (E). This north-south axis intersects an east-west axis at the center of the house (F - Fig. 12), dividing the house into nine territories. Four distinct volumes flank this open space. The sides of the volumes facing the central axes serve specific domestic purposes: the protruding and recessed volumes are activated and used as a kitchen (G), dining table (H), long bench (I), fireplace (J), bookshelves (K), trophy consoles (L), coffee table (M), etc. (Fig. 11). The floor itself is not just a flat plane; some areas recess into the floor, creating sofa pits, reading nooks, and private, intimate retreat zones for the client and his family (N).

Fig. 12

The transept in the middle of the house, looking towards the South Eastern corner - trophy consoles are visible in the background

The east side of the volume is the home of the master bedroom (Fig. 13) and two children’s bedrooms. Each of them has its own bathroom. The elements of the bedroom and bathroom are defined by figures resulting from the deep learning process. (Though we are still discussing with the client whether we should attempt to run a deep learning process for each of the functions to see if even better results are possible) (Fig. 14).

Fig. 13

The master bedroom

Fig. 14

DeepHouse, exploded annotated axonometric view, SPAN 2022

The west side of the volume houses the study and library of our client; the fireplace (Fig. 15) cuts through the wall and serves both the living room and the library. Below the rectangular box that houses the living room, sleeping quarters, and study, a lower level is carved into the mountainside. The car garage, service rooms, laundry, and storage are organized along the periphery of this negative volume. The center of the volume is dominated by the wine tasting room, which opens up to a glass façade tucked under the cantilevering terrace above, providing ample shade in summer (O). Apart from being a neuroscientist, the client owns a winery and enjoys hosting friends and family, hence the large wine tasting room. The cavernous space that resulted from the deep learning process presented itself as an opportunity to place the wine tasting room in the cellar of the house; its openness to the landscape, however, avoids claustrophobia. The mass and heaviness of the concrete and stone grotto are combined with an oversized glass slab that frames the landscape and separates interior from exterior. The coffered ceiling hovers over everything, and thin glass stripes in the ceiling divide the large slab into smaller units.

Everything described above sounds somewhat familiar and could also be the description of a very conventional house. So what did we learn from applying this novel design method to the problem of the common domestic space in the form of the house? How deep did we go using a deep learning process? When we presented this project to our client, a conversation ensued about defamiliarization, estrangement, and the problem of the new. The client was surprised that the modern features of the project are still dominant, albeit in a very strange way. In many places in the house, one finds elements and features that would have been sacrilege for a modern architect; however, their strange presence (with regard to both their functional and non-functional nature) makes this house interesting. Thus, the epistemology of this project can be read as a position that embraces estrangement as a method to engage with the built environment in an emancipatory way - an emancipation of the architecture from prescribed functions; it will find its function (Hollein, 1968). In this context, architecture is not an inanimate object but presents itself as an animate object in continuous transformation in the process of being used and looked at. Architecture in this frame of thinking does not adhere to pragmatic dogmas that reflect its function. Instead, it is considered activated matter, driven by a dynamic attitude towards information, behavior, and perception over time, resulting in a material entity that possesses aesthetic, organizational, programmatic, social, and cultural properties. The Deep House project is an attempt to use estrangement as a method to emancipate the design of a single-family house from a canonical approach. In a recent email, our client stated that the experiment worked; the house is indeed different.

Fig. 15

South-west corner of the Deep House with the fireplace and pool

The goal of defamiliarization and estrangement is not to create something entirely alien or even 'new.' As described at the beginning of this article, human cognition has a hard time recognizing anything that is truly 'new'; it lacks the data points to construct a referential system for such a 'new' object. A good example of the successful application of defamiliarization can be found in many of H.P. Lovecraft's works (Harman, 2012), where something familiar is changed only just enough to evoke the uncanny: the voice of a beloved family member turns monstrous, the color of a familiar landscape shifts to the improbable, a sailor is swallowed by a corner of a familiar house. Another case of estrangement lifted from literature is House of Leaves (Danielewski et al., 2000) by Mark Z. Danielewski. The book's main character is a common house that seemingly expands enormously from the inside, opening ever new spaces whilst maintaining its exterior shape. As laid out in the section on defamiliarization in the theater works of Bertolt Brecht, abstraction is utilized to evoke estrangement, allowing the audience to distance themselves from participating in the play on an emotional level. To that extent, the Deep House project maintains familiar features of the modern house while introducing abstract and estranged artifacts: strangely eroded corners, a chopped-up roof, and an interior that is entirely generated by the facade condition. The abstraction of the house is further emphasized by the lack of furniture, artworks, or any other details. This allows architecture to regain its role as a projection surface for intellectual endeavor by emancipating itself and engaging in a new social commitment. In challenging the visual clichés of advanced architecture and actively liberating opportunities for architectural progress, estrangement allows breaking through the conventions of the architectural status quo. The estrangement is not only present in the resulting house but is also embedded in the design method itself, which reverses the common workflow of architectural design: instead of iteratively approximating the final result through a series of sketches, models, and plans, it takes a quasi-finished representation of a house (selected from a latent walk through a modern dataset) and generatively extracts its three-dimensional features to create a 3D model. The inherently familiar but estranged morphology and functionality of the Deep House means that the result clearly emancipates itself from clichés by embracing the ability of deep learning processes to extract familiar features from an architecture dataset whilst defamiliarizing them enough to evoke attention in the observer. Strange, but familiar enough (del Campo & Manninger, 2022).