Introduction

In recent decades, academic research has investigated the complex and vague design process (Horvath, 2004) and devised several common models applicable to general contexts (Ulrich & Eppinger, 2008; Design Council, 2007). While these have been adopted in many engineering and industrial design classrooms, there has been persistent awareness of the lack of standardized models for teaching students in architectural design studios (Hassan et al., 2010; Hong & Lee, 2018; van Dooren et al., 2014). This seems to originate from the fact that, as instructors, architects have as many different educational perspectives and approaches as they have design styles (van Dooren et al., 2018).

In contrast to the stalled state of investigation into architectural design education, emerging technologies have disrupted every inch of design practice. Not only have computer-assisted methods, such as parametric design, building information modeling, and digital fabrication, pushed the boundaries of aesthetic expression, but recent advances in artificial intelligence also forecast even more radical changes (Chen et al., 2019; Hebron, 2016; Matejka et al., 2018). With its capability to extensively and objectively analyze and generate design alternatives, a system with an unprecedented level of intelligence has great potential to weaken the status of studio instructors as the primary source of professional knowledge built upon the master–apprentice model. Here, we refer to such a system as “intelligent design assistance”—a term used to describe software with reasoning capability (Heitbreder et al., 1997; Mejasson et al., 2001)—to emphasize its flexible and smart nature in understanding design context and guiding designers’ intentions, in comparison to traditional Computer-Aided Design systems.

Despite its significance and urgency, driven by the surge of intelligent design tools (Chaillou, 2020; Newton, 2019) and the need for remote learning (Fleischmann, 2020), there have been relatively few studies on the impact of intelligent design assistance and its pedagogical implications in architectural design research. One exception is simulation-based design, a methodology in which simulation is the primary means of design evaluation and verification (Shephard et al., 2004). One example is zero-energy building design, where computer-generated knowledge assists decision-making in designing energy-efficient buildings (Goldstein & Khan, 2017). The inherent challenge is that design choices in the early stages, despite being incomplete and uninformed by nature, have the highest impact on final performance (Attia et al., 2012). Another example of simulation-based design is Agent-Based Modeling (ABM), which has demonstrated effectiveness in predicting human behaviors such as daily work (Schaumann et al., 2015) and evacuation (Hong & Lee, 2018). ABM in its current form requires more studies for its seamless integration into the current design process model (Hong et al., 2016). Presently, the majority of simulation-based design studies seem to focus on devising technical solutions or securing scientific rigor (Attia et al., 2012; Shi & Yang, 2013).

To further investigate the opportunities and conflicts that highly intelligent design tools may offer to design education, we start from an educational environment distinctive to architectural design: the instructors themselves are designers with widely divergent teaching methods, and students are expected to defend and nurture their own concepts (van Dooren et al., 2018) when solving problems that are highly technical and deeply aesthetic at the same time (Schön, 1988). We then formulated the research question by observing architecture students’ responses to an ABM tool from three perspectives: (1) how students utilize an assistance tool and whether we can group their behavior (Hong & Lee, 2018), (2) how the new simulation-aided design process differs from a traditional one, particularly in terms of the evolution of solutions over iterations (Shi & Yang, 2013), and (3) whether students’ behavior is affected by the type of design problem given, i.e., concept-driven versus performance-driven. The following section reviews the literature on these questions in more detail.

Background

Computer simulation in design

Most design challenges are wicked in that neither their problems nor their solutions can be easily formulated or evaluated (Buchanan, 1992). Design problems have numerous variables both within and outside the designers’ control, and there is no systematic way to exhaustively enumerate or precisely model their complex interactions. Moreover, experimentation in the real world, at least in large-scale architectural design, is unrealistic and leaves permanent side effects (Rittel & Webber, 1973). Architects in this sense are often required to turn to their imagination, guided by their own experience or external tools (Rittel, 1971).

With its ability to iteratively calculate interactions between design variables, a simulation not only estimates the performance of a design plan but also helps the designer understand the impact of relevant design elements on an intended goal (Kalay, 2004). In architectural design, simulation is becoming a more integral part of the design process as growing computing power is met with ever-increasing demands for higher standards, including realizing historically intractable design through fabrication technology; understanding how occupants recognize, experience, and respond to their environment; and minimizing energy consumption so that the net exchange value is zero (Attia et al., 2012). These are the concerns of both architects and engineers, who use different measures but have the common goal of enhancing the quality of the built environment while simultaneously meeting other conditions, such as cost and sustainability (Rittel & Webber, 1973).

As an experimental tool, we selected ABM software that simulates occupants’ circulation within a given space. Occupant behavior and weather are the two most stochastic variables in architectural design and have long been targets of commercial simulation tools. Not only is the movement of a crowd itself of great interest in designing large spaces, but it also has great significance in building energy modeling, where occupants are mostly treated as a static variable (Goldstein & Khan, 2017). Thanks to its direct relevance to space layout design, ABM has also been used in academic research investigating the impact of CAD tools on students’ learning (Hong et al., 2016).

Simulation-based design education

The recognition of the importance of software literacy in architectural education dates to the early 1970s. Rittel (1971) listed software skills as an essential part of the curriculum for architectural education, along with manual dexterity, a critical eye, factual knowledge, and problem-solving skills. The choice of such skills seems even more appropriate today, as the growing capabilities of software simulations make it possible to model reality ever more accurately. Nevertheless, there seem to be few academic studies on simulation education in architectural design.

Several early studies on the educational effects of simulation focused on promoting creativity (Hassan et al., 2010; Lawson, 2002; Michael, 2001), but their results are somewhat mixed and their implications are limited to the effects of representing reality through 3D computer-generated images. In a more recent study, Hong et al. (2016) observed how students use an ABM tool to facilitate iterations of their design process. They found that iteration occurs not only in generating solutions but also in setting new goals (e.g., new spatial requirements such as coziness, privacy, and intimacy), which was attributable to the explicit visualization of human behaviors and environmental conditions. Hong and Lee (2018) went further and measured quantitative differences to understand the role of fire egress simulation in decision-making. All indicators—including problem discovery, confidence level, ease of access, and efficiency—improved thanks to explicit, observable representation and the iterable nature of simulation, which led to increased trust in evidence-based decisions.

In contrast to the scarcity of simulation research in architectural design education, there have been extensive studies on the effects of simulation in science education. Rutten et al. (2012) divided related studies into four categories: (1) comparison between traditional instruction and computer simulation; (2) visualization, including representation and level of immersion; (3) different types of instructional support; and (4) classroom settings and lesson scenarios. One prominent factor that distinguishes design education from science or engineering education is the dominant role of the instructor and the difficulty of evaluating students’ performance. As a conveyor of tacit knowledge, the instructor of a studio classroom has the exclusive right to choose pedagogical strategies for problems with no single optimal solution. In the present study, we particularly focused on the alternatives that a simulation tool may bring, and compared students’ learning patterns and their outcomes when given feedback from either an instructor or ABM software.

Conceptual versus performance-based design

We may anticipate that a simulation facilitates iterations of design improvements and generates a better outcome by avoiding fixation (Hong & Lee, 2018; Hong et al., 2016): the state of being unable to move forward to creating a broader range of ideas by being obsessed with the existing ones (Jansson & Smith, 1991). However, such causality can be less applicable when expanding the scope of the problem to a more general context. Problems in architecture are hybrid in that they involve both artistry and scientific knowledge (Schön, 1988). The former handles aesthetic and conceptual issues early in the design process, and the latter manages structural, environmental, and constructional requirements for the given spatial layout. The dichotomy is not always clear, though; a concept of openness, for example, is implemented through layers of measurables (e.g., field of view, circulation, and the size and location of walls and windows), and conversely, engineering subjects (e.g., structure, illumination, and acoustics) can also be part of the overall design concept (Koile, 2004). Nevertheless, the classification into architectural design and engineering is reflected in academic organization, vocational qualification, and even in a staged design process with a discontinuity when architects deliver completed design concepts to engineers (Shi & Yang, 2013). This practice can be a source of substantial engineering cost, as aesthetic elements have a crucial impact on engineering performance. As a solution, it has been proposed that architects expand their expertise to include knowledge of the engineering aspects of designing buildings (Hong et al., 2016).

However, the process of reconciling design and engineering is not straightforward (Attia et al., 2012); there are numerous challenges for architects to overcome the knowledge barrier and truly integrate both aspects of architectural design. First and foremost, the low fidelity of prototype solutions in the early stage of the design process leads to high variance in estimates of a building’s engineering performance. Proactive exploration of design is particularly difficult in the presence of conflicting requirements (e.g., cost vs. energy saved) (Østergård et al., 2016). Also, delayed evaluation due to computation time may reduce the number of candidate solutions and thus the final quality (Goldstein & Khan, 2017). Design plans also need to be represented in multiple formats for smooth communication between the participating entities (Shephard et al., 2004), and a usable interface is essential for accessibility (Attia et al., 2012; Shi & Yang, 2013).

There have been extensive efforts to address these issues, mostly with respect to building energy simulation: CAD tools that quickly create design alternatives, stochastic approaches to handle uncertainties in early design, new optimization algorithms, and enhanced interoperability across different software (Østergård et al., 2016). There have been far fewer such attempts from the perspective of an architectural design studio, where the form and function of the space are the primary design parameters. Hong et al. (2016) found that in a studio class environment, while simulation stimulated students to manipulate quantifiable measures such as geometric or physical configurations (Tabak et al., 2010), it also inspired them to reflect on psychological and social goals. In our study, we observed how students behave differently based on the nature of the given problem: whether the task is based solely on performance or requires a holistic solution satisfying both conceptual and performance-related issues. This setup allows us to investigate the new role of students as integrators of the aesthetic and engineering aspects of design.

Method

For the experiment, we recruited students from an architectural department who performed a design task twice: one with a human instructor and the other with a simulation tool. We analyzed their design processes and final outcomes.

Participants

In order to compare students’ usage of simulation tools in different contexts, we openly recruited students from a university’s architecture department, which offers a five-year bachelor’s degree and a two-year master’s degree. The participants included eleven undergraduate students (one junior and ten fifth-years) and one graduate student with professional experience, mostly internships at architectural design firms. The selection criterion was to match the simulation curriculum’s tentative target: those who had completed mid-to-advanced-level studio courses. The sample size followed the convention of protocol analysis in design research, in which experiments of up to two hours recruit up to nine subjects (Jiang & Yen, 2009).

Apart from the inherent limitation of protocol analysis on representativeness, restricting recruitment to a single university could limit the results to similar teaching and learning cultures. However, the department’s faculty included graduates from all over the world, and the homogeneity in pedagogical style among the participants allowed their behavior toward the simulation to be compared.

Tasks

After signing the consent form, students were asked to redesign the 21st Century Museum of Contemporary Art at Kanazawa by Japanese architects Kazuyo Sejima and Ryue Nishizawa (Fig. 1). This subject was chosen because of the complexity of the museum’s spatial planning and its familiarity to architecture students. Participants were split into two groups with different tasks: one to design a spatial layout maximizing smooth crowd circulation, and the other to create a spatial layout with a unique design concept in addition to the smooth circulation requirement (“Appendix”). For the first group, the goal was to spread three types of pedestrian flow—visits to exhibition galleries, libraries, and lecture halls—as uniformly as possible throughout the entire space. These students were given a circular boundary within which a fixed number of rooms with respective functions and sizes had been placed. For the second group, the goal was to create a rich spatial experience in addition to achieving smooth circulation. While the types and number of rooms provided to the concept-driven group were identical to those for the performance-driven group, their rectangular base boundary was intended to afford more flexibility (Fig. 2).
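The brief stated the uniformity goal only qualitatively. Purely as a hypothetical illustration—not part of the study’s method—such a goal could be scored by the coefficient of variation of visit counts over a simulated heat map (the function name and the choice of metric are our assumptions):

```python
from statistics import mean, pstdev

def heatmap_uniformity(heat):
    """Coefficient of variation of per-cell visit counts.

    0 means a perfectly even crowd distribution; larger values mean
    concentrated hot spots. A hypothetical metric for the task's
    'as uniformly as possible' goal, not one used in the study.
    """
    cells = [v for row in heat for v in row]
    m = mean(cells)
    return pstdev(cells) / m if m else 0.0
```

An evenly visited plan (all cells equal) scores 0, while a plan funneling all traffic through a few cells scores near or above 1.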

Fig. 1
figure 1

21st Century Museum of Contemporary Art at Kanazawa. Exterior view (left, http://open-imagedata.city.kanazawa.ishikawa.jp/) and floor plan (right, https://www.kanazawa21.jp/en/12press/pdf/0925PressRelease9.pdf)

Fig. 2
figure 2

Base plan and spatial requirements for the performance-driven task (top) and the concept-driven task (middle). A provided example of crowd flow (bottom)

Within each group, students repeated the task twice, once with a human instructor and once with a simulation tool. In the traditional instructor-led environment, students used tools of their own choice. All students were required to consult with the instructor at least twice (a half-time progress check and a final presentation) in addition to any short discussions between them. The guidance was confined to the development of the students’ own ideas. In the simulation-assisted environment, students familiarized themselves with the ABM software during an hour-long workshop. The base boundary was mirrored between the two repeated tasks so that the shifted entrances would prevent students from recreating similar solutions from short-term memory (Jin & Lee, 2019).

In total, there were four different experimental settings from two categories. The first category allowed for the comparison of behavioral differences between concept- and performance-driven design problems, and the second compared simulation- and instructor-assisted environments (Table 1). Participants were divided such that the comparison between education environments was within a single subject and the comparison between problem types was between different subjects. This was for a more rigorous comparison between educational environments, considering that comparison between problem types is more exploratory given the present literature.

Table 1 2 × 2 experiment design by educational environment and problem type

Tool

The adopted ABM tool was Anylogic® (www.anylogic.com), which implements a simulation of crowd movement within a built environment. Crowd flow is essentially controlled by placing walls, along which agents travelled from the designated initial position to the final one with optional stopovers. The types of stops include a region with an arbitrary shape or a spot with a waiting line (e.g., a ticket booth). The distribution of the crowd over time is visualized by a cumulative heat map where red areas indicate highly concentrated regions. Anylogic’s user interface provides intuitive ways to create spaces surrounded by walls and to control crowd flow by adjusting size, timing, and destinations.
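Anylogic’s pedestrian library is proprietary, so its API is not reproduced here. Purely as an illustrative sketch of the mechanics described above—agents travelling from an origin toward a destination while their visited cells accumulate into a heat map—a toy grid-based analogue (all names are hypothetical) might look like:

```python
def step_toward(pos, goal):
    """Move one cell toward the goal (one step in x and/or y per tick)."""
    x, y = pos
    gx, gy = goal
    return (x + (gx > x) - (gx < x), y + (gy > y) - (gy < y))

def simulate_crowd(width, height, agents, steps):
    """agents: list of {'pos': (x, y), 'goal': (x, y)} dicts, mutated in place.

    Returns a cumulative visit-count heat map, the analogue of a pedestrian
    density view where heavily visited cells would render red.
    """
    heat = [[0] * width for _ in range(height)]
    for _ in range(steps):
        for a in agents:
            if a['pos'] != a['goal']:
                a['pos'] = step_toward(a['pos'], a['goal'])
            x, y = a['pos']
            heat[y][x] += 1  # agents waiting at their goal still add density
    return heat
```

A real tool additionally handles wall avoidance, collisions, arrival schedules, and stopovers; this sketch only conveys how repeated agent updates yield a cumulative density map.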

The crowd flow task came in the form of a sequence of rooms to be visited by a certain number of agents. After reviewing the size and type of the rooms of the actual museum, we came up with three scenarios: visitors visit either (1) permanent and special galleries, (2) large and small lecture halls, or (3) art and music libraries. They were also scheduled to stop by a café during their stay. While visitors entered the museum at random intervals, we specified the duration of time spent in the lecture halls and galleries.

Measurement

Before performing the task, participants were given a questionnaire on their educational background and level of experience to confirm they met the selection criteria. During the task, both on-screen and off-screen activities were video-recorded to capture any design-related behaviors. Afterward, semi-structured interviews of the participants were conducted, asking about the motives behind any peculiar actions and their impressions of the simulation tools used in the task. Questions included (1) previous experience with simulation tools, (2) whether and how the simulation tool affected their design process and outcome, (3) their willingness to use the simulation tool again, (4) the impact of the instructor’s feedback, and (5) any technical difficulties or failed attempts. The intention was to reveal the potential impact of computer literacy on the students’ attitudes toward simulation tools, in addition to supporting evidence such as the collected design processes and outcomes.

The recorded video was transformed into labelled actions for protocol analysis. The action categories were based on those proposed by Bilda and Demirkan (2003). We removed categories for the definition of space, spatial relations, and 3D view, since we predefined the sizes and functions of two-dimensional rooms. We added categories specific to our tasks, such as hand sketching and technical issues (Table 2). Each participant’s data was segmented and coded by three researchers independently using ANVIL, a free video annotation research tool (www.anvil-software.org). Coding was repeated until Cohen’s kappa, an inter-rater reliability measure produced by ANVIL’s agreement analysis, was above the threshold of 0.7.
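Cohen’s kappa corrects raw inter-rater agreement for the agreement expected by chance, given each coder’s label frequencies. ANVIL computes this internally; as a minimal sketch, a two-coder version (an illustrative reimplementation, not ANVIL’s code) is:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' segment labels."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: probability that both coders pick the same label
    # independently, given their marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

A value of 0.7, the study’s threshold, indicates substantial agreement well above chance; 1.0 is perfect agreement.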

Table 2 Codes and their definitions for protocol analysis

Results

After the pilot test, we removed restrictions on the overall design time and spatial dimensions, which participants found too limiting. During the main experiment, one out of the twelve participants gave up during the second experiment.

Individual

The first research question was to identify individuals’ design patterns when using an ABM simulation tool. As a basis for observation, we collected responses to survey questions in four categories—motivation, satisfaction, effectiveness, and comparison against a traditional approach (Dzeng & Wang, 2017; “Appendix”). The highest value on a 5-point Likert scale indicated the most favorable attitude towards simulation (Fig. 3).

Fig. 3
figure 3

Eleven participants, labeled as p1–p11, had favorable attitudes toward the ABM tool in the order of effectiveness (0.94), motivation (0.91), comparison against a traditional approach (0.90), and satisfaction (0.87)

As the design process progresses, the architectural layout is built and changed. Our main concern is the evolution of intermediate outcomes following simulation runs. Based on how participants explored and modified the solution, we could cluster them into three groups: (1) comprehensive changes to the overall solution (p1, 6, 9, 10, 11), (2) changes confined to local regions (p2, 3, 4), and (3) few explorations of alternatives (p5, 7, 8). The first group included participants who had low mental barriers to using software features, although to varying degrees (Fig. 4a–c). The searches performed by participants in this group were usually guided by simulation results, but also included a case that experimented with manual shuffling (p6). The second group also used simulation throughout the design process, but the ensuing changes were less frequent and more restricted to the most concentrated areas, particularly lecture halls (Fig. 4d, e). The last group was characterized by a monotonous advance toward the final solution. In this group, the only simulation run was a sanity check of the completed solution (Fig. 4f, g).

Fig. 4
figure 4

Design process of the participants

The frequency and distribution of simulation runs testing pedestrian flow throughout the design process were consistent with the qualitative analysis of the design process. The average number of simulation runs was 8.8 (group 1), 5.7 (group 2), and 2.7 (group 3). Students ran simulations starting at the beginning of the process (p10), on an initial but fully-deployed solution (p1, 4, 9, 11), or on a near-complete solution (p2, 3, 5, 6, 7, 8). As for the Likert scores representing the favorability of the simulation (Fig. 3), all participants in group 1 (except p6) gave higher scores than any of the participants in the other two groups, while participants from groups 2 and 3 showed somewhat mixed results.

Simulation versus traditional environment

Figure 5 compares the distribution of design actions over time between a traditional environment with an instructor’s feedback (top) and a simulation environment (bottom). Each row indicates a code in Table 2, and color-coded segments allow one to see specific actions by individual participants. Figure 5 shows three major trends differentiating the two environments. First, the insertion and adjustment of spaces (rows with yellow boxes) are faster, more frequent, and start earlier in the simulation—most likely due to the more intuitive manipulation of space in Anylogic® than in Illustrator, Rhino, Grasshopper, or AutoCAD in the traditional environment. Second, handling walls and doors (green boxes) was less concentrated near the end of the process and instead more evenly spread throughout the design process in the simulation than in the traditional environment. This most likely originates from how the existence of walls and doors is clearly brought to the user’s attention when running a simulation. Lastly, simulations were executed far more often than consultations with the instructor (blue boxes). Noticeably, several participants constantly referred to the existing plan and replicated some parts in their own plans prior to the instructor’s feedback (red boxes), which was not seen in the simulation environment.

Fig. 5
figure 5

Protocol analysis of the design process in a traditional (top) versus simulation (bottom) environment

The interviews revealed a more detailed spectrum of opinions regarding simulation. The least favorable opinion was that the results of a simulation were not worth the trouble of reverting a working solution. Participants in this group executed a single simulation run at the end and finalized their solution despite unseen shortcuts or bottlenecks (p5, 7, 8). The next critical group questioned the necessity or feasibility of considering simulation in the middle of the design process. They anticipated a substantial increase in complexity and suggested that a simulation should come strictly after the completion of the concept design (p1, 3, 4). Participants in favor of simulation told the interviewers that using a simulation was an enlightening or even a liberating experience, but still found its integration into their design process a daunting task (p2, 9). The most enthusiastic advocate group used simulation as primary guidance for such tasks as adjusting corridors or understanding the ticketing procedure. They accepted the simulation as a test of the architect’s imaginary intention against reality (p10, 11). There was also a single outlier who never ran a simulation and completely rejected the task’s assumption that museum space should provide smooth circulation (p6).

Regarding instructor feedback, many participants accepted the professor’s comments somewhat passively, comparing them to the voice of a client (p4), inevitable pressure from a grader (p5), or an unquestionable authority (p2, 3). Other more proactive participants accepted what the professor thought was relevant (p6), tried their best to make sense of these comments (p8), and used this feedback to balance their own viewpoint (p10) or choose the best alternative (p11). Those who had a strong stance about the instructor’s feedback tended to have a clear viewpoint of the use of simulation (p6, 10, 11).

Simulation and concept design

Figure 6 shows the results from the task that required both smooth circulation and a rich spatial experience. Participants used their own choice of software (top row) or the ABM tool (bottom row). Each participant had a unique design concept shared across both tools, most likely produced by repeated experiments with the same design brief, with one exception (Fig. 6d). Design concepts included a courtyard as a refreshing natural space during the transit between galleries (Fig. 6a), an indoor landscape consisting of solid and void spaces that becomes an exhibit in itself (Fig. 6b), service spaces (e.g., café, library, lecture halls) located at the museum’s center in order to enhance accessibility and wayfinding (Fig. 6c, d bottom), buffer zones along the passage in and out of a gallery (Fig. 6d top), and the role of a café as the core attractor and courtyards as separators of the flow of visitors (Fig. 6e).

Fig. 6
figure 6

Design outcomes of five participants (p5, 6, 7, 8, 11) using a tool of their choice (top) and the ABM tool (bottom)

When we investigated how exactly the circulation requirement interacted with participants’ formulation of a design concept, the interviews revealed different and mostly conflicting relationships. For some participants (p5, 7), the design concept was certainly related to the circulation requirement, but the simulation results turned out to be of little use. P5 (Fig. 6a) wanted the transitions between galleries to be leisurely walkways similar to those found in nature, but the rush of the simulation’s agents to find the shortest possible route did not mirror this intent at all. P7 (Fig. 6c) intended a staged, smooth infiltration of visitors, which could not be modeled accurately with the ABM tool. For others (p6, 8), whose design effects were predominantly psychological, simulation was not considered at all during the concept design phase but was run only once to check for any critical flaws. These participants used sketches (Fig. 6d) or spatial arrangement (Fig. 6b) as the primary means of design exploration. In one case, however, simulation did work as a stimulant to create a working concept (Fig. 6e). During the search for new ideas, she observed how agents traveled and decided that the café could work as the circulation hub. While the actual implementation of this plan went through several variations, she held on to the initial concept and did not pay attention to minor inefficiencies reported by the simulation.

Figure 7 shows the design outcomes of six participants given only the smooth circulation requirement. We could find more similarities among them than among those in Fig. 6 in terms of the number of distinctive shapes and unique arrangements. This was not very surprising, considering that most participants referred to or even partially replicated the existing museum plan. The motivator behind this trend, gleaned from the interviews, was that the existing plan seemed to be a perfectly legitimate, working solution in the absence of a request for conceptual differentiation. This strategy was more tempting when there was no built-in simulation capability like that in the ABM tool; participants in Fig. 7 could only imagine visitors’ trajectories with their computer mouse. They expressed that the optimization-driven problem was something unfamiliar and foreign, and therefore beyond their scope of expertise, giving them little sense of accomplishment.

Fig. 7
figure 7

Design outcomes of six participants (p1, 2, 3, 4, 9, 10) using a tool of their choice

Discussion

Grouping individual styles

When we looked at the use of simulation in the context of design iteration, there were three main types of design processes. The first type was used by participants who accepted simulation results as they were and used them to solve the given task. Simulations were run iteratively, and the changes made to the solution in between were generally distinct. The second type was used by participants who used simulation more passively, such as only to resolve critical circulation issues in specific areas. There was no significant divergence or bifurcation of the solution based on simulation. Lastly, in the third type of process, participants ran a simulation once only on the completed solution, reducing it to a mere formality. They seemed to question the need or feasibility of using a simulation to complete the task. Overall, simulation usage followed its potential roles: the main driver for the solution, a navigation tool, a necessary evil, or a tool of little use.

The fact that participants with substantial changes used simulation extensively reaffirms that simulation is a facilitator of active exploration and helps avoid fixation (Hong et al., 2016; Shi & Yang, 2013). However, our new finding is that there are exceptions and subtle differences among them. First, iterations were not always derived from the simulation itself. One participant who opposed the simulation most showed explicit iterations (Fig. 6b), and another used a sketchbook to keep multiple concepts active (Fig. 6e). Second, previous tool experience helped form a favorable attitude toward simulation (QGIS for Fig. 7e, Lumion for Fig. 7f), but one participant felt comfortable at first encounter (Fig. 7a), which was externalized as early tinkering with the software in the design process. Lastly, however, the participants agreed on the benefits and limitations of simulation in concept-driven problems. Simulation helped in understanding the problem and could potentially provide objective measures for design decisions, but it was of little use in creating a concept, and its visualization even interfered with concept formulation by showing minor flaws (Fig. 6e).

Simulation versus traditional environment

When comparing the simulation environment against a traditional, instructor-led one, protocol analysis revealed that the availability of simulation facilitated not only more iterations of solutions but also their completeness, by requiring spaces and walls/doors to be defined at the same time. By contrast, students who faced the instructor felt a mental barrier more often than they felt assisted, which was externalized as frequent references to the existing solution.

When we examined the outcomes and individual interviews, the variety of the participants' opinions turned out to be more prominent than the differences between the two environments. For example, the design processes and outcomes in Fig. 6 showed more commonality within each individual than among different individuals within each educational environment. Specifically, many participants viewed the performance requirement as separate from their design concept and found its integration intractable, at times leaving design flaws raised by the simulation untreated. More than a few also deferred simulation runs until the last minute, which made reconciling these problems even more difficult. However, a few students with a favorable view of simulation regarded the task as a test of the fit between the problem and the tool, rather than of their personal skills, and embraced feedback from the simulation and the instructor alike. The single outlier displayed a similar attitude in that he insisted on selectively accepting both sources of feedback and created a solution that satisfied him.

The lesson here is that while simulation had the overall effect of enforcing the iteration of solutions, not only can the details of an individual design process differ, but individuality itself is more influential than the given educational tool. This implies that simulation education should build upon students' varied backgrounds and educational levels, and be preceded by an effort to help students establish their own views of different problems and tools. Additionally, we propose that simulation software implement a real-time but non-intrusive feedback mechanism to enhance the mental and physical accessibility of simulation. At a more fundamental level, it is desirable for a simulation tool to promote solution branching in the early stages of design to achieve true iteration; we identified only a couple of participants who worked on multiple potential solutions in parallel.
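The real-time but non-intrusive feedback mechanism proposed above could take many forms; one minimal sketch (hypothetical, and not part of any tool examined in this study) is a debounced check that runs the simulation only after the designer pauses editing, so feedback is timely without interrupting sketching:

```python
import time

class DebouncedFeedback:
    """Run a (placeholder) simulation only after edits pause,
    so feedback is timely but does not interrupt sketching.
    `simulate` is a hypothetical callable: model -> metrics dict."""

    def __init__(self, simulate, quiet_seconds=2.0):
        self.simulate = simulate
        self.quiet_seconds = quiet_seconds
        self.last_edit = None
        self.pending_model = None

    def on_edit(self, model):
        # Record the edit; defer the simulation until the pause.
        self.last_edit = time.monotonic()
        self.pending_model = model

    def poll(self):
        # Called periodically by the UI loop; returns metrics once
        # the designer has been idle long enough, otherwise None.
        if self.pending_model is None or self.last_edit is None:
            return None
        if time.monotonic() - self.last_edit < self.quiet_seconds:
            return None
        model, self.pending_model = self.pending_model, None
        return self.simulate(model)
```

The design choice here is that feedback never preempts an edit in progress: the UI only surfaces results during idle moments, which is one way to lower the mental barrier observed in the instructor-led setting.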

Simulation and concept design

The analysis of the interaction between design concept and performance requirement revealed two cases in which simulation failed to support concept formation. In the first, the swarm movement of the simulation did not match the relaxed, individually diversified walks that the design concepts postulated; at the sight of a rushing herd, participants quickly lost trust and interest in using the simulation. In the second, the ABM tool simply did not offer the metrics needed for the design goal, such as the psychological or aesthetic effects of a spatial experience. One participant even rejected the idea of using simulations for such tasks by commenting, “it is design that defines behavior, but not the other way around.” In both cases, the simulation was not flexible enough to meet the designers' expectations: although an overall design concept could be transformed into physical reality, the measures for evaluating it did not align with the simulation's performance metrics.
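The mismatch in the first case can be made concrete with a minimal agent sketch (illustrative only; the class and parameter names are our assumptions, not those of the ABM tool used in the study). Sampling locomotion parameters per agent, rather than sharing one profile, is one way to replace the "rushing herd" with individually diversified walks:

```python
import random
from dataclasses import dataclass

@dataclass
class Pedestrian:
    x: float
    y: float
    speed: float       # m/s, varies per agent
    pause_prob: float  # chance of lingering at each step

def spawn(n, seed=0):
    # Instead of a uniform "swarm" profile (every agent identical
    # and hurried), sample relaxed, per-agent parameters.
    rng = random.Random(seed)
    agents = []
    for _ in range(n):
        agents.append(Pedestrian(
            x=0.0, y=rng.uniform(0.0, 10.0),
            speed=rng.gauss(1.2, 0.3),        # varied walking speeds
            pause_prob=rng.uniform(0.0, 0.2)  # some agents linger
        ))
    return agents

def step(agents, rng):
    for a in agents:
        if rng.random() < a.pause_prob:
            continue               # this agent pauses and looks around
        a.x += max(a.speed, 0.3)   # otherwise drift along the corridor
```

Even this toy diversification changes the visual character of the crowd; whether it matches a given design concept remains, as our participants noted, a separate question from the simulation's technical fidelity.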

When the design concept requirement was lifted, the entire design brief fell away rather than the performance requirement becoming the primary goal. Participants justifiably reduced the design task to an optimization problem, but in doing so also released themselves from the obligation to pursue something unique. This devolved into blindly copying the existing solution and losing their true motivation and role as creators.

From these observations, we conclude that it is essential for a simulation tool to support design concept formulation in order to serve as a versatile design platform. In other words, our results suggest that, in addition to efforts to overcome technical hurdles, simulation tools should consider a top–down approach starting from the designer's mental space. For example, whereas advances in simulation have allowed locomotive parameters to reflect individuals' psychological characteristics, further progress may come from studies on predicting longer-term social and psychological effects. Some students were concerned about the greater cognitive load of juggling more parameters, but as one participant demonstrated, users can adapt and pick the best strategy for concept exploration. Ironically, multiple participants mentioned that they usually have no problem handling design concepts and space programs simultaneously, indicating that the integration of simulation into the concept design process may be a matter of content and training. With proper design and education, the availability of simulation software can effectively widen the choices available for concept design.

Our research is limited in that the small number of participants precludes statistical judgement, and in that some participants reused the same design concept across repeated design tasks, which may have affected their concept formation in unexpected ways. Nevertheless, this study provides empirical evidence that, for a simulation tool to become a more integral part of design education, it should offer (1) high-level control of concept-related features, (2) auto-completion from sketchy input, and (3) adaptable support based on the user's skill level. Avenues for future research include how these functions can be designed, implemented, and presented so that students can explore a larger problem space and develop their own perspectives and styles.
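As one possible starting point for that future research, the three requirements can be sketched as a hypothetical tool interface (all names and behaviors here are our illustrative assumptions, not an existing system):

```python
class ConceptAwareAssistant:
    """Hypothetical interface sketching the three requirements:
    (1) high-level control of concept-related features,
    (2) auto-completion from sketchy input,
    (3) adaptable support based on the user's skill level."""

    def __init__(self, skill_level="novice"):
        self.skill_level = skill_level  # (3) tailors feedback depth

    def set_concept(self, atmosphere, pace):
        # (1) concept-level controls instead of raw numeric parameters
        self.atmosphere, self.pace = atmosphere, pace

    def complete(self, partial_layout):
        # (2) auto-complete sketchy input: rooms with no walls yet
        # receive a placeholder enclosure the designer can refine.
        return {room: walls if walls else ["default-wall"]
                for room, walls in partial_layout.items()}

    def feedback(self, metrics):
        # (3) novices get a verbal summary; experts get raw metrics
        if self.skill_level == "novice":
            if metrics.get("flow", 0) > 0.5:
                return "Circulation looks workable."
            return "Check corridor widths."
        return metrics
```

The point of the sketch is the division of labor: the designer states intent at the concept level, the tool fills in the sketchy remainder, and the depth of its feedback adapts to the user rather than the other way around.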