
1 Introduction

Visualisation can be considered as any graphical representation of information and data, from hand-drawn sketches of an idea to immersive and dynamic virtual environments in virtual reality (Lange 2011; Pietsch 2000). As inherently visual disciplines, planning and design have embraced the continuous development of visualisation technology, from the pioneering ‘before and after’ visualisations introduced by Repton (Repton 1980), through the early adoption of 3D modelling technology and early work in augmented and virtual reality (AR and VR) simulations (Rekimoto 1996), to the current cutting-edge mixed-reality technologies (Çöltekin et al. 2020).

Visualisations are central to how researchers and academics in the field communicate on both a qualitative and a quantitative level, through displays such as greenspace designs around physical models, fluid dynamics simulations within flood extent maps, tracing paper sketches, and photographs of on-site experience (Raaphorst et al. 2017). The advent of sophisticated digital tools has begun to reshape the role of visualisation (Portman et al. 2015). In this chapter, we discuss how visualisations are being shaped by the advent of augmented reality, demonstrating the advances made in adaptive visualisations and the creation of more interactive experiences that marry analogue and digital design approaches.

Stakeholder involvement in the planning and design process (in architecture, planning, and landscape architecture, for example) has become increasingly important in recent years. Where the original focus was solely on communication, a more participatory approach is now used, in which visual representations form the primary means of communication between stakeholders, including industry professionals, government, and the public.

Since the early before-and-after visualisations of Humphry Repton (Repton 1980), developments in visualisation technology have been used to improve the communication of the effects of proposed landscape interventions. Augmented reality (AR) has been used since its inception as a method to enrich visualisation and communication techniques. Active stakeholder participation is necessary to facilitate the collaboration required for successful project outcomes. Workshop topics typically include design (Wang 2009), land-use planning (Arciniegas et al. 2013), and risk management (Schroth et al. 2011).

Stakeholder engagement workshops are naturally a multiparty process, in which information is shared in real time with everyone. Any technical enhancement to the workshop process, therefore, needs to support a multi-participant approach to design to be fit-for-purpose. As part of the Adaptive Urban Transformation project, we have developed dynamic visualisations as a method to encourage stakeholder discussion, with the goal of informing design processes.

Maps are considered both a supporting research tool and a communication aid (Carton and Thissen 2009). They provide an established medium for decision-making, offering a spatial context that helps participants explore spatial patterns. Even though they are not always intuitive, maps are often the most used information source during workshops, especially for decision-making (Uran and Janssen 2003). However, as technology develops, new digital methods are being used to improve traditional stakeholder participation.

As Geographic Information Systems (GIS) technology becomes more widely adopted, digital technologies are replacing base maps and tracing paper with map layers presented as GIS visualisations on a computer screen. Unfortunately, this approach fails to capture the multiuser nature of its analogue counterpart. An evolution of this approach, large horizontal touch-sensitive screens known as touch-tables, has become commonplace as an intermediary between hard-copy base maps and desktop-based GIS visualisations (Arciniegas and Janssen 2012).

In a survey of workshop participants, 70% said they would prefer a touch-table system over printed maps; 80% of respondents also suggested that the touch-table is an important addition despite its inability to be combined with the traditional maps and sketching approach (Arciniegas and Janssen 2012).

These results suggest that easy access to computational tools and digitisation can be useful because stakeholders can easily choose, combine, and consult maps of different types.

In contrast, purely digital models are an adaptable medium for illustrating landscape interventions. The ability to combine the spatial features of traditional scale models with dynamic features, such as progressive developments and simulation results, can add vital context to static visualisations (Lange 2011). In this project, we investigated the role that mixed reality can play in bridging the analogue–digital divide, using augmented layers of contextual information (Ghadirian and Bishop 2008).

Adopting either the digital touch-table approach or the hard-copy mapping approach to regional design problems binds the workflow to solely digital or solely analogue data, respectively. This concrete divide between analogue and digital workflows presents barriers to adoption and inclusion, as it requires participants to be competent in each technology in order to be fully engaged in the workflow.

Physical models are often used to visualise proposed interventions, due to their more intuitive presentation of the spatial nature of the design intervention, without relying on the intermediate symbolic representations found in 2D media (Duzenli et al. 2017). Nonetheless, physical models carry high financial and time costs to create, and they lack adaptability and utility as a project evolves. On top of these disadvantages, a physical model has no interaction with digital media, perpetuating the same analogue–digital divide that affects 2D representations.

Mobile augmented reality systems have been developed using custom hardware over an extended period. An early example is the TransVision collaborative AR system (Rekimoto 1996), through which users can view and move virtual objects on a mobile screen. Further developments in mobile collaborative AR (Reitmayr and Schmalstieg 2001) and in workshop-oriented systems (Butz et al. 2002) established the foundations of AR in workshop settings: through transferable object ownership, multiple people can edit virtual objects together.

Due to technological advances, mixed-reality devices that support the design process are becoming more affordable. AR is being developed to better inform stakeholders about design issues and interventions in both on-site and off-site sessions (Portman et al. 2015; Wang 2009). Recent studies have used a mobile-based application to test augmented reality technology during public participation (Goudarznia et al. 2017), finding that, as part of an on-site presentation, participants regard augmented reality as a useful tool for learning more about proposed interventions.

Shelton (2003) shows that augmented reality can help people learn new things about the world around them. Soria and Roth (2018) show that using augmented reality to engage our innate spatial cues through locomotion can improve a participant’s spatial cognition when they are asked to recall the details of a proposed landscape intervention in the real world.

To support collaboration and stakeholder participation throughout the design process, a growing body of work has sought to use digital augmentation to expand the utility of physical models (Piga and Petri 2017). Model augmentation increases the utility and flexibility of a model by opening up new avenues for design, evaluation, and communication (Ishii et al. 2002; Walz et al. 2008). Realistic occlusion remains a barrier to the effectiveness of on-site visualisation.

Realistically embedding digital designs into a detailed physical model is a complicated process. With complex geometries, mobile AR and environmental tracking suffer from the problem of physical occlusion due to a lack of fine-grained depth information (Kruijff et al. 2010; Wloka and Anderson 1995). Where occlusion fails, a physical object cannot appear in front of, and thus occlude, a digital object, regardless of the spatial orientation of the scenes, because digital augmentations are necessarily layered on top of the camera feed to create the final visualisation.

Realistic occlusion for complex geometry is a challenging task in AR application development. It requires detailed modelling and laborious spatial calibration on-site ahead of time to achieve acceptable results. This leads to either site-specific single-use applications without environmental occlusion (Goudarznia et al. 2017; Soria and Roth 2018), a cumbersome set-up process (Haynes et al. 2018), or generic applications with no interaction with the surrounding environment (Tomkins and Lange 2019a, 2019b).

Haynes et al. (2018) have demonstrated an innovative, yet time-consuming, manual approach to roughly mapping the spatial profile of an area using primitive shapes to create simple occlusion geometries. While this approach is feasible for small areas, its application would be problematic for both larger natural environments and smaller complex geometries such as physical models. These limitations pose a problem in harnessing the immersive power of augmented reality visualisation to effect change in large-scale projects.

Despite the issues with accurate occlusion, in previous studies participants have reported feeling comfortable with using augmented reality as a tool to explore future interventions (Goudarznia et al. 2017). This lays the groundwork for further developments in the AR approach to enriching stakeholder participation.

2 Materials and Methods

To directly utilise the application within an ongoing workshop format, we created a tablet-based AR application that interfaces with the traditional paper media used in the workshop setting (Carrera et al. 2017, 2018). We chose this format because AR layers information over reality, ensuring that current workflows can be maintained without modification. Pen-and-paper sketches can be augmented instead of replaced, maps can be dynamically enhanced, and physical models can be transformed digitally, without altering the underlying presentation. This is in contrast to other digital augmentations such as virtual reality (Song and Huang 2018) and touch-tables (Arciniegas et al. 2013), which require the transformation of the underlying representations from analogue media to digital data.

Each application covered in this chapter has been created in the Unity Game Engine. This provides application programming interfaces (APIs) for all current mixed-reality toolkits such as Vuforia, Google ARCore, Apple ARKit, and Microsoft HoloLens for augmented reality, as well as SteamVR and Google Cardboard, supporting all major virtual reality headsets and augmented reality devices.

The Unity Game Engine has a strong presence in the planning and design literature, with applications in public participation (Goudarznia et al. 2017), spatial cognition (Soria and Roth 2018), and future scenario visualisation (Haynes et al. 2018; Tomkins and Lange 2019b).

The Unity Game Engine provides several approaches to tracking. For our applications, we use fiducial markers and 3D model tracking for the tablet-based AR applications, and full environmental tracking and mapping for the HoloLens applications.

The cartographic AR application uses visual anchor tracking, supported by the Vuforia package in the Unity Game Engine. Multipoint tracking enables users to view the base maps from a wide variety of angles, and to focus close-up on specific areas of a larger map, without losing tracking quality. It also allows tracking to be maintained under the partial occlusion that occurs when tracing paper overlays markers, or when participants crowd around the base maps during the design process.
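
To make this concrete, the following minimal sketch shows how an application might react to the tracking status of a base-map target, keeping augmentations visible under extended tracking so that brief occlusions do not hide them. It assumes the Vuforia 10 Unity API; the class and field names are illustrative, not taken from our application.

```csharp
using UnityEngine;
using Vuforia; // assumes the Vuforia Engine package (v10+ API)

// Sketch: show or hide a set of augmentations depending on whether
// the base-map target is currently tracked.
public class BaseMapTrackingHandler : MonoBehaviour
{
    [SerializeField] private ObserverBehaviour baseMapTarget; // target on the printed base map
    [SerializeField] private GameObject augmentationRoot;     // parent of all digital layers

    private void OnEnable()  => baseMapTarget.OnTargetStatusChanged += HandleStatusChanged;
    private void OnDisable() => baseMapTarget.OnTargetStatusChanged -= HandleStatusChanged;

    private void HandleStatusChanged(ObserverBehaviour behaviour, TargetStatus status)
    {
        // Keep augmentations visible under extended tracking, so partial
        // occlusion by tracing paper or participants does not hide them.
        bool visible = status.Status == Status.TRACKED ||
                       status.Status == Status.EXTENDED_TRACKED;
        augmentationRoot.SetActive(visible);
    }
}
```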

Crucially, Unity provides full access to the visualisation engine, allowing custom material shaders, a vital area of flexibility required to solve the occlusion problem in Tomkins and Lange (2020). Finally, to enable user interactions within our software, the Unity Game Engine supports an array of input modalities, including touch and gesture inputs for mobile devices. For our HoloLens approach, we use both gesture recognition and speech recognition to enable smooth user interactions.
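
As an illustration of the speech modality, the sketch below shows one way a voice command could toggle a visualisation layer under MRTK v2. The keywords and layer name are invented for this example (keywords would also need to be registered in the MRTK speech profile); it is a sketch of the pattern, not our application's actual command set.

```csharp
using UnityEngine;
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Input;

// Sketch, assuming MRTK v2: toggle a visualisation layer with a voice
// command, complementing the gesture-based interactions.
public class VoiceLayerToggle : MonoBehaviour, IMixedRealitySpeechHandler
{
    [SerializeField] private GameObject floodLayer; // hypothetical layer object

    private void OnEnable() =>
        CoreServices.InputSystem?.RegisterHandler<IMixedRealitySpeechHandler>(this);

    private void OnDisable() =>
        CoreServices.InputSystem?.UnregisterHandler<IMixedRealitySpeechHandler>(this);

    public void OnSpeechKeywordRecognized(SpeechEventData eventData)
    {
        // Illustrative keywords; they must match the MRTK speech profile.
        switch (eventData.Command.Keyword.ToLower())
        {
            case "show flood": floodLayer.SetActive(true);  break;
            case "hide flood": floodLayer.SetActive(false); break;
        }
    }
}
```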

For our visualisation software, we use off-the-shelf hardware, available to any research laboratory, which is well supported by the current game development engines for custom software development. For our AR applications, we use both a hand-held tablet device and a Microsoft HoloLens 2 headset. Each device has a different set of strengths and weaknesses, which render them suited to different stakeholder participation tasks, for example, workshops and guided tours.

For workshop-oriented tasks, we utilise hand-held augmented reality on a tablet device. Tablets come with a high degree of familiarity, creating a quicker on-boarding process in a multiuser workshop setting. Each tablet has a rear-facing high-definition camera with which to capture the environment and detect tracking markers. While trackerless tablet-based AR is possible (Tomkins and Lange 2019a, 2019b), it is less suited to a workshop environment, as it encourages non-cooperative interactions by being untethered to any particular space.

Fiducial marker tracking overlays the digital augmentation over a fixed, highly salient image in the environment, such as a QR code. In a workshop setting, this has the advantage of providing a single, central location for multiple users to focus upon. Each user perceives the same digital model in the same physical place. While less suitable for workshops, environmental tracking enables the AR explorations to take place simultaneously in remote locations or in the field. Full environment tracking is available with the Microsoft HoloLens and is used to craft more immersive and dynamic experiences on site.

The HoloLens 2 hardware is extremely capable at mapping environments in real time and has the necessary computing power to perform on-the-fly, medium-scale spatial mapping. On the other hand, unlike tablet-based AR, its environmental requirements, such as the need to avoid bright sunlight, pose issues in creating an effective and widely applicable experience with the HoloLens. Using these devices, we can bring augmentations both to on-site visits and to a large array of traditional analogue media used in the workshop setting.

Throughout the Adaptive Urban Transformation project meetings, we have harnessed a range of visualisation technologies, spanning analogue and digital tools, including both traditional media and cutting-edge digital technologies. Figure 9.1 shows the range of media used in a typical workshop setting. Traditional analogue media included maps, tracing paper sketches and physical models. The digital media used included GIS data sets, 3D models, simulations, and digital touch-tables. These visualisation materials formed the basis of the adaptive visualisation tools developed throughout the project, including both augmented reality and virtual reality experiences, described below.

Fig. 9.1

Examples of visualisation media used in the adaptive urban transformation project. Photos Adam Tomkins and Eckart Lange

3 Results

In this section, we detail the three major approaches to bridging the analogue–digital divide that we have developed throughout the project, focusing on the adaptive augmented reality applications developed for both hand-held mobile devices and the Microsoft HoloLens headset. We present novel approaches to digitally enriching the three primary analogue media used for landscape communication: maps, models, and the site visit. Moving away from traditional augmentations, we ensure that the digital additions are driven by their analogue counterparts, creating a tight cycle of analogue–digital interactivity usually reserved for the analogue domain.

3.1 Enriching Maps

In this section, we introduce our tablet-based map augmentation application. The core aim of the application was to integrate the various data and visualisations created throughout the Adaptive Urban Transformation project and to facilitate the next stage of designs building upon a variety of data sources, including base maps, GIS data, hand-drawn sketches, and publication data.

In stakeholder participation workshops, digital and hard-copy maps, alongside other representation formats in 2D and 3D, are used extensively to support communication, spatial evaluation, and interactive decision-making processes. Stakeholders use 2D base maps and static imagery to collectively assess, compare, and rank competing proposals (Arciniegas et al. 2013). In this section, we present a novel tool to enhance traditional map-based workshop activities using augmented reality.

Analogue visualisations, such as maps, are natively enriched through tracing paper sketches and annotations. This activity plays a central role in the large-scale regional planning approach through the research-by-design paradigm (Nijhuis and Bobbink 2012). A major drawback is that these sketches, such as those on tracing paper overlays, cannot easily be reused throughout the project, as they are usually ephemeral creations to aid communication, tied to a specific underlying base map. Lacking any spatial registration, they cannot be combined with GIS sources other than as a base map overlay. This lack of flexibility leaves sketches unable to interact with more complex contextual information, such as flood simulations, limiting their utility as a comparative design tool.

Here we can enrich the basic mapping paradigm to simultaneously allow both analogue enrichment through sketches and digital enrichment through augmentation. The digital realm lends itself much more readily to the continuous evolution of the design process while providing new levels of context to the analogue enrichments, through changing base maps, dynamic data visualisation, and even 3D model integration. Figure 9.2 shows how traditional sketches can be integrated with digital layers, to provide design context, and enable a more grounded discussion of site concerns.

Fig. 9.2

Views of the cartographic AR application: a the base map and tracing paper; b augmented 3D buildings onto the designs; c a participant annotating the map during augmentation; d overlaying 3D models and 2D designs onto the base map; e overlaying GIS data; f overlaying an interactive flood simulation. Photos Adam Tomkins and Eckart Lange

The application is designed around the familiar base maps and layers used in traditional paper mapping, combining established formats, such as GIS layers, with more general image formats for layer flexibility. We use an array of real-world base maps as anchors and introduce the ability to digitally switch the base map as desired, so that AR participants can change the base representation to support different aims at the same time. This separation of analogue and digital allows new comparisons to be made between different data sets simultaneously, such as the sketches and digital models shown in Fig. 9.2b. We use GIS base maps as our standard maps and allow users to add additional data on top of them. The concept of layers has been generalised in such a way as to allow 3D digital models and dynamic simulations to coexist within a single layer.
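
A minimal sketch of this generalised layer concept is shown below: each layer, whether a raster image, a vector data set, a 3D model, or a simulation, is treated as a toggleable object, and the digital base map can be swapped independently of the data layers. All names are illustrative.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the generalised layer concept: each layer is a GameObject that
// may hold raster imagery, vector data, a 3D model, or a simulation.
public class LayerManager : MonoBehaviour
{
    [SerializeField] private List<GameObject> baseMaps;   // digital base maps
    [SerializeField] private List<GameObject> dataLayers; // GIS, models, simulations

    // Swap the digital base map while the physical anchor map stays in place.
    public void SetBaseMap(GameObject newBaseMap)
    {
        foreach (var map in baseMaps) map.SetActive(map == newBaseMap);
    }

    // Toggle any data layer independently, so heterogeneous layers
    // (sketch overlays, 3D models, flood simulation) can coexist in one view.
    public void ToggleLayer(GameObject layer) => layer.SetActive(!layer.activeSelf);
}
```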

Utilising AR for map enrichment ensures that the fundamental experience of design remains the same, allowing for an inclusive experience that can take place regardless of the technology readiness level among participants. Our application is solely used to offer targeted enhancements to the design process when required. Expanding the visualisation from 2D to 3D enables us to repurpose 2D data, for example, turning height maps into an interactive flood map (Fig. 9.2f), following the method described in Tomkins and Lange (2019a, 2019b). This transformation into an interactive flood simulation makes the data more interpretable, as seen in Fig. 9.2f. From this interactive layer, users can specify a desired flood scenario as the base scenario for a design intervention. This method also provides a metric to evaluate potential designs in a way that could not be done with a static visualisation.
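
The sketch below illustrates the underlying idea of repurposing a 2D height map as an interactive flood layer: cells whose elevation falls below a user-set water level are tinted as flooded. It is a simplified illustration assuming a greyscale elevation raster; the names, scaling, and colours are ours, and the published method (Tomkins and Lange 2019a, 2019b) is more involved.

```csharp
using UnityEngine;

// Sketch: derive a flood-extent overlay from a greyscale height map.
public class FloodLayer : MonoBehaviour
{
    [SerializeField] private Texture2D heightMap;      // elevation raster; must be Read/Write enabled
    [SerializeField] private float maxElevation = 50f; // metres at full white (assumed scaling)
    [Range(0f, 50f)] public float waterLevel = 5f;     // driven by a UI slider

    public Texture2D BuildFloodMask()
    {
        var mask = new Texture2D(heightMap.width, heightMap.height);
        for (int y = 0; y < heightMap.height; y++)
        for (int x = 0; x < heightMap.width; x++)
        {
            float elevation = heightMap.GetPixel(x, y).grayscale * maxElevation;
            // Below the chosen water level: flooded (semi-transparent blue).
            mask.SetPixel(x, y, elevation < waterLevel
                ? new Color(0f, 0.3f, 1f, 0.6f)
                : Color.clear);
        }
        mask.Apply();
        return mask; // overlay this mask as a dynamic layer on the base map
    }
}
```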

As the digital base layers emulate and expand upon the large-format printed base map toolset, data layers aim to emulate the configurable digital layered approach of spatial GIS tools. Static data layers include raster data such as population density, historical flood risk, surface permeability, and vector data such as land use and waterways (Figs. 9.2d, e). Each of these can be used as a base map layer or visualised seamlessly on top of the physical base map. The dynamic layer brings real-time interactivity to the static data layers and the underlying physical media. These dynamic elements can be used to intuitively explore the dynamic nature of data sets, such as flood extent, with configurable water levels as seen in Fig. 9.2f.

The pen-and-paper design approach is limited to the 2D plane. Our application combines 2D and 3D data, as shown in Fig. 9.2b, by anchoring 3D models onto the 2D base map as static 3D model layers, built from sources such as SketchUp. For our site, we had two large-scale urban models, representing three years of the urban planning and design process. Anchoring geographically contextualises the 3D model; as a layer, it can then interact with other layers, such as the sketches and digital plans shown in Figs. 9.2a and d.

Here, we have shown how we can enrich maps to include dynamic data switching and interactive simulations, and to integrate 3D models, bridging the gap between the traditional design process and the data-rich digital world. These enrichments and integrations enable new contextually informed design methods. While the tactile nature of hard-copy maps will not be supplanted by innovative technology, it can be complemented with new technological capabilities.

3.2 Enriching Physical Models

Alongside maps, physical models play a key role in the communication of design interventions. However, as models are slow to create and hard to change, their fixed nature limits their utility as both a discursive and a design tool. In this section, we describe our results in using mobile augmented reality to overlay 3D digital data onto 3D printed models, to enrich urban designs, as first published in Tomkins and Lange (2020).

Dynamically altering the appearance of 3D models allows us to compare competing designs in situ, replace outdated physical characteristics as the design progresses, and contextualise urban models within a larger urban context. We accomplish this using a digital twin and a novel accurate physical occlusion model. The aim of this application is to encourage design exploration in 3D space, enabling costly physical models to become evolving design tools.

We use mobile tablet-based AR to ensure a cost-effective multiuser set-up. Our digital model is designed in SketchUp, which provides the basis for both our physical model and the digital urban designs used for augmentation. We present results using 3D printed structures, produced on an Ender 3 printer. Previously, we have detailed the process of creating physically occluded augmented reality visualisations to enhance currently employed physical models, opening up new possibilities in dynamic augmented reality visualisation for landscape architecture and urban design. We have shown that with a digital twin of a physical model, such as when both are built from the same SketchUp model, it is possible to augment the physical model with additional 3D features using a novel 3D cut-out occlusion method (Tomkins and Lange 2020).

Using 3D model recognition and tracking, a digital twin model, and a custom occlusion shader pipeline, we can achieve fine-grained physical occlusion on portable models. First, the digital model is aligned to the physical model via model tracking. Next, the desired 3D augmentations are aligned over the top of the model, occluding the physical model. Finally, using the digital twin occlusion shader, we remove any parts of the 3D augmentations that collide with the physical model. With this happening in each frame, the user can explore an augmentation from any angle to see what lies in front of, beneath, and behind the model in question. Physically occluded visualisations open up new approaches to model-based visualisation and communication tasks, adding a layer of adaptability and interactivity that is impossible to create with stand-alone models. Here we describe a selection of applications of this process, which can be adapted to many new design and communication tasks.
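
The sketch below conveys the core of the digital-twin occlusion idea in Unity terms: the invisible twin writes only to the depth buffer before the augmentations are drawn, so augmentations fail the depth test wherever the physical model is closer to the camera. This is a simplified stand-in for the published shader pipeline (Tomkins and Lange 2020); the "Custom/DepthMask" shader asset (which would render with ColorMask 0, depth only) is assumed rather than shown.

```csharp
using UnityEngine;

// Sketch: configure the tracked digital twin as a depth-only occluder.
public class DigitalTwinOccluder : MonoBehaviour
{
    private void Start()
    {
        // The digital twin mesh is aligned to the physical model by tracking.
        var twinRenderer = GetComponent<Renderer>();

        // Assumed shader asset: writes depth only (ColorMask 0), no colour.
        var depthOnly = new Material(Shader.Find("Custom/DepthMask"));

        // Render just before the opaque Geometry queue (2000), so the twin's
        // depth is in the buffer before any augmentation is drawn.
        depthOnly.renderQueue = 1999;
        twinRenderer.material = depthOnly;

        // Result: augmentations drawn afterwards fail the depth test wherever
        // the physical model (via its invisible twin) is closer to the camera,
        // so real geometry appears to occlude the digital additions.
    }
}
```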

We take as a case study the evolving physical models of the Pazhou Island development area, as shown in Fig. 9.3a. A physical model represents a single point in a project’s development history. Over the course of a project, the level of detail and the purpose of the proposed design change to reflect increased knowledge, an evolved vision, or more in-depth planning. Our Pazhou Island case study included two important model design stages: the preliminary master plan and the approved master plan, which changed as the result of a three-year design process.

Fig. 9.3

Enriching physical models with augmented reality, from top left: a base 3D printed building; b augmenting the model with a greenspace design and an updated building façade; c embedding the physical model in a larger urban context. Photos Adam Tomkins and Eckart Lange

The novel physical occlusion allows fine-grained augmentations, such as dynamic greenspace designs below and behind structures, to be visualised correctly from all angles, as seen in Fig. 9.3b. The result of including physical occlusion in mobile AR is that complex dynamic visualisations can be created and embedded organically within a physical space. Example enrichments include land-use allocations and physical model design changes. With this new occlusion method, our digital augmentations can directly interact with a physical model that encompasses complex shapes like raised walkways, overpasses, and bridges.

The preliminary master plan contains a low-complexity urban design for illustrative purposes, conveying the proposed scale of the development area, the architectural height constraints, the required building density, and the desired green space ratio. In contrast, the approved master plan contains a comprehensive urban design consisting of complex architect-designed buildings and the surrounding green and blue infrastructure, reflecting the long-term characteristics of the urban environment. Figure 9.3b shows how an outdated building in the physical model of the master plan can be digitally altered in real time to an updated façade design.

This research introduced a novel AR application to convert static physical models into dynamic discursive tools, adding new use-cases for physical models in design and communication roles. Figure 9.3c shows how the limits on the physical size of a 3D printed model can be addressed by embedding the physical model in a larger digital context of the surrounding area. This expands the use-case of physical models in both information content and portability, while retaining the benefits for understanding given by a 3D model (Duzenli et al. 2017).

Here, we have shown how a novel physical model enrichment process can offset the significant upfront cost and time spent in creating these models, bringing dynamism and continuous evolution to an otherwise static medium. With this enrichment, a physical model could serve as a base design in which to embed progressive design changes and visualise and compare competing designs as well as analyse possible future interventions. Small-scale design changes can be located within a larger digital urban context. While physical models provide an intuitive understanding of future changes in a workshop setting, true site visits provide an unparalleled understanding of the topography and the larger context of a proposed intervention site, which cannot be captured by models alone.

3.3 Enriching Reality

Site visits provide a new testing ground for AR augmentations. For large-scale projects, site visits with stakeholders are key to success. In this project, we sought to enrich the site in real time, using the HoloLens, to enable visitors to experience a potential future scenario on the very site in question.

We built our application using the Mixed Reality Toolkit (MRTK) spatial mapping library to build and interact with a continuously maintained spatial mesh of the world around the user. The dynamic mapping process inherent in the HoloLens platform ensures that no prior knowledge of the topography, nor prior modelling of the intervention site, is required. Pre-built topographical models are one solution to the occlusion problem, but they are less fine-grained, and in practice much larger geometries, such as the immediate environment, are often required. The HoloLens SDK builds the local geometry as the user explores, creating a new form of exploration-driven visualisation, which we combine with procedural generation to create unique and flexible visualisations.
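
For orientation, the sketch below shows one way the continuously maintained spatial mesh can be queried under MRTK v2, so that augmentations can be placed on, and occluded by, the mapped terrain. The API usage follows the documented MRTK v2 pattern as we understand it; treat it as an assumption-laden sketch rather than production code.

```csharp
using System.Collections.Generic;
using UnityEngine;
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.SpatialAwareness;

// Sketch, assuming MRTK v2: collect the mesh chunks built as the user
// walks the site, for use in placement and occlusion.
public class SpatialMeshQuery : MonoBehaviour
{
    public IReadOnlyList<MeshFilter> GetSpatialMeshes()
    {
        var meshes = new List<MeshFilter>();
        var access = CoreServices.SpatialAwarenessSystem as IMixedRealityDataProviderAccess;
        if (access == null) return meshes;

        // Each mesh observer maintains chunks of the spatial map.
        foreach (var observer in
                 access.GetDataProviders<IMixedRealitySpatialAwarenessMeshObserver>())
        {
            foreach (var meshObject in observer.Meshes.Values)
                meshes.Add(meshObject.Filter);
        }
        return meshes;
    }
}
```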

In Where the Wild Things Will Be (Tomkins and Lange 2021), we aimed to create a visualisation engine that is driven by the environment and can easily be reconfigured to support different visualisation tasks. This is achieved through parameterisable procedural generation of visualisations based on project-specific 3D models, combined with rules for their spatial composition. To combat the drawback of single-use AR applications, the adaptive visualisation engine is model-independent, allowing experiences to be created by specifying different models, spatial distributions, and temporal stages. For example, this could be used to visualise the effect of progressive logging regimes, seasonal changes, or wildfire rehabilitation. This flexibility is designed to spatially contextualise the predictions of theoretical models, real-world measurements, or tailored scenarios (Kuuluvainen 2016), allowing users to visually experience representations of various scenarios in situ, a key benefit when communicating complex and widespread interventions (Ceausu et al. 2019).
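
A minimal sketch of such parameterisable procedural placement follows: species prefabs are scattered over the spatially mapped terrain by raycasting downward onto the spatial mesh layer, with per-species density and size parameters standing in for the engine's spatial-composition rules. All field names and values are illustrative.

```csharp
using UnityEngine;

// Sketch: scatter vegetation prefabs onto the spatial mesh around the user.
public class ProceduralScatter : MonoBehaviour
{
    [System.Serializable]
    public class SpeciesRule
    {
        public GameObject prefab;      // e.g. a birch or pine model
        public int count = 40;         // density parameter per temporal stage
        public Vector2 scaleRange = new Vector2(0.8f, 1.4f); // growth-stage sizes
    }

    [SerializeField] private SpeciesRule[] rules;
    [SerializeField] private float radius = 15f;          // area around the user
    [SerializeField] private LayerMask spatialMeshLayer;  // layer of the spatial mesh

    public void Generate()
    {
        foreach (var rule in rules)
        {
            for (int i = 0; i < rule.count; i++)
            {
                // Pick a random point above the site and drop it onto the mesh.
                Vector2 p = Random.insideUnitCircle * radius;
                var origin = transform.position + new Vector3(p.x, 10f, p.y);
                if (Physics.Raycast(origin, Vector3.down, out RaycastHit hit,
                                    30f, spatialMeshLayer))
                {
                    var plant = Instantiate(rule.prefab, hit.point,
                        Quaternion.Euler(0f, Random.Range(0f, 360f), 0f));
                    plant.transform.localScale *=
                        Random.Range(rule.scaleRange.x, rule.scaleRange.y);
                }
            }
        }
    }
}
```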

For our case study, we chose to simulate the natural rewilding process for an urban park (Fig. 9.4a). We selected this as an inherently difficult intervention to visualise and communicate, due to its site-specific and stochastic outcomes. Rewilding efforts are stymied by the difficulty of communicating long-term, unguided processes, and with such a large array of potential sites, traditional mapping and calibration are not possible. In contrast, we describe an adaptive visualisation framework that emphasises the roles of user interaction, procedural generation, and topographical mapping. Multiple configurations are shown in Fig. 9.4, including a birch-dominated configuration (Fig. 9.4d) and a pine-dominated configuration (Fig. 9.4e).

Fig. 9.4

Enriching reality with procedural models: a the urban park site; b the HoloLens dynamic mapping process; c the resulting spatial geometry; d a potential late-stage visualisation of a birch-dominated regrowth; e an alternative pine-dominated regrowth configuration; f a close-up view of the augmented models. Photos Adam Tomkins and Eckart Lange

Site-specific outcomes are crucial to garnering support for rewilding efforts (Ceausu et al. 2019), which severely limits the role of single-use applications across multi-site projects. To adapt AR to this role in large-scale projects, we built an approach that can be applied to environments unknown in advance. In Where the Wild Things Will Be (Tomkins and Lange 2021), we show that AR headsets can now be used to develop applications that work with any local topography, without prior knowledge, across multiple sites, and without the need to explicitly build local geometries in advance (cf. Haynes et al. 2018). A visualisation of the dynamic mapping process and the resulting maps can be found in Figs. 9.4b and c. Mapping was generated in real time, with a short walk around the urban park.

Figure 9.4d shows a possible pioneer stage, in conjunction with a small area of exploration. We see the addition of several larger birch trees, with smaller shrubs and newly established maple saplings interspersed. Areas near the boundary of exploration can leave an abrupt gap in the augmentations. In contrast, Fig. 9.4f shows a close-up of the visual effect of walking through the dense shrubbery, including the level of detail available on close inspection. Unfortunately, we found that the visual efficacy of the HoloLens is strongly affected by direct sunlight, and as such these pictures were taken in the late afternoon. The choice of 3D models greatly affects the visualisations; less realistic models without transparency are captured better, as seen with the low-definition pine models in Fig. 9.4e.

The adaptive visualisation approach is applicable to both research into the participation processes themselves and more generally as a tool for existing public participation processes. This application could be further used in more exploratory methods such as simulated walks through proposed parks, or visual impact assessment in a 360-degree panorama. We have demonstrated a headset-based experience that is both embedded in the natural world and applicable to a large range of sites, from small urban green spaces to open abandoned farmlands, without prior topographical knowledge. This opens a new avenue to landscape visualisation that is complementary to well-established hand-held AR use-cases.

4 Discussion

Throughout the Adaptive Urban Transformation project (AUT), we have sought to understand and enhance the role of visualisation within the planning and design process, with an emphasis on stakeholder participation. We focus on the case study of Pazhou Island in Guangzhou, China. To further this aim, we have developed a range of novel approaches to interactive visualisations focused on two core use-cases: workshops and site visits. We sought to provide new ways to support collaborative decision-making in stakeholder participation exercises, by enabling novel methods for interactive planning and design generation.

To achieve this aim, we have developed two distinct AR applications: one tablet AR application, as discussed in Sects. 3.1 and 3.2, and one headset application, as discussed in Sect. 3.3. We have demonstrated that cutting-edge technologies can create new use-cases for traditional visualisation forms without requiring any restructuring of current workshop practices. Integration and inclusivity form the central guiding principles of augmenting traditional workflows. We ensured that the traditional approach remained prominent and that a wholesale change of participation techniques would not be required. As such, our applications can be seen purely as an enrichment of the current workshop and on-site approaches.

Section 3.1 describes a single application that can combine 3D and 2D data from across the analogue–digital spectrum, enabling real-time rudimentary data synthesis through interactive exploration. During the AUT project meetings, we engaged the participants by drawing from both static data and dynamic simulations to generate a dialogue around hydrodynamic interventions. The application successfully combined hand-drawn maps, 3D models, and GIS data during a live presentation. This experience demonstrates that, with a sufficiently dynamic framework, the latest AR technology can augment our collaborative workflows without replacing existing methodologies, allowing the applications to be used by participants who had no prior application training.

Section 3.2 examines how we can improve the current approach to 3D model enrichment. Ideally, given a large-scale physical urban model, discrete design changes could be visualised in situ as the need arises. For example, when a new intervention is proposed (building plans, the design of green infrastructure, sub-surface features, land-use designation changes, or the insertion of whole city blocks), visualisations could be accessed by stakeholders without having to create new models. While this would be an implausible task with a standard physical model, we have demonstrated that with a 3D printed model and its digital twin, we can easily transform a static model into a dynamic display piece. As part of the AUT project, we combined the master plan of a new building process with a variety of greenspace changes and adapted infrastructure projects from the literature into a situated example of the potential impact of the proposed design changes. This ability to use a standard model as a testbed for design interventions could improve our design approaches through the increased spatial understanding afforded by spatially situated visualisations.

We considered how the adoption of this framework could enhance the role of visualisations in stakeholder participation practice, with respect to Arnstein’s ladder of citizen participation (Arnstein 1969). We argue that, as a more immersive medium, mixed reality can enable a more reciprocal participation process in design consultations. For participants, mixed reality can serve as a tool to enhance spatial understanding and provide context to proposed interventions, allowing a more informed two-way communication process.

The AUT planning workshops took place over multiple sessions, with each session focusing on a specific activity, such as communication, analysis, or design. For an effective workshop process, the output of one workshop influenced the materials presented at the next, creating a workshop workflow. The first sessions ensured that all participants were familiar with the site to be studied. Next, participants analysed the site to better understand its context and to define an intervention scenario. Finally, collaborative design workshops sought to use the distributed expertise of stakeholders to converge upon a variety of design solutions addressing the previously identified concerns.

During the AUT project workshop, enriching the design process with augmentation encouraged the design choices taken in the exploratory session to be grounded in the whole project context. By screen-sharing the AR application to a projected display, the base map’s digital changes were visible to the entire audience as they were being developed live. This convenient way to cross-reference previous designs helped work progress within multiple project constraints and helped stakeholders incorporate prior knowledge. In addition, changing static base data into dynamic heat maps gave a new use-case for our underlying data, such as the flood visualisation, that would not have been possible with the static data alone.

In one workshop, as part of the final presentation of different solutions, we were able to discuss where to place flood barriers while simulating a progressive flood event alongside an interactive data visualisation. We asked the designer the following question: ‘Do your flood plans address the areas of most concern?’ Using the intersection of overlaid digital data and hand-drawn designs to address the question, the application guided the design and discussion. Finally, using layer switching, we were able to change the project target to high-population, low-lying areas without any prior planning or preparation. Overall, we found that the ability to use these visualisations gave us a dynamic perspective on design processes.

Section 3.3 described the applicability of untethered headset-based AR for displaying, exploring, and interacting with changes in the immediate environment. In response to identified workflow issues, we have proposed a new adaptive visualisation workflow. This enables us to formulate the visualisation process in terms of the continuous interactions between the user and the environment, using a procedural visualisation engine. This approach avoids many site-specific requirements, such as advance mapping and calibration, while remaining generic enough to configure the engine to produce a large range of desired scenarios.

We find this novel approach promising, especially for illustrating general principles guided by local topography. This contrasts with small-scale AR applications that use hand-held tablet devices. While this is a less prevalent use-case, we have shown that there is scope for innovation in both workshop environments and on-site visits.

Secondary to the development of new methods was the evaluation of the technical readiness of the latest AR hardware with respect to the demands of stakeholder participation events.

We found that the HoloLens creates an effective medium for free exploration of augmented natural landscapes. In its current iteration, however, the hardware does present some significant limitations. Primarily, effective use of the hardware is hampered by bright sunny days, as the over-saturation of natural light limits the ability to see visualisations clearly. This is not an issue with hand-held AR devices, and as such an effective communication strategy may still require both hand-held and headset AR until the hardware improves.

In general, we find that AR visualisation techniques can add considerable power to the traditional visualisation media used in the planning and design disciplines. Care must be taken to use the most appropriate form of augmented reality, to ensure that participants do not feel that the technology is a hindrance. A focus on integrating with existing workflows, instead of replacing them, has provided a fluid workshop experience, allowing for an array of novel approaches to the contextualisation and iteration of design challenges.

5 Conclusion

Augmented reality has been used as a tool to enhance communication and education since its inception, with a focus on hand-held AR devices in recent years. In this chapter, we have discussed the contrasting roles of hand-held and headset-based AR devices in enriching the communication capacity of visualisations such as maps, models, and on-site experiences.

This iteration of mobile AR visualisation techniques for landscape architecture addresses a variety of drawbacks in previous data augmentation methodologies for urban planning and design visualisation. It provides a novel tool to visualise a variety of data and scenarios within and around complex physical models, which would not be possible with projected augmentations or previous mobile AR applications. The cartographic applications developed have demonstrated that dynamic, interactive augmentations can enrich both the collaborative design process and the stakeholder participation process, enabling new formats of collaborative and interactive discussion.

While both the analogue and digital approaches to landscape architecture have limitations, we have shown that bridging this divide and enriching analogue with digital augmentations provides a rich set of new capabilities, allowing users to enjoy the strengths of analogue and digital approaches simultaneously.