1 Introduction

The built environment is increasingly integrated with information technology, driven by advances in wireless technology [1]. Progress has also been made in integrating geometry with information to simulate building performance. However, evaluating the ergonomic performance of end users, or their embodied experience, as they interact with these pervasive systems remains a challenge. In this paper, we first describe the current challenges for design visualization as architecture becomes increasingly pervasive. We then elaborate on the development of affordable immersive VR systems and their integration with off-the-shelf hardware and software to address these challenges.

2 Challenge: Visualizing Designs and Human-Environment Interactions

Human interaction with architecture has changed dramatically over the last few decades. Once static and non-responsive, building façades and interior surfaces have become interactive and responsive to stimuli from the environment (e.g., light, temperature, humidity) and from human behavior (e.g., via movement and auditory sensors). Over the same period, how humans interact with computers and information has changed significantly: traditional input devices such as the mouse or joystick are giving way to gesture-based interfaces and full-body motion sensors. Architectural visualization has likewise evolved from exploring geometric form to building information modeling, construction optimization, responsive architecture, and simulation of building performance with respect to material properties and energy, among others. Compared to these trends, recent efforts examining performance in architecture have given less attention to the notion of human agency. Visualizing interactions with the environment remains a challenge, whether simulating the behavior of special-needs populations (e.g., frail elderly) in ordinary settings or that of typical adults within pervasive environments.

In this paper, we begin by examining the development and validation of affordable virtual reality (VR) systems. This is the first step toward visualization technology that is experientially comparable (though not necessarily functionally or representationally equivalent) to the corresponding real world. We then embrace the notion of human agency and simulate human behavior by integrating motion capture with virtual reality, and we expand on this idea to develop intelligent agents for behavioral simulation in multi-actor and pervasive environments. We conclude by examining the use of psychophysiological measurement tools to empirically assess the quality of human-environment interaction. Throughout, we draw on theoretical ideas from architecture, HCI, and media psychology, and we discuss implementation strategies that borrow techniques from the entertainment industry, illustrated through the development of affordable, accessible virtual reality technology using off-the-shelf hardware.

3 Immersive Virtual Environments for Simulation and Assessment of Human-Environment Interactions

Over the last 14 years, through the development of three virtual reality labs based on the VR-desktop approach, we have sought to represent and interactively explore architecture that is becoming increasingly pervasive. Design students have used these labs to explore architecture as expressive elements focused on building form and the layers that comprise it, as identified by [2]: façade, interior, and structure. These VR systems initially supported not embodied interaction but interaction mediated through input devices such as joysticks or a wireless mouse. Rather than focusing on building information modeling (BIM) or building performance with respect to materiality and energy, our labs have taken a distinctly human-centric approach centered on human-environment interactions. In this section, we describe our conceptualization of these environments and give illustrative examples of our work, which evolved in three broad, partially overlapping stages.

3.1 Context: Undergraduate Design Studio and Immersive Visualization

While teaching studio courses for beginning architecture students, the authors have observed that students have difficulty visualizing space: they struggle to imagine both its true volume and its experiential qualities. Envisioning the experiential aspects of a space under design remains a challenge despite the photorealism now achievable in computer-generated renderings. We experience space while moving through it, from multiple points of view rather than a single static viewpoint, yet most architectural visualization tools are rarely at full scale and therefore require a mental leap by the designer to capture the true extent and experience of the space. Virtual reality, which draws its spatial paradigm from architecture and its narrative and navigational models from film and multimedia technologies, helps overcome this challenge and offers an entirely new way of seeing, inhabiting, and designing space. Although virtual reality systems are very useful for visualizing the experiential aspects of architecture, their prohibitive costs and operational complexity often make them challenging to use. We addressed this challenge in the development of our virtual reality labs and integrated our solutions with the existing design workflow, drawing on theories of representation as well as media psychology.

3.2 Phase-1: Experiential Congruence for Architectural Representation

Development of the IEL and the iLab. To bring immersive virtual reality within the reach of designers, we developed three iterations of the Immersive Environments Lab (IEL) at The Pennsylvania State University and, more recently, the Immersive Environments Lab (iLab) at the University of Missouri. While the technical implementations differ, both the IEL and the iLab are based on the desktop-VR approach [3], which uses off-the-shelf commodity computers and the familiar Windows desktop environment; this lowers the bar for entry and extends the reach of these virtual reality systems. Both environments use a multi-projector, rear-projected display driven by graphics workstations running stereo-enabled applications on the Windows operating system. The IEL uses passive stereo projection with three sets of dual projectors, while the iLab uses active stereo projection with 3-D projectors; a simplified sketch of the stereo geometry common to both is given below. See [3–5] for technical details of the IEL implementation and [6] for that of the iLab. While affordability, accessibility, and adaptability to existing workflows were important pragmatic considerations, we also drew on two important theoretical considerations, discussed after the sketch.
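In both labs, each eye is rendered from its own viewpoint against a fixed flat screen, which calls for an asymmetric (off-axis) viewing frustum per eye. The following is a minimal sketch of that geometry, not the labs’ actual rendering code; the screen dimensions, viewing distance, and interpupillary distance are illustrative values.

```python
import numpy as np

def off_axis_frustum(eye, screen_w, screen_h, near):
    """Asymmetric frustum bounds at the near plane for an eye at
    position `eye` (x, y, z) relative to the center of a flat screen
    lying in the z = 0 plane. Returns (left, right, bottom, top),
    ready to pass to a projection-matrix routine such as glFrustum."""
    ex, ey, ez = eye                   # ez = eye-to-screen distance
    scale = near / ez                  # similar triangles
    left = (-screen_w / 2 - ex) * scale
    right = (screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top = (screen_h / 2 - ey) * scale
    return left, right, bottom, top

# One frustum per eye: offset the viewer position by half the
# interpupillary distance (IPD) to produce the stereo pair.
ipd, viewer = 0.065, np.array([0.0, 0.0, 2.5])        # meters
for label, shift in (("left", -ipd / 2), ("right", ipd / 2)):
    print(label, off_axis_frustum(viewer + [shift, 0.0, 0.0], 3.0, 2.0, 0.1))
```

In a passive setup like the IEL’s, the two resulting images are shown simultaneously by separate, differently polarized projectors; in an active setup like the iLab’s, they are time-multiplexed by 3-D projectors and shutter glasses.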

Spatial Presence as a Design Goal for Virtual Environments. We conceptualized these virtual reality environments as technologies that enable spatial presence [7]. Spatial presence is commonly defined as the subjective experience of “being there” in a mediated environment while forgetting one’s immediate physical surroundings [8]. Presence-enabling technology such as virtual reality can help designers immerse themselves in a virtual space under design and assess its experiential qualities. Spatial presence is a multi-dimensional concept, and [9] identified a number of factors that can influence a sense of presence. [10] pointed out that the extent and fidelity of sensory information, as a function of screen size, resolution, and field of view, are important variables that influence the sense of presence. The design goals of our virtual reality systems also correspond to the self-location and action-possibilities dimensions of the spatial presence experience as later explicated by [11]. Though not driven entirely by empirical data, we were cognizant of these variables, in addition to navigability, while developing our systems. Early usability evaluations of the IEL received positive feedback from students, who cited stereoscopic projection, large screen size, and navigability as features that enhanced their design evaluation and communication [3] (Fig. 1).

Fig. 1. Design review using 3-D display on the right screen and drawings on the left in the first iteration of IEL development.

A Multi-Modal Approach to Virtual Reality Environments. During the installation of the first iteration of the IEL, we noticed an interesting phenomenon: students appreciated the immersive quality of the IEL experience but often used at least one of the three screens to present orthographic drawings or other two-dimensional images. This led us to refine our conceptualization of the role of virtual reality in architectural design, drawing from the literature on design cognition, particularly regarding the evaluative or critique phase of design. The emphasis was no longer on the immersive experience alone, but on enhancing the understanding of an architectural design for critique. We drew on the work of [12] and the need to facilitate connections between different aspects of a design, which are often communicated through different representations and via different modalities. We began approaching the lab as a multi-modal virtual environment in which immersive virtual reality belongs to a larger milieu of interactive multimedia tools. Further development of the lab took into consideration the role of digital tools in each stage of the design process, adaptability to the existing workflow, and issues of representation and perception. We developed a multimodal virtual environment prototype using VRML and HTML and evaluated its viability in the IEL [13]. Positive feedback for this prototype later informed the development of the iLab, where a more robust implementation was realized (Fig. 2).

Fig. 2. Design review at the iLab with 2-D and 3-D imagery.

Validation of the VR-Desktop Taking a Media Effects Approach. The media effects approach to architectural visualization builds on Sundar’s [14] approach to studying communication technology. Media effects research, for the most part, takes a quantitative approach, identifying media characteristics as causal variables and cognitive, affective, and behavioral responses as dependent variables. Applied to architectural visualization, the media effects approach has two important characteristics: it is variable-centered, conceptualizing visualization tools in terms of structural and content variables, and it emphasizes quantitative measurement of the dependent and mediating variables. We conducted systematic evaluations of the various display and interaction affordances of these systems and identified their relative contributions to spatial presence and spatial comprehension. These controlled experiments [15–17] found that display factors such as stereoscopy, field of view, and screen size had significant effects on spatial presence and comprehension. There were also significant interaction effects between these variables, showing the compensatory effect some of them have on one another. Two of our recent studies [17, 18] also found significant effects for navigability. These findings have important implications for the further development of VR systems, whether for large-screen displays or for head-mounted displays such as the Oculus VR.
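To make the variable-centered approach concrete, the sketch below fits a full-factorial model to synthetic data standing in for a 2 × 2 × 2 display-factor experiment. The factor and measure names (stereo, fov, size, presence) are hypothetical placeholders, not our actual instruments or datasets.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Synthetic stand-in for per-participant data from a 2x2x2 factorial
# design; in practice `presence` would be a questionnaire score.
rng = np.random.default_rng(1)
n = 160
df = pd.DataFrame({
    "stereo": rng.integers(0, 2, n),   # monoscopic vs. stereoscopic
    "fov":    rng.integers(0, 2, n),   # narrow vs. wide field of view
    "size":   rng.integers(0, 2, n),   # small vs. large screen
})
df["presence"] = (3.0 + 0.8 * df.stereo + 0.5 * df.fov + 0.4 * df.size
                  + 0.3 * df.stereo * df.fov + rng.normal(0.0, 1.0, n))

# Full-factorial ANOVA: main effects plus all interactions, so a
# compensatory relationship between display factors would surface
# as a significant interaction term.
model = ols("presence ~ C(stereo) * C(fov) * C(size)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```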

3.3 Phase-2: Embodiment and Improved Functional Isomorphism

Architects are primarily concerned with the design of static artifacts, yet the success of a designed environment depends on its affordances for human behavior. Human behavior in a given environment is shaped by cognitive intent, environmental affordances for action, and the behavior of others. Predicting environment-behavior interactions in familiar environments and conditions is relatively easy; predicting the behavior of users with physical (e.g., frail elderly) or cognitive (e.g., patients with dementia) disabilities is more difficult. After successfully implementing the immersive visualization features of the iLab, we focused on integrating motion capture technology with our VR environment. Beyond creating a greater sense of embodiment, our goal was an ergonomically accurate integration of human behavior in the real world with a virtual representation of a design under evaluation. We chose healthcare settings and environments for the disabled as our research context, given the complexity of their human-human and human-environment interactions (Fig. 3).

Fig. 3. Screenshot of motion capture data in the iLab for an assisted living scenario.

We have developed our 3-D simulation approach building on the human factors framework proposed by [19] for improving patient safety. We start with an accurate 3-D depiction of the physical environment and of human behavior in a given scenario (e.g., a medical procedure such as intubation) using virtual reality. Using motion capture technology, human performance is captured and integrated with the virtual reality model of the physical setting. This yields both precise quantitative data and accurate 3-D visual representations that help analyze performance, and design improvements can then be explored in the virtual prototype. By motion-capturing interactions with the environment, our work goes beyond recent work that uses virtual reality mock-ups [20–22] or motion capture [23] independently to explore healthcare designs. Our current work includes scenarios of assisted living activities, and our goal is to develop detailed virtual reality environments as well as digital characters that can be used to test an environment under design.
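As one illustration of the precise quantitative data such integration can yield, the sketch below derives a simple ergonomic measure, a joint angle over time, from three tracked marker trajectories. The marker names and random stand-in data are hypothetical; an actual analysis would read trajectories exported from the capture system.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at joint `b` formed by markers a-b-c
    (e.g., shoulder-elbow-wrist for elbow flexion); inputs are
    (n_frames, 3) arrays of marker positions."""
    u, v = a - b, c - b
    cos_ang = np.sum(u * v, axis=1) / (
        np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

# Hypothetical capture: three markers tracked over 500 frames.
rng = np.random.default_rng(0)
shoulder, elbow, wrist = (rng.normal(size=(500, 3)) for _ in range(3))
flexion = joint_angle(shoulder, elbow, wrist)
print(f"peak elbow angle: {flexion.max():.1f} deg")
```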

3.4 Phase-3: Agent-Based Simulations

Predicting human behavioral dynamics during a crisis, such as a rioting crowd or a building fire, is extremely difficult. Successful outcomes in a crisis depend on numerous factors, including adherence to building codes and the quality of trained emergency response teams, including EMT, fire, and hazmat personnel. Studying human-environment interactions during crisis scenarios, predicting behavior, and training personnel to deal with such situations is a challenging task, and existing approaches offer limited flexibility for new scenarios. Part of our ongoing effort is directed at simulating these complex interactions between humans and the environment. We are tackling the challenge by creating behavioral agents using motion capture tools and artificial intelligence (AI) authoring software from the entertainment industry, coupled with 3-D visualization techniques from architectural design. These virtual agents can then be used to evaluate the capability of a given space to accommodate specific behaviors. In simple terms, we are currently evaluating a three-step workflow to achieve our goal of simulating behavior.

The first step is to capture movement from live subjects using a high-fidelity, multi-actor motion capture toolkit and then to map the movement onto virtual characters using character animation tools. In a recently completed segment of this project, we successfully captured real-world human movement and mapped it onto a digital character model in an immersive virtual reality environment. This first attempt helped us evaluate the ergonomic affordances of spaces under design.
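In its simplest form, the mapping step copies each captured joint’s rotation onto the correspondingly named joint of the character rig. The sketch below shows that name-based retargeting with a hypothetical joint map; production animation tools additionally compensate for differences in bone lengths and axis conventions, which are omitted here.

```python
# Hypothetical mapping from capture-skeleton joints to rig joints.
CAPTURE_TO_RIG = {
    "Hips": "pelvis",
    "LeftUpLeg": "thigh_L",
    "LeftLeg": "shin_L",
    # ...one entry per joint of the skeleton
}

def retarget(frame_rotations):
    """frame_rotations: dict mapping capture joint -> quaternion
    (x, y, z, w); returns the same rotations keyed by rig joint."""
    return {rig: frame_rotations[cap]
            for cap, rig in CAPTURE_TO_RIG.items()
            if cap in frame_rotations}

frame = {"Hips": (0.0, 0.0, 0.0, 1.0), "LeftUpLeg": (0.1, 0.0, 0.0, 0.995)}
print(retarget(frame))   # {'pelvis': ..., 'thigh_L': ...}
```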

Our ongoing project uses a more sophisticated motion capture infrastructure, an OptiTrack system with an 18-camera array, that can capture (1) more nuanced movement, (2) interaction between multiple actors, and (3) interaction between an actor and a given object. In addition to capturing full-body movement, we are attempting to capture the nuances of hand movements and gestures by measuring finger flexure with a data glove. This also allows us to capture nuances of interactive behavior (e.g., touching and grasping objects). Through all this, we are developing a library of motion capture data. A wide array of human behavior simulations can then be generated from this library using intelligent-agent authoring software such as MassivePrime, which allows the creation of AI-enabled agents without advanced programming. An integral component of these agents is their set of brain nodes, which control their behavior. The brain nodes also have ‘senses’ such as vision and sound, allowing an agent to interact with its environment in a human-like manner and to adapt its behavior based on cues from the environment and from other agents. We are mapping data from our motion capture library into a set of actions for the programmable agent that can be triggered by its brain nodes. MassivePrime also allows multiple-pass simulations, in which the results of one run serve as input for the next, enabling the development of richer behavioral simulations (Fig. 4).

Fig. 4. MassivePrime simulation of crowd behavior during an emergency crisis scenario.
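Because MassivePrime brains are authored graphically rather than in code, the following Python sketch is only a conceptual analogue of the brain-node idea: a vision ‘sense’ gates which motion capture clip an agent plays, and the chosen clip drives its movement. The clip names, speeds, and hazard scenario are invented for illustration.

```python
import math
import random

MOTION_LIBRARY = {"walk": 1.4, "run": 3.5}   # clip name -> speed (m/s)

class Agent:
    """Toy agent whose 'brain' is a single rule over a vision sense."""

    def __init__(self, x, y, heading):
        self.x, self.y, self.heading = x, y, heading
        self.clip = "walk"

    def sees(self, point, fov=math.radians(120), max_dist=15.0):
        """Vision sense: is `point` within this agent's view cone?"""
        dx, dy = point[0] - self.x, point[1] - self.y
        bearing = (math.atan2(dy, dx) - self.heading + math.pi) % (
            2 * math.pi) - math.pi
        return math.hypot(dx, dy) < max_dist and abs(bearing) < fov / 2

    def step(self, hazard, dt=0.1):
        # Brain rule: if the hazard is visible, turn away and run.
        if self.sees(hazard):
            self.heading = math.atan2(self.y - hazard[1], self.x - hazard[0])
            self.clip = "run"
        speed = MOTION_LIBRARY[self.clip]
        self.x += speed * dt * math.cos(self.heading)
        self.y += speed * dt * math.sin(self.heading)

agents = [Agent(random.uniform(0, 10), random.uniform(0, 10),
                random.uniform(0, 2 * math.pi)) for _ in range(50)]
for _ in range(100):                 # one simulation pass
    for agent in agents:
        agent.step(hazard=(5.0, 5.0))
print(sum(a.clip == "run" for a in agents), "agents fled")
```

A multiple-pass workflow would record each run’s trajectories and feed them back as obstacles or social cues for the next run.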

4 Current Work and Future Directions

It is now common to integrate smart sensors and displays with building elements, and we can now directly interact with and manipulate information in various modalities as we navigate through space. This makes it necessary to look beyond established approaches to architectural visualization and representation and to develop new approaches that simulate embodied interaction not only with built elements but also with the accompanying multimodal information environment through a variety of interfaces. The next step in the development of our labs is therefore to refine our visualization approach so as to conceptualize buildings, their embedded information technology, and human-environment interactions holistically. We are adapting and enhancing our visualization tools to facilitate the design and evaluation of pervasive environments, and we are integrating our motion capture system with advanced virtual reality tools that can simulate the built environment as well as the information environment. We are also developing capabilities to evaluate interactions in these environments from both an ergonomic and a psychological perspective. Design follows a propose-critique-modify cycle [24], and the critique or evaluation phase is equally important in enhancing the quality of the overall design. Building on strategies laid out in [25], we are integrating psychophysiological and eye-tracking tools alongside our motion capture tools to enhance our measurement capabilities for evaluation; a sketch of this kind of multi-stream integration is given below. We hope that our efforts will make an important contribution to visualization strategies for the design and evaluation of pervasive environments.
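As a closing illustration of this measurement integration, the sketch below aligns hypothetical eye-tracking and psychophysiological streams to a motion capture timeline by linear interpolation, a prerequisite for analyzing gaze, arousal, and movement together. The sampling rates and signals are assumed, not those of our actual instruments.

```python
import numpy as np

def resample(t_src, values, t_target):
    """Linearly interpolate a (possibly unevenly) sampled signal
    onto a common timeline; time arrays are in seconds."""
    return np.interp(t_target, t_src, values)

# Hypothetical streams, each on its own clock: 120 Hz motion capture,
# an eye tracker with irregular timestamps, and a 128 Hz
# electrodermal activity (skin conductance) channel.
rng = np.random.default_rng(0)
mocap_t = np.arange(0.0, 10.0, 1 / 120)
gaze_t = np.sort(rng.uniform(0.0, 10.0, 600))
gaze_x = rng.random(600)             # horizontal gaze coordinate
eda_t = np.arange(0.0, 10.0, 1 / 128)
eda = rng.random(eda_t.size)

# Put everything on the motion capture timeline so gaze, arousal,
# and body movement can be analyzed frame by frame.
gaze_on_mocap = resample(gaze_t, gaze_x, mocap_t)
eda_on_mocap = resample(eda_t, eda, mocap_t)
print(mocap_t.size, gaze_on_mocap.size, eda_on_mocap.size)
```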