1 Introduction

In this paper we review digital technologies that can be used to study what the experiences of past peoples might have been. We focus on the use of immersive virtual reality (VR) systems to frame hypotheses about the visual and auditory experiences of past individuals, based on available archaeological evidence. Reconstructions of past places and landscapes are often focused on visual data, and we argue that we should move beyond this ocularcentric focus by integrating sound and other modalities into VR. However, even approaches that emphasize sound in archaeology, such as archaeoacoustics (Scarre & Lawson, 2006; Díaz-Andreu & Mattioli, 2015; Suárez et al., 2016), often retain a unimodal emphasis that limits how much we can understand of past peoples' sensory experience. We argue that phenomenological archaeology should emphasize seeing and hearing at the same time (i.e., multi-modal sensory integration). This is possible using immersive virtual reality systems that can engage users with both sight and sound simultaneously.

Phenomenological approaches to archaeology originate as critical engagements with archaeology and its objectivist, positivist assumptions (Brück, 2005). Objectivism and positivism are manifested in a range of empirical methods that archaeologists rely on to conduct their research, ranging from field excavations and surveys of sites, to absolute dating of artifacts, to computational reconstructions using GIS. However, we see no reason to reject these methods on phenomenological grounds (cf. Moyes, Chapter “The Life and Afterlife of Phenomenology in Archaeological Theory and Practice”, this volume). Empirical methods and objective data are necessary for archaeological research because the interpretations archaeologists make about the past are in many ways based on what is observable. They enable present and future archaeologists to make what they excavate, survey, and observe in the field presentable to others in support of their interpretations about the past. And because excavations, surveys, and other methods are often destructive, objective documentation ensures that later researchers can continue to study a site or feature that has already been excavated or surveyed.

These objective methods for reconstructing the past are usefully supplemented by phenomenological and embodied descriptions of sites, landscapes, and artifacts. Such descriptions range from identifying visibility between places and spaces on the landscape (Gillings, 2012) to reconstructing what could be seen, heard (Primeau & Witt, 2018), touched (MacGregor, 1999), or otherwise engaged with through the senses. They may also assist in understanding the meaning and use of an artifact or landscape from the perspective of a particular studied culture, such as how people responded to it emotionally (Houston et al., 2006). The classic examples come from British prehistory and are exemplified by Christopher Tilley’s solitary strolls through the landscape (Tilley, 1994). Subsequent work was in some ways critical of Tilley but added many more dimensions of phenomenological description (Thomas, 2001). Subjective experience, for its part, is subject to distortion, storytelling, and bias. Thus, following Moyes (Chapter “The Life and Afterlife of Phenomenology in Archaeological Theory and Practice”, this volume) and others, we believe that a combination of approaches is the best way to reconstruct past experience.

In this paper we focus specifically on the possibilities of phenomenological reconstruction afforded by emerging technologies that allow a present-day subject to experience a scientifically reconstructed, multi-modal approximation of what life might have been like at a past site. We focus on three technologies: GIS, 3D mapping, and VR/AR. Within the last category, VR/AR, we focus on four examples that demonstrate how these technologies can advance an understanding of experience in the past. (1) Lercari and Busacca use VR to produce immersive visualizations of the archaeological site of Çatalhöyük in order to interpret and better understand the complex spatial relationships among its structures (Lercari & Busacca, 2020). (2) Graham et al. show how AR can enhance the in-situ experience of archaeological landscapes through the introduction of sound, which may offer a more immersive experience when integrated with vision (Graham et al., 2019). (3) The MayaCityBuilder project illustrates how sound in combination with vision can enhance VR and move us closer toward a more synesthetic experience of the past (Goodwin & Richards-Rissetto, 2020). (4) Barreau et al. show how the simulation of an eighteenth-century merchant ship is significantly more immersive when visual elements are enhanced by the introduction of sonic elements and a soundscape (Barreau et al., 2015).

The next generation of immersive VR/AR is evident in the powerful WAVE, which has the potential to integrate sight and sound into an affective and immersive experience (Lercari et al., 2016). Digital representation methods offer a path forward for enhancing experiences of artifacts and landscapes by allowing for rich and immersive engagement with objects (Di Franco et al., 2015), though it is important to recognize that they lack authenticity insofar as they impose their creators’ particular interpretation of the world (Gillings, 2005). Nonetheless, by expanding our bodies’ sensorial capabilities, digital reconstructions can still produce sensorial experiences and act as an extension of our bodies, allowing for a better understanding of the sensory experience of past people (Hamilakis, 2014). We argue that archaeological visualizations are important, but that we need to move beyond the ocularcentrism typical of digital reconstructions through the addition of sound and the consideration of hearing. Rather than create a dichotomy between hearing and seeing, however, we need to emphasize seeing and hearing at the same time: privileging one sensory modality at the expense of others is not an effective way to conduct this research. Many archaeological reconstructions of past places and landscapes are silent, and there is room to address this absence through immersive virtual reality systems that can engage users with both sight and sound. We also contend that the introduction of virtual reality into archaeological phenomenology has the potential to counter criticisms of ocularcentrism by incorporating multiple sensory modalities, beyond just the visual, within reconstructions.

2 Survey of Methods Used to Reconstruct Past Experiences

Three methodologies currently use digital technology to reconstruct past sensory experiences: (1) Geographic Information Systems (GIS); (2) 3D mapping; and (3) Virtual and Augmented Reality (VR and AR). After a brief review of each technology, we consider how these methods have been critiqued on phenomenological grounds, then describe how phenomenological descriptions can supplement work in these areas rather than undermine or replace traditional methodologies. We especially emphasize the potential of VR and AR as methods that facilitate multi-sensory integration in reconstructed past experience.

GIS refers to the use of integrated computer systems to analyze landscapes and other spatial data in order to create a comprehensive picture of an environment. It can be used to analyze spatial patterns of movement, visibility, and even sound propagation, as well as environmental factors such as vegetation (Kosiba & Bauer, 2013; Gillings, 2012; Primeau & Witt, 2018; Landau, 2015). Patterns identified with a GIS can then be compared across sites in order to find regularities in sensory effects (Brück, 2005). For example, viewshed analysis determines what can or cannot be seen between fixed points on the landscape (Howey & Brouwer, 2017). Another example is least cost path (LCP) modeling, which determines the optimal path between two points on the landscape by accounting for topography and features that potentially impede movement.
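
To make these analyses concrete, the sketch below illustrates in simplified form how a line-of-sight (viewshed-style) check and a least cost path computation can be carried out over a digital elevation model. The toy elevation grid, cell size, observer height, and slope-based cost weighting are illustrative assumptions rather than parameters from any of the studies cited above; the least cost path uses scikit-image's route_through_array.

```python
import numpy as np
from skimage.graph import route_through_array  # scikit-image, assumed available

# Toy digital elevation model in metres; a real analysis would load a DEM raster.
dem = np.array([
    [100, 102, 105, 110, 118],
    [101, 103, 108, 115, 125],
    [102, 106, 112, 122, 135],
    [104, 110, 118, 130, 145],
], dtype=float)
cell_size = 30.0  # metres per cell (assumed)

def line_of_sight(dem, observer, target, observer_height=1.6):
    """Is the target cell visible from the observer cell? (simplified check)"""
    (r0, c0), (r1, c1) = observer, target
    steps = max(abs(r1 - r0), abs(c1 - c0))
    eye = dem[r0, c0] + observer_height
    end = dem[r1, c1]
    for i in range(1, steps):
        t = i / steps
        r, c = round(r0 + t * (r1 - r0)), round(c0 + t * (c1 - c0))
        # Compare the terrain height with the height of the sight line here.
        if dem[r, c] > eye + t * (end - eye):
            return False  # terrain blocks the view
    return True

# Least cost path: steep slopes are expensive to cross.
slope = np.hypot(*np.gradient(dem, cell_size))
cost = 1.0 + 10.0 * slope  # arbitrary slope weighting (assumed)
path, total_cost = route_through_array(cost, (0, 0), (3, 4), fully_connected=True)

print("Target visible:", line_of_sight(dem, (0, 0), (3, 4)))
print("Least cost path:", path, "total cost:", round(total_cost, 2))
```

A full viewshed simply repeats the line-of-sight test from one observer cell to every other cell in the raster; intervisibility between two fixed points is the single test shown here.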

A phenomenological critique of GIS is that it assumes an objectivist, Cartesian model of space. Moreover, a people's experience of landscape is not solely determined by material culture, and GIS struggles to deal with imprecision and uncertainty; it is perhaps better suited to physical measurements than to the intangible social world (Bodenhamer et al., 2010).

While we agree that GIS has these limitations, it would be hard for archaeologists to abandon well-established GIS-based spatial methodologies, which can integrate, analyze, and visualize vast amounts of data in differing formats. In archaeology, the ability to integrate and analyze data is fundamental to discovering spatial patterns hidden within the landscape. For this reason, Brück (2005) argues that GIS and 3D mapping cannot be dismissed, given their value in identifying symbolic patterns on the landscape. GIS should be viewed as a supplement to, rather than a replacement for, phenomenology. Geospatial techniques can assist phenomenological approaches that emphasize the creation of space through navigation. Phenomenological approaches, for their part, are grounded in the archaeologist's own observation and description of places; because experience is mediated by a broadly shared human physiology, such description can support a better understanding of past peoples' experience (Primeau & Witt, 2018).

A second digital technology used to reconstruct past experience is 3D mapping. Typically presented as a visualization of 3D models on a two-dimensional screen, 3D mapping can supplement the abstract perspective of 2D maps, allowing researchers to understand movement, visibility, and acoustics within an environment, but with the advantage of a three-dimensional framework that better resembles the physical characteristics of a real environment. 3D mapping methods such as structure-from-motion digital photogrammetry and laser scanning are effective for replicating the visual elements of an object, such as size and shape (Eve, 2018). Digital methodologies for the creation of 3D models not only enable new means of interpreting artifacts but also enable the study of artifacts and things as multidimensional rather than as mere images (Papadopoulos et al., 2019). Conventional representations obscure the three-dimensional qualities of artifacts that enable embodied, multisensorial experience of those artifacts; digital methods can foreground those properties and evoke embodied, multisensorial, affective experience.
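
As a minimal sketch of how a 3D model supports study beyond the image, the example below loads a hypothetical photogrammetric mesh with the trimesh library and reports basic size and shape properties; the file name and the choice of library are assumptions for illustration, not part of the workflows cited above.

```python
import trimesh  # assumed available; a commonly used mesh-analysis library

# Hypothetical photogrammetric scan of an artifact (file name is illustrative).
mesh = trimesh.load("artifact_scan.obj", force="mesh")

# Size and shape properties that a photograph alone cannot provide.
length, width, height = mesh.bounding_box.extents  # in the model's units
surface_area = mesh.area
volume = mesh.volume if mesh.is_watertight else None  # volume needs a closed mesh

print("Extents:", length, width, height)
print("Surface area:", surface_area)
print("Volume:", volume if volume is not None else "mesh is not watertight")
```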

Here critics have charged that the emphasis of 3D mapping is ocularcentric. A 3D model on a two-dimensional screen continues to focus strictly on the visual, and there is a propensity to focus on what things looked like rather than how they were used and experienced. Similarly, 3D printed material fails to replicate the actual texture, color, and weight of the original. The smells, sounds, and feel of an object are lost in these 3D reconstructions. Stuart Eve (2018) argues that 2D and 3D representations leave out vast amounts of multi-sensory information about objects.

However, as with GIS, 3D mapping retains its importance as a means of providing objective data that can supplement and constrain phenomenological descriptions. 3D mapping produces objective representations that can then be subjectively observed and experienced. These representations can be enriched by phenomenological descriptions, and phenomenological descriptions can be strengthened by interaction with a 3D map. Neither approach fully captures the richness of artifacts, sites, and landscapes; instead they supplement one another in a mutually constitutive manner. Objective 3D maps would be sterile and lifeless without the supplementation of phenomenological approaches, and phenomenological approaches would be merely subjective reports without the support of objective data such as 3D mapping.

A final pair of technologies that can be used to reconstruct past experiences are Virtual Reality (VR) and Augmented Reality (AR). Virtual reality produces an immersive experience using virtual objects inside a virtual environment (as in video games), while augmented reality places virtual objects inside a physical environment (as in Snapchat lenses). Both make use of virtual objects, but augmented reality overlays these virtual objects on a real physical environment. Virtual environments and objects can be engaged with through web browser plugins such as Cortona3D, game engines like Unity3D, or even viewed in stereo 3D in a CAVE (Sanders, 2014). Cave Automatic Virtual Environments (CAVEs) are immersive VR systems that use a number of screens to produce stereo images (Knabb et al., 2014). Polarized glasses worn by the user produce a 3D stereoscopic perspective that mimics how people see in three dimensions in the real world. A CAVE system can display a number of different data types, ranging from laser-scanned models to LiDAR point clouds.
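
The stereoscopic principle behind such displays is simple geometry: the scene is rendered twice from viewpoints separated by roughly the interocular distance, and the glasses route each image to the corresponding eye. The sketch below computes left- and right-eye positions from a tracked head position; the values are illustrative and do not describe any particular CAVE installation, which would additionally apply an asymmetric (off-axis) projection for each screen.

```python
import numpy as np

def stereo_eye_positions(head_pos, view_dir, up=(0.0, 1.0, 0.0), ipd=0.064):
    """Offset a tracked head position into left/right eye positions.

    ipd: interocular distance in metres (about 64 mm is a common average).
    """
    forward = np.asarray(view_dir, dtype=float)
    forward /= np.linalg.norm(forward)
    # "Right" direction is perpendicular to both the view direction and up.
    right = np.cross(forward, np.asarray(up, dtype=float))
    right /= np.linalg.norm(right)
    head = np.asarray(head_pos, dtype=float)
    half = 0.5 * ipd
    return head - right * half, head + right * half

# A viewer 1.7 m tall looking down the -z axis (values are illustrative).
left_eye, right_eye = stereo_eye_positions((0.0, 1.7, 0.0), (0.0, 0.0, -1.0))
print("Left eye:", left_eye, "Right eye:", right_eye)
```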

Augmented Reality enhances a real-world environment through the addition of computer-generated sensory input such as sound and visuals (Di Franco et al., 2015). It enhances one's perspective of reality with specific virtual items rather than attempting to reproduce a completely virtual reality. Augmented Reality offers new means of visualization, data analysis, and human engagement with material objects, and new forms of interaction that help create rich and immersive experiences. As a means of presenting the past, it improves the sensorial experience people have with the past. Such systems are also described as mixed reality, which exists on a continuum from reality itself through AR to VR; mixed reality can also be a hybrid combination of virtual reality (virtual objects in a virtual world) and augmented reality (virtual objects in a physical world), in which users can physically interact with virtual objects.

A critique of mixed reality systems is that they produce experiences that fail to correspond to any existing reality, e.g. “a landscape that has never actually existed” (Cummings, 2010). They often represent a person's subjective mental interpretation of a place or object rather than an objective recreation. In addition, it is not yet possible to produce representations or reconstructions that offer people the same sensory experience as the real thing (Galeazzi, 2018). Nonetheless, VR and AR provide essential value (Gillings, 2005). We should accept their creative aspects but find ways to integrate them into phenomenology and other established methods of archaeology. Integration of VR and AR will require critical examination of where they do or do not fit into already established methods and theory, and may require revamping existing methods and theory to better accommodate new and emerging technologies like VR and AR. Like the introduction of photography, which required modifications to methods and theory and the development of new knowledge to accommodate its use, VR imposes a particular way of seeing on the viewer that requires culturally specific knowledge to understand. It is important to consider nontraditional forms of landscape representation that offer new approaches to experiencing and dwelling in landscapes (Cummings, 2010).

VR and AR, as well as GIS and 3D mapping, can provide immersive engagement with the past through reconstructions that engage multiple senses rather than just one. Sight and sound are both important to perception (Díaz-Andreu et al., 2017), and we cannot consider the senses as separate from each other. In order to understand the experiences of people in the past, digital technology can be used to integrate multiple senses into a more immersive experience that helps us understand how people in the past engaged with the world through their senses. A framework of multi-sensorial experience expands the world of material things and can better connect things to experience (Hamilakis, 2014). Even digital reconstructions can expand and extend our bodies' sensorial capabilities to better understand the sensory experiences of past people. New technologies are one such approach, offering an alternative to traditional text-based narratives. The more experiences a representation enables, the more possibilities for alternative interpretations emerge.

3 Examples of Virtual and Augmented Reality Research in Archaeology

In this section we describe a number of examples that show how VR and AR systems can be used to produce rich simulations that facilitate phenomenological reconstruction. VR and AR can produce immersive simulations because they integrate multiple sensory modalities into a single virtual environment. VR and AR systems also integrate multiple datasets, finds, and artifacts for reconstructions that account for change over time at a site. Lercari and Busacca (2020) show how VR and AR can be utilized to provide reliable reconstructions of archaeological sites that are grounded in datasets produced with scientific rigor. This means that VR and AR can be used to investigate the experiences of past people within a space, because the reconstructions are based on archaeological data rather than conjecture or guesswork, and can approximate what past people in those spaces experienced and how that experience changed over time. Graham et al. (2019) illustrate how AR can utilize sound, and the auditory perception of those sounds, to produce a more immersive experience of archaeological sites and landscapes that goes beyond unimodal reconstructions based solely on visual information. Goodwin and Richards-Rissetto (2020) demonstrate how VR and other digital approaches can enhance our understanding of sight and sound in the past through novel approaches to data analysis, integration, visualization, and interaction: virtual environments act as a catalyst for interpreting past peoples' experience, and interactive 3D visualizations embedded with sounds of the past allow for multi-sensory representation. Shemek et al. (2018) show how immersive VR technology enables meaningful embodied engagement with virtual reconstructions of cultural heritage that has been altered over time; they show how a multi-sensory interactive environment can integrate multiple multimodal data sources within a single virtual environment for an immersive and affective experience of a past place, in this case the Renaissance-era studiolo (study) of Isabella d'Este. Barreau et al. (2015) bring together historical documents and archaeological knowledge to produce a scale 3D model of an eighteenth-century ship; not only does this 3D model offer an immersive visual experience, but the authors also integrate a soundscape into the reconstruction that helps in understanding what life was like on board the ship when it actually sailed the ocean.

Lercari and Busacca (2020) utilize immersive VR to create archaeological visualizations that assist in the interpretation of behavior and a better understanding of site chronology. The 3D reconstructions they produced of the Neolithic site of Çatalhöyük offer a multi-temporal look at the sequence of construction over time. These reconstructions provide visual representations of a complex archaeological record and enable a better understanding of the history of buildings at Çatalhöyük. Lercari and Busacca produce archaeological visualizations that reconstruct multiple phases of construction at the site in order to better understand the links between its different areas. Their interactive virtual reconstructions visualize patterns of continuity and change evident in the archaeological features excavated at the site. This fits into the cyber-archaeology paradigm proposed by Forte (2016) by utilizing archaeological visualizations to help contextualize subtle spatial continuity and history-making at Çatalhöyük. These visualizations stimulate both discussion and interpretation by enabling the visualization of multiple strata, finds, and datasets. Connections that were not identifiable in a standard 2D plan or photograph are made visible in 3D reconstructions built in the Unity3D game engine or through interpretative infographics. Visualizations assist in the rendering and reconstruction of past places: they provide a clear representation of a complex construction sequence at Çatalhöyük by showing how building practices were replicated or modified across the entirety of the site's stratigraphy.

Immersive virtual reality can also incorporate hearing, where evidence permits, through sonifications and auralizations. The benefits of incorporating other sensory modalities are also evident in Augmented Reality, as Graham et al. (2019) demonstrate.

Graham et al. (2019) explore how seeing the past goes beyond just the sense of sight. They consider how AR can help bring the past to life through interaction with the present, but they call attention to a key issue with many current AR approaches: they are ocularcentric and exclude or underutilize sound. Graham et al. (2019) argue that these vision-focused AR approaches create a break in presence that cancels out the immersion AR offers. In order to prevent such a break, focusing on hearing the past can be more effective, and more affective, than a singular focus on sight. Their work illustrates how past worlds can often be better heard than seen, yet sound and hearing remain underexplored in comparison to sight and seeing. While the introduction of past sounds into the present produces an anachronistic space that could potentially disrupt an immersive experience, the use of these sounds in AR also prompts more in-depth cognitive examination by the person experiencing them.

Historical sounds evoke emotional responses in people that alter their understanding and memory of past events. Sound plays a major role in how memories are recollected and how they can be altered through new experiences. Graham et al. (2019) demonstrate the importance of paying attention to sound in the present as much as in the past: in order to understand the experience of sound in the past, we have to consider how people in the present experience those same sounds. This will require additional research that challenges vision as the primary sense in our research and reconstructions of the past. The three projects described below demonstrate the importance of going beyond vision as the primary sense in reconstructions using immersive virtual reality.

The MayaCityBuilder project uses an immersive VR headset to combine vision with sound, facilitating an embodied experience in order to examine potential locations of ritual performance and to determine the placement of participants in these events (Goodwin & Richards-Rissetto, 2020). GIS and 3D technology were used to model sound propagation and reverberation in the urban core of ancient Copán as a case study, with the goal of creating a synesthetic experience of ancient Maya cities. Ancient Maya culture and architecture provide an excellent opportunity to investigate the potential of GIS and VR modeling to better understand the role of the built environment in producing multi-sensory experiences (Houston et al., 2006).
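
A first approximation of the kind of outdoor sound propagation such a study models combines geometric spreading (roughly 6 dB of loss per doubling of distance from a point source) with frequency-dependent atmospheric absorption. The sketch below applies that approximation; the source level, distances, and absorption coefficient are illustrative assumptions, not values from the Copán study.

```python
import math

def received_level(source_db_at_1m, distance_m, absorption_db_per_m=0.005):
    """Approximate received level of a point source heard outdoors.

    Geometric spreading: 20 * log10(distance) gives ~6 dB of loss per doubling.
    absorption_db_per_m: frequency-dependent atmospheric absorption (assumed value).
    """
    spreading_loss = 20.0 * math.log10(max(distance_m, 1.0))
    absorption_loss = absorption_db_per_m * distance_m
    return source_db_at_1m - spreading_loss - absorption_loss

# E.g. a drum of ~100 dB (measured at 1 m) heard across a plaza (values assumed).
for d in (10, 50, 100, 250):
    print(f"{d:4d} m: {received_level(100.0, d):5.1f} dB")
```

A GIS-based analysis extends this logic across an entire terrain and architectural model, so that reflections, barriers, and topography modify the simple distance-based estimate.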

This case study from Copán illustrates the powerful role digital technologies can play in understanding Classic Maya views of the body, sensations, and experiences. While the exact experiences of the ancient Maya are impossible to replicate, we can investigate the variables that affected their sensory experience and so begin to move forward in our phenomenological understanding of the past. These variables are evident in the places, architecture, and material culture of the ancient Maya that archaeologists study, and by understanding them the reconstructions archaeologists produce can be made more affective and immersive.

One of the projects that makes up IDEA (the Isabella d'Este Archive) is the Virtual Studiolo (Shemek et al., 2018). This project produced an immersive VR reconstruction of the rooms of the Italian Renaissance-era Palazzo Ducale of Mantua that housed Isabella d'Este's courtly collection of instruments, antiquities, and artwork. Extensive cultural heritage is made accessible as an immersive experience through museum or CAVE spaces. Photogrammetry, 3D technology, and digital animation are used to create an immersive VR experience by reuniting a collection of artifacts that is now dispersed across museums. The creators allow users to interact with the Virtual Studiolo in both analytical and creative ways, calling attention to scholarly understanding of the studiolo while also allowing for interaction, experimentation, and other forms of engagement, creating a meaningful learning experience that mixes research and game.

In addition to visual elements, acoustic elements were recreated according to the historical record. The creators recognize that their project is a hypothetical and engaging remix; it alters and deviates from the original, but in a way that promotes new ways of understanding d'Este and the broader Renaissance culture of which she was a part. A 3D virtual reconstruction can be connected to datasets of documents and vice versa. While it is not an exact reconstruction of the original, the immersive experience it offers enables one to test a variety of hypotheses about display and curation during the Renaissance.

Another project produced an immersive VR reconstruction of Le Boullogne, an eighteenth-century French merchant ship (Barreau et al., 2015). In order to understand daily life and experience aboard the vessel, Barreau et al. used historical documents, naval architecture plans, and archaeological data to produce a 1:1 scale 3D model of the ship. This model was then employed within a VR simulation of the ship sailing on the ocean. Beyond the animated buoyancy of the waves, there was also an emphasis on reconstructing a sonic environment that mixed spatialized audio, such as birds flying by, with a global soundscape of ocean and wind noise. Through this immersive visualization, life on board the vessel can be better understood by historians.
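
The logic of such a mix can be sketched simply: each localized source is attenuated with distance and panned according to its position relative to the listener, while the global soundscape is added at a constant gain. The example below shows only this gain and pan calculation; the source positions, levels, and panning law are illustrative assumptions and do not describe the Le Boullogne implementation.

```python
import math

def spatial_gains(source_xz, listener_xz, ref_distance=1.0):
    """Inverse-distance attenuation plus simple equal-power stereo panning."""
    dx = source_xz[0] - listener_xz[0]
    dz = source_xz[1] - listener_xz[1]
    distance = max(math.hypot(dx, dz), ref_distance)
    gain = ref_distance / distance                  # quieter with distance
    pan = math.atan2(dx, dz) / math.pi              # roughly -1 (left) .. +1 (right)
    theta = (pan + 1.0) * math.pi / 4.0             # map to 0 .. pi/2
    return gain * math.cos(theta), gain * math.sin(theta)

# A localized source (e.g. gulls off the bow) plus a constant-gain global bed
# of ocean and wind noise; positions and levels are illustrative assumptions.
gull_left, gull_right = spatial_gains(source_xz=(4.0, 10.0), listener_xz=(0.0, 0.0))
bed_gain = 0.3
print(f"left = {gull_left + bed_gain:.2f}, right = {gull_right + bed_gain:.2f}")
```

Game engines and audio middleware perform this kind of calculation per source and per frame, updating gains as the listener moves through the reconstructed ship.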

Static and strictly visual VR reconstructions should no longer be the end point for archaeologists. Mixed reality allows reconstructions that visualize the past in a manner that is both immersive and interactive, creating new experiences that offer new ways to interpret and analyze data. Barreau et al. (2015) argue that immersion within a 1:1 scale interactive environment enables better evaluation of the role of material culture in past societies. They have produced a model and simulation that bring historical sources into a virtual reality environment, where the architecture of the ship and the spaces within it can be assessed through researchers' own movement and perception, creating a more immersive experience overall.

4 Future Directions

A full multi-sensory archaeological record is impossible. We are coming closer to that ideal than ever before, but there is still a great deal more to be done. Researchers need to acquire further knowledge of tools and technologies, sensory capture techniques, and ways of providing the computing infrastructure needed to store and share sensory digital objects (Eve, 2018). Standards and policies need to be developed for the production and sharing of multi-sensory information. Virtual Reality has the potential to produce powerful visualizations, but in order to reconstruct past phenomenology it must go beyond visual experience. The examples in this paper demonstrate that immersive VR works most effectively when it integrates multiple lines of sensory data that are not strictly visual. Immersive VR will be more engaging and affective as a multi-modal experience rather than an experience of just one sensory modality: it enables a means of engagement that manifests the complexity, multidimensionality, and multi-sensorial nature of the material world. The past is not meant to be merely seen or heard, but rather seen and heard (and felt, and so on). An appropriate balance and integration among the senses should be the goal of future reconstructions.

One area where the potential of VR is especially evident is the UC Merced WAVE (Wide Area Visualization Environment), an immersive VR system that can be used to integrate available information from a variety of sources to produce new knowledge about cultural heritage and past phenomenology. The WAVE is made up of twenty 55-inch OLED displays arranged in a half-pipe (Lercari et al., 2016), which allows for immersive virtual simulations of “the 3D, topology, volume, texture, materials, and context of a monument or heritage site” and “fosters collaborative analysis and interpretation when used in a virtual reality platform” (Lercari et al., 2016, p. 4). These powerful visualizations and virtual reality devices can be integrated with analytical tools for spatial analysis, and 3D imagery can be viewed without distortion from any point in the room (Knabb et al., 2014). Future work will focus on integrating sound with immersive VR: the WAVE will be able to integrate multiple datasets into a single immersive environment, making it an ideal system for such demonstrations.

Augmented reality systems enable a richer combination of real-world and digital elements than VR alone does. Virtual models can be merged with the physical environment to create an integrated experience that takes users away from the computer and almost directly into the field. This means that a computational approach can be combined with a phenomenological approach of embodied experience in the field (Eve, 2012).

To conclude, an embodied reconstruction of past experience can be more than just a visual, or even an audio-visual, experience. It can include multiple sensory modalities that better reflect how people, past and present, engage with the world around them. People do not experience the world solely through sight or hearing, but rather through a multi-modal integration of all the senses, and more of these modalities need to be considered and integrated when creating AR experiences. Archaeologists must do more than describe what they study; they must also represent it. Yet these representations are primarily visual and/or verbal. In order to enhance people's understanding of and engagement with the past, digital tools and methods such as VR and AR are needed to represent it in a manner that engages multiple senses. Using digital tools and methods we can extend our own sensory capabilities and develop an ability to understand the past through multi-modal and affective engagement with it.