1 Introduction

Archaeologists began using computers in the 1950s and 1960s, contributing to the subsequent rise of New Archaeology (Cowgill 1967; Gardin 1971). However, only since the turn of the twenty-first century, with the emergence of many off-the-shelf software packages, have digital technologies played a larger role in shaping archaeological practice and scholarship. When archaeologists began employing Geographic Information Systems (GIS) in the 1980s, GIS had a non-GUI interface requiring users to code in GRASS (Geographic Resources Analysis Support System) with a Unix shell (https://grass.osgeo.org/home/history/). In 1981, ESRI (Environmental Systems Research Institute) released ARC/INFO, the first commercial GIS, but it also had a non-GUI interface requiring users to code with the ARC Macro Language (AML). In 1992, ESRI released ArcView, providing a user interface for GIS that did not require coding. However, widespread adoption of GIS did not occur until the early 2000s with the introduction of ArcGIS Desktop in 1999 (ESRI), along with a push to provide low-cost and free educational licenses for the software and greater access via the internet. Post-processual archaeology and the addition of extensions such as Spatial Analyst and 3D Analyst moved digital archaeology beyond data management and mapping towards human-centred approaches by affording greater analytical functionality for non-expert users (i.e. coding was no longer an essential skill) (Llobera 2000, 2001, 2003; Lock 2000; Wheatley and Gillings 2000; Wheatley 1993, 1995). By the early 2000s, developers began work on a variety of open-source GIS solutions, including Quantum GIS/QGIS and JUMP/OpenJUMP, both released in 2002, and gvSIG and SAGA, both released in 2004. By 2009, many archaeologists had become familiar with the open-source QGIS, which provided greater access to GIS.

In the 1990s a few archaeologists began to use Virtual Reality (VR) (e.g. Barceló et al. 2000; Dingwall et al. 1999; Forte and Siliotti 1997a, b; Reilly 1991). While the purpose of VR is to create interactive 3D reconstructions, in the 1990s and early 2000s archaeologists primarily employed VR to create 3D reconstructions that were viewed as static 2D illustrations or non-interactive animations (Bentkowska-Kafel et al. 2012). In 2005, Unity, a 3D gaming engine, was released, making the development of 3D interactive content more accessible. Unity currently works on over 25 platforms, and along with the rapid software and hardware development of immersive virtual reality (e.g. Oculus Rift, Samsung Gear VR and HTC Vive headsets), archaeologists have increasingly used VR to intentionally create interactive 3D environments that facilitate embodied experiences of archaeological sites and landscapes (Forte and Bonini 2010; Forte and Gallese 2015; Richards-Rissetto et al. 2012, 2014; Rubio-Tamayo et al. 2017). Recently released software and hardware invite archaeologists to experiment with approaches that enhance embodiment in VR.

In this vein, cyberarchaeology integrates computer science, engineering, and archaeology (Levy et al. 2012) to simulate potential past environments in 3D, affording multisensory interaction with data in VR. Cyberarchaeology is therefore leading to new methods and knowledge that are beginning to influence archaeological practice (Forte 2010, 2016b; Forte and Pietroni 2009; Forte and Siliotti 1997a, 1997b; Jones and Levy 2018; Smith and Levy 2014). While archaeologists have carried out many visibility analyses using GIS (e.g. Doyle et al. 2012; Howey and Burg 2017; Kosiba and Bauer 2013; Lake and Woodman 2003; Lambers and Sauerbier 2006; Richards-Rissetto 2010, 2017a, b; Wernke et al. 2017; Wheatley 1995), they have only recently begun performing acoustic analyses (e.g. Goodwin and Richards-Rissetto 2020; Lake 2013; Primeau 2022; Primeau and Witt 2018; Witt and Primeau 2019; Zalaquett 2010; Zalaquett Rock 2015). Other scientists have explored acoustics using technologies outside of GIS and VR (e.g. Azevedo et al. 2013; Díaz-Andreu et al. 2017; Iannace et al. 2014; Kolar 2017; Liwosz 2018; Loose 2008; Reznikoff 2008; Watson and Keating 1999). Despite these advancements, there are very few computational and experiential multisensory studies of the past; however, archaeologists are increasingly integrating complex heterogeneous datasets into various digital technologies, moving us towards human-centred analyses that expand understanding of the past (Jones and Levy 2018).

2 Research Overview

Bringing together cyberarchaeology, geospatial modelling and an immersive embodied experience, we engage in a multisensory study of vision, sound, and movement in archaeological landscapes using GIS and VR. We initially represent the ancient landscape as 2.5D GIS data created using analogue maps and airborne LiDAR (von Schwerin et al. 2016). Next, we integrate these GIS data along with 3D models acquired via terrestrial laser scanning and photogrammetry as well as 3D Computer Aided Design (CAD) models to simulate built and natural components in VR. Using both GIS and VR, we investigate the ancient Maya landscape from a multisensory and multiscalar perspective using computational and embodied approaches to construct spatial narratives of potential past experiences. Computational methods in GIS generate viewsheds and soundsheds, and acoustic tools allow us to process sound data from the field for integration into VR, creating multisensory experiences with spatial sound using an immersive headset and touch controllers. Our methods and interpretations employ theory-inspired variables from proxemics (Hall 1966; Moore 2005; Gillings and Wheatley 2020) and semiotics (Buchler 1978; Richards-Rissetto 2010, 2017a, b) to develop a methodological framework in which different scales of representation can be defined to specifically target a variety of sensory perceptions.

The case study simulates the late eighth and early ninth-century landscape of the ancient Maya city of Copán to investigate from a multisensory perspective the role of landscape in facilitating movement, sending messages, influencing social interaction, and structuring cultural events (Richards-Rissetto 2010, 2012, 2017a, b; Richards-Rissetto and Landau 2014). While most sensory analyses focus on either individual buildings or non-urban landscapes, we employ GIS and VR to bring together both built and natural elements of the landscape to explore the combined cultural and ecological impacts on multisensory experience. Combining urban structures with vegetation is particularly important for the Ancient Maya given that they viewed settlements as “populated earth”, or kahkab (Marcus 2000), likely representing Maya cities as urban agrarian places with urban gardens and orchards intermixed with residences (Ashmore 2004; Chase et al. 2016; Isendahl 2012; Isendahl and Smith 2013; Fletcher 2009; Graham 2008).

To investigate the multipurpose roles of vision, sound and movement for the ancient Maya, we generate viewsheds and soundsheds from Stela 12, situated in the southeastern part of Copán. We explore the cultural significance of Stela 12 and its physical surroundings by performing two simulations in GIS and VR. Simulation #1 models viewsheds and soundsheds from Stela 12 using an Urban Digital Elevation Model (Urban DEM) generated from LiDAR and pedestrian mapping data (von Schwerin et al. 2016) that includes bare earth (terrain) and archaeological structures without vegetation. Simulation #2 builds on this by incorporating vegetation modelled from paleoenvironmental, botanical, and ethnobotanical data (House 2007; McNeil et al. 2010; Richards-Rissetto et al. 2016). The objectives are threefold: (1) augment the computational approach of the Soundshed Analysis Toolbox to account for additional ecological variables (beyond temperature, humidity, etc.), including groundcover such as urban gardens, orchards, maize fields and forests, (2) analyse the impact of vegetation within urban agrarian landscapes on viewsheds and soundsheds, and (3) explore the cultural significance of Stela 12 using a multisensory approach, and more generally the role of synesthetic experience in ancient Maya society (Houston et al. 2006).
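As a minimal illustration of the viewshed side of these analyses, the sketch below checks line-of-sight along a single terrain profile; the elevations are toy values, and real GIS viewshed tools operate over full rasters and account for earth curvature and atmospheric refraction.

```python
# Minimal line-of-sight sketch of the viewshed idea: a target cell is
# visible if no intermediate terrain sample rises above the straight
# sight line from the observer's eye to the target. Toy values only.
import numpy as np

def visible(profile, observer_height=1.6):
    """profile: terrain elevations from observer (index 0) to target (last)."""
    eye = profile[0] + observer_height
    target = profile[-1]
    n = len(profile) - 1
    for i in range(1, n):
        # Elevation of the sight line at fractional distance i/n
        sight = eye + (target - eye) * i / n
        if profile[i] > sight:
            return False
    return True

ridge = np.array([600.0, 605.0, 602.0, 601.0])  # a ridge blocks the view
flat = np.array([600.0, 600.0, 600.0, 601.0])   # open terrain
print(visible(ridge), visible(flat))  # False True
```

A soundshed analysis follows the same raster logic but replaces the binary visibility test with cumulative acoustic attenuation along each path.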

3 Theoretical Foundations

Shifting paradigms and new technologies lead to innovative approaches in archaeology. In the 1950s, settlement pattern studies emerged as archaeologists expanded research beyond site-level analysis to regions (Sears 1956; Willey 1953, 1956; Williams 1956). This shift impacted archaeological practice by expanding research questions and methodologies. Archaeologists began to carry out pedestrian surveys, mapping large areas to investigate regional interaction and patterns of land use (Chisholm 1979), and spatial analysis burgeoned. Spatial concepts (often stemming from geography) such as central places, thresholds and ranges became integral, along with quantitative methods (Clarke 1972, 1977; Christaller 1933; Foley 1977; Vita-Finzi and Higgs 1970). In the 1980s, archaeologists adopted Geographic Information Systems (GIS) for data management, and as the 1990s began, archaeological uses for GIS expanded to include cost surfaces, resource allocation, predictive modelling, and other computational approaches (Kvamme 1988; Lock et al. 2014; Lock and Stancic 1995; Wheatley and Gillings 2013).

In the 1990s, in part as a backlash to processual archaeology, post-processual or interpretative archaeology emerged, asking questions about human agency, indigenous perspectives, gender and perception (Brady and Ashmore 1999; Conkey and Gero 1997, 2002; Wylie 1992). Phenomenology began to explore human perception in the past (Tilley 1994), serving as a major impetus for creating narratives of human experiences in ancient landscapes (Llobera 2012). For example, archaeologists began to use GIS to investigate the role of vision and movement in the past to understand human agency, social interactions, and related questions (Kantner 1997; Wheatley 1995; Wheatley and Gillings 2000). In the early twenty-first century, archaeologists have been developing GIS computational methods that employ viewsheds and cost surfaces to investigate past human experience in new ways (Llobera 2001, 2003, 2007; Llobera et al. 2011; Richards-Rissetto 2010, 2017a, b; Richards-Rissetto et al. 2012; Richards-Rissetto and Landau 2014; Verhagen et al. 2013; Wheatley and Gillings 2000; Whitley and Hicks 2003).

While GIS applications were gaining momentum, Paul Reilly (1991) introduced the concept of virtual archaeology. In 1997, Forte and Siliotti published the first volume exploring the potential of virtual archaeology, and while VR applications have been increasingly utilised in the past two decades (Barceló et al. 2000; Bruno et al. 2010; Champion 2015, 2017; Dakouri-Hild and Frischer 2009; Richards-Rissetto et al. 2012), it is only in the last decade, with the growth of cyberarchaeology, that we are witnessing multisensory interaction and analysis in 3D environments affording increased embodiment (Pujol and Champion 2012). Cyberarchaeology crosses archaeology, computer science and engineering (Levy et al. 2012) to simulate the “potential past” in a 3D “cyber-environment” (Forte and Siliotti 1997b), affording innovative approaches and leading to new questions and knowledge as well as changing archaeological practice (Forte 2003, 2010, 2016b; Forte and Pietroni 2009; Jones and Levy 2018; Smith and Levy 2014). In regard to landscape archaeology, VR has focused primarily on built environments for projects such as Rome Reborn (Dylla et al. 2010), Digital Revival of Cham's Architecture (Guidi et al. 2012), Depicting Çatalhöyükness (Pujol 2017), Ullastret 3D (http://patrimoni.gencat.cat/en/ullastret3D) and Funerals on the Rostra (Saldaña and Johanson 2013), or on paleoenvironmental reconstructions with little to no built architecture (Spada et al. 2017). Recently, archaeologists have been bringing together the built and natural components of the landscape in VR environments for more holistic analysis. They are also integrating a variety of digital tools to develop new methodological frameworks that combine computational and experiential approaches, leading to new multisensory research (Chalmers and Zányi 2010; Gillings and Goodrick 1996; Richards-Rissetto 2017a).

4 Digital Affordances and Material Culture: GIS and VR

Digital technologies provide opportunities to perform, create, and otherwise represent the potential sensory environments of the past in faster and newer ways. In other words, they have affordances: they offer possibilities for specific action(s) by users that may extend beyond their original intended use(s) (Conole and Dyke 2004; Gibson 1979; Norman 1988). In regard to digital technologies, affordances relate to human-computer interaction and computer-mediated communication; for example, the affordances of Web 1.0 centred on users as consumers of information, while Web 2.0 expanded affordances to include participation and collaboration, turning users into producers of information (i.e. user-generated content).

James J. Gibson coined the concept of affordance in 1977, building on his research on visual perception. Gibson (1979) argues for a direct theory of perception, which means that the environment an animal is situated in encodes meaning that is directly perceived by that animal. In other words, the environment is meaning-laden and sensory engagement extracts that meaning. Affordance is not a static concept, and there is an ongoing debate within psychology about its exact meaning (Heft 1989; Jones 2003; Michaels 2003); Gibson's own descriptions of affordances were somewhat vague.

As for archaeology, Gillings (2012) argues that spatial technologies such as GIS should focus on experiential affordances rather than attempting to model human perception or sensory modalities. Gillings notes that using affordance as a framework for GIS-based analyses does not mean anyone is attempting to actually map and analyse affordances; instead, he contends it is more important to generate a framing heuristic in order to study relations. As a case study, Gillings takes a GIS-based approach to the placement of early Neolithic monuments. To understand the importance of ocean views in the location of these monuments, he explored the question: “Where on the island does the relationship between an active viewer and topographical configuration afford these qualities?” (Gillings 2012, 608). His analysis was primarily based on GIS-derived viewsheds.

However, these viewsheds were not generated to understand sensory perception; instead, they were generated to investigate the relation between people and landscape (affordance). In other words, Gillings is not visualising an affordance but using GIS to begin the investigation of one. He concludes that GIS has a key part to play in experiential landscape research, but that the time for dialogue between GIS and experiential landscape theory is past. Instead of a middle ground, he believes archaeologists should develop new frameworks that take current trends into account to initiate new debates, while also embracing emerging spatial technologies.

In the late 1990s archaeologists began to promote the idea that VR afforded opportunities for embodiment in past places (Forte 2000; Forte and Siliotti 1997a, b). VR encompasses a wide range of technologies (past, present, and future), reconstructions (schematic, photorealistic, human actors, etc.) and affordance differs amongst them (Forte 2016b). Since the turn of the twenty-first century, debate has ensued about the technological affordances for embodiment (Champion 2011; Forte and Bonini 2010; Forte and Gallese 2015; Forte 2016a). Technically, embodiment refers to the representation or expression of something in tangible or visible form; however, archaeologists, and others, contend that more than simply being visible is necessary to create embodiment. Instead, for a person to feel embodied requires a multisensory experience allowing for real-time feedback and interaction (Forte 2016b). Additionally, questions as to the degree of actual embodiment in VR environments persist, in great part related to multisensorial capabilities and interactive experiences derived from real-time feedback (Champion 2011; Pujol and Champion 2012; Forte 2016b).

Cyberarchaeology offers a potential for enriching embodiment using VR (Forte 2016b), and GIS and VR allow for specific affordances, two of which we discuss here. One affordance is the ability to utilise technology for the development of various interpretations (Opitz et al. 2022); the other is the ability of technology to draw upon various sources of evidence and to invite researchers to incorporate evidence that they may not have otherwise considered (Forte 2016b, 117). Both of these affordances are present in this study as the authors incorporate various lines of evidence for Maya political and ritual performance: not only does the technology allow for the exploration of the importance of performance within a particular case study, it also requires an explicit consideration of the ephemeral aspects of performance. These aspects, such as sights, sounds and movement, are inherently difficult for archaeologists to study, but evidence for them is present within the archaeological record (Katz 2017, 2018; Sanchez 2007). The incorporation of these phenomena within GIS and VR allows researchers to more fully understand and interpret the experience of previous performances, constructing a phenomenological account based more firmly upon evidence (Gillings 2011; Witt and Primeau 2019).

5 Case Study: Ancient Maya

5.1 Perspectives of Sight, Sound, and Movement

The Classic Maya regarded sound, sight and other senses as tangible yet imperceptible phenomena that were important to ancient Maya interaction and experience. Archaeological and ethnographic evidence support this interpretation (Garza et al. 2008; Katz 2018; Sanchez 2007; Vogt 1976). Vision was proactive to the ancient Maya; it affected and changed the world around it. Observers validated and authorised what they saw, and the gaze of Classic Maya rulers was crucial for moral validation and the projection of power (Houston et al. 2006). This concept is evident in sculptures, which were viewed as extensions of the person represented and capable of affecting the space around them. For this reason, when monumental architecture was destroyed, the eyes and other areas of the face of sculptures were often intentionally mutilated (Clancy 1999; Freidel 1986; Just 2005). This understanding of Maya perceptions suggests that elevation and vision were important aspects of city planning, and that there was an association between visibility and authority amongst the ancient Maya (Hammond and Tourtellot 1999; Doyle et al. 2012; Landau 2015; Richards-Rissetto 2010).

In Mesoamerica, sight and sound were conceptually linked (King and Sánchez Santiago 2011), and sound was as important as vision to the ancient Maya. A diverse array of instruments is depicted in ancient Maya artwork and uncovered in archaeological excavations (Katz 2017, 2018; Kerr et al. 1998; Sanchez 2007), ranging from percussion instruments such as drums and rattles to wind instruments such as flutes. The sound of instruments was used to enhance visual spectacles, mimic wildlife, direct dancers, and awaken supernatural entities (Houston et al. 2006; Katz 2018; Looper 2009; Ramos and Medina 2014; Zalaquett Rock 2015; Zalaquett et al. 2014).

Like sight and sound, movement was also important to the ancient Maya, especially in the context of ritual processions (Keller 2009; Palka 2014; Reese-Taylor 2002; Sanchez 2007). Ritual processions are focused on movement from one locale to another over the course of a ceremony. Movement in a ritual procession is interrupted by stops to engage in ritual acts at certain stations. These processions were a specialised rite performed for a specific reason such as the symbolic establishment of land ownership or promotion of social cohesion. Ancient Maya cities provided a locale for public performances such as ritual processions that mimic the cosmos and the actions of the supernatural (Ashmore and Sabloff 2002; Coggins 1980; Sanchez 2007; Takeshi 2006; Tate 2019).

5.2 Ancient Maya City of Copán

Today Copán is a UNESCO World Heritage Site in Honduras, and the city’s landscape is dramatically different from the past. During the florescence of the Maya Kingdom of Copán (427–820 CE), a series of sixteen dynastic kings ruled over the city and its surroundings (Fash 2001). For over four hundred years, the city’s population and infrastructure grew. At its peak, about 22,000–25,000 people lived in the city (Webster 2005), manufacturing and trading goods, growing crops, maintaining households, attending ceremonial events, and carrying out political and administrative duties. By the late eighth century, the main civic-ceremonial precinct (Principal/Main Group) had undergone many construction phases, resulting in a sequence of large temples, palaces, and freestanding monuments (Fash 2001). Not only did the Main Group and urban core experience growth and change, but surrounding suburbs (sub-communities) developed and grew (Fig. 1).

Fig. 1

Map of archaeological structures and sub-communities, Copán, Honduras

Scholars have argued that the ancient Maya viewed their cities as kahkab, a concept combining kah (community) and kab (earth). This concept implies that both built and natural features were integral to Maya landscapes, including cities (Marcus 2000). Supporting this idea is the hypothesis that the Maya practiced urban agrarianism (Isendahl 2012; Isendahl and Smith 2013). Households typically had garden orchards with cultivated plants, and suburbs outside the urban core also likely grew patches of maize, along with beans and squash (Fedick 1996; Ford and Nigh 2009; Graham 2008; Killion 1992). While Copán had two sacbe (causeways) leading to the Main Group, they extended only about 0.75 km to the east and west. Thus, most movement outside the Main Group was along informal (travel-worn) paths, and while plaza floors were cleared of vegetation, and some plastered, spaces beyond the plazas would have had varying degrees of vegetation. Vegetation, along with topography, is therefore essential to consider in analyses of sight, sound, and movement through ancient Maya landscapes.

In our simulations we compare the potential impact of vegetation on acoustics during a ritual performance held at a stela outside Copán’s urban core. The case study is Stela 12, erected by Ruler 12 to celebrate the period ending 9.11.0.0.0 in Maya Long Count notation, corresponding to Oct. 9, 652 CE (Morley 1920). Although Stela 12 stands outside the city’s urban core in an area of low-density settlement, archaeologists believe it played an important role in the city’s past (Fig. 2). Several hypotheses exist about Stela 12’s significance: (1) it was a sun marker that, along with Stela 10, identified the onset of the planting season (Morley 1920), (2) it formed part of a line-of-sight communication system for relaying smoke signals (Fash 2001), (3) it served as a territorial marker for the Copán polity (Fash 1983, 2001), (4) it was a locus for ritual and community events, and (5) it served as a destination for ritual processions (Carter 2010; Richards-Rissetto 2010).

Fig. 2

Location of Stela 12, Copán (right)
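A Long Count date such as Stela 12's 9.11.0.0.0 converts to elapsed days by simple place-value arithmetic. The sketch below uses the widely cited GMT correlation constant 584283 (584285 is also used, and Julian versus Gregorian conventions differ), which is why published calendar dates for the same Long Count can vary by a few days.

```python
# Convert a Maya Long Count date to elapsed days and a Julian Day Number.
# The correlation constant is an assumption (GMT, 584283); alternatives exist.

def long_count_to_days(baktun, katun, tun, uinal, kin):
    """Days elapsed since the Long Count zero date (13.0.0.0.0)."""
    return (baktun * 144000   # 1 baktun = 20 katun = 144,000 days
            + katun * 7200    # 1 katun  = 20 tun   =   7,200 days
            + tun * 360       # 1 tun    = 18 uinal =     360 days
            + uinal * 20      # 1 uinal  = 20 kin
            + kin)

GMT_CORRELATION = 584283  # Julian Day Number of the Long Count zero date

days = long_count_to_days(9, 11, 0, 0, 0)  # Stela 12's period ending
jdn = GMT_CORRELATION + days
print(days)  # 1375200
print(jdn)   # 1959483
```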

Our intent (at this time) is not to interrogate each of these hypotheses, but rather to begin gathering new (multisensory) data to deepen understanding of ancient Maya processions. Iconographic, archaeological, ethnohistoric, and ethnographic evidence indicates that the ancient Maya practiced three main types of processions: Circumambulation, Core-Periphery, and Base to Summit (Keller 2009; Palka 2014; Reese-Taylor 2002; Vogt 1969). Each type employed elements of performance that stimulated the senses to achieve specific sociopolitical and ideological outcomes. Our research seeks to contribute to scholarship on ancient Maya processions and, more broadly, to offer empirical and experiential approaches for archaeologists to explore the role of sound and vision in performance within past urban agrarian (and other) landscapes.

6 Materials (Data Sources)

6.1 Data Acquisition and Integration

Using GIS and VR, we integrate various datasets including survey, excavation, ethnographic data, architectural drawings and LiDAR (Fash and Long 1983; Hohmann 1995; Hohmann and Vogrin 1982; Richards-Rissetto 2010, 2013; von Schwerin et al. 2016; Wisdom 1940) with paleoenvironmental, botanical and ethnobotanical data (McNeil 2009, 2010, 2012; McNeil et al. 2010; Richards-Rissetto et al. 2016) to simulate the ancient landscape. Ambient sounds captured in the field along with reproduced sound sources such as a conch shell, whistles and flutes (Goodwin 2018; Katz 2017) provide acoustical data.

We create simulations of Copán in the mid to late eighth century during the reign of Yax Pasah, Copán’s 16th and final dynastic ruler. The data collection and reconstruction process began with the built environment and terrain and recently turned to adding vegetation to refine simulations of ancient Maya urban agrarian landscapes (Richards-Rissetto et al. 2016).

Architecture and Terrain: Previous publications discuss the datasets and processes employed to create the GIS and 3D data we use to simulate ancient Copán (Goodwin 2018; Goodwin and Richards-Rissetto 2020; Richards-Rissetto 2010, 2013; Richards-Rissetto and Landau 2014; Richards-Rissetto et al. 2016; von Schwerin et al. 2016). Our archaeological settlement data came from excavations, photogrammetric mapping, pedestrian surveys and airborne LiDAR (Fash and Long 1983; Hohmann and Vogrin 1982; von Schwerin et al. 2016). Initially, paper maps (scale 1:2000) were georeferenced and manually digitised to create vector GIS data (shapefiles) of archaeological structures and contour lines (Richards-Rissetto 2010). In 2013, the MayaArch3D Project commissioned airborne LiDAR for 25 km2 surrounding Copán’s main civic-ceremonial complex (Fig. 3) (von Schwerin et al. 2016). The 3D points were post-processed to identify archaeological features, to compare against existing analogue-derived GIS data, and to generate a 0.5 m Digital Elevation Model (bare earth with archaeological mounds) and a 0.5 m Digital Terrain Model (with archaeological mounds removed). The DTM was integrated with rasterised structures, assigned heights using a trigonometric formula (Richards-Rissetto 2013), to create an Urban DEM simulating the cityscape of Copán in the mid to late eighth century with terrain but without vegetation; we use this Urban DEM for computational analysis in GIS.

Fig. 3

Digital elevation model (top); urban digital elevation model (bottom)
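The Urban DEM assembly described above can be sketched in a few lines: a bare-earth DTM raster is combined with rasterised structure footprints whose cell values hold estimated structure heights. The array names, grid size and heights below are illustrative toy values, not the project's actual rasters or its trigonometric height formula.

```python
# Sketch (with numpy) of Urban DEM assembly: add structure heights
# (0 where no structure) onto a bare-earth DTM of the same grid.
import numpy as np

def build_urban_dem(dtm, structure_heights):
    """Combine a bare-earth DTM with a raster of structure heights."""
    assert dtm.shape == structure_heights.shape
    return dtm + structure_heights

# Toy 3x3 example: flat 600 m terrain with one 4 m structure in the centre.
dtm = np.full((3, 3), 600.0)
heights = np.zeros((3, 3))
heights[1, 1] = 4.0
urban_dem = build_urban_dem(dtm, heights)
print(urban_dem[1, 1])  # 604.0
```

In practice the structure raster would be produced by rasterising the digitised footprints and assigning each cell the height estimated from field measurements.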

Vegetation: Vegetation is classified according to five physiographic zones (low terrace, intermountain pocket, high terrace, foothills and floodplain) designated for Copán in the 1980s (Baudez et al. 1983), as well as two additional zones, water and urban. Vegetation data originate from multiple sources. Palynological (pollen and spore) data from pond sediment cores provide plant species classifications with associated AMS dates (McNeil et al. 2010). We aggregated these raw data into specific time periods (e.g. Preclassic, Early Classic, and Late Classic) to isolate plant types for landscape simulation. In future research we will use percentages of pollen and spores classified as arboreal, herb or aquatic to begin to determine vegetation composition for the mid to late eighth century (McNeil et al. 2010). Additionally, we will employ ethnographic, ethnobotanical, and archaeological studies on land use to provide context for interpreting and integrating plant, terrain, and settlement data for the simulation. A key resource is The Maya Ethnobotanical Report, a quantitative ecological study for Copán Archaeological Park with data on species density, frequency, and dominance, and information on indigenous plant uses (House 2007).
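The aggregation of dated pollen samples into broad cultural periods can be sketched as follows; the period boundaries, taxa, and counts are illustrative placeholders, not the published Copán data.

```python
# Minimal sketch of binning AMS-dated pollen counts into cultural periods.
# Boundaries and sample records are toy values for illustration only.
from collections import defaultdict

PERIODS = [                 # (name, start CE, end CE) -- illustrative
    ("Early Classic", 250, 600),
    ("Late Classic", 600, 900),
]

samples = [                 # (year CE, taxon, pollen count) -- toy data
    (450, "Zea mays", 12),
    (700, "Zea mays", 30),
    (750, "Pinus", 8),
]

def aggregate(samples, periods):
    """Sum pollen counts per (period, taxon)."""
    totals = defaultdict(int)
    for year, taxon, count in samples:
        for name, start, end in periods:
            if start <= year < end:
                totals[(name, taxon)] += count
    return dict(totals)

print(aggregate(samples, PERIODS))
```

Period totals like these can then be converted to percentages of arboreal, herb and aquatic taxa to characterise vegetation composition for a given period.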

The GIS data as well as airborne LiDAR and photogrammetric data are the building blocks for the current VR environment. All GIS data and 3D models are georeferenced (i.e. they have real-world spatial reference). We generated the terrain from airborne LiDAR (at a decimated/optimised resolution) (von Schwerin et al. 2016) and the archaeological structures, monuments, and architectural sculpture from four sources: shapefiles, photogrammetry, SketchUp and 3D Studio Max (Goodwin and Richards-Rissetto 2020; Lyons 2016; Remondino et al. 2009; Richards-Rissetto 2010, 2013; Richards-Rissetto and Plessing 2015; von Schwerin 2011). Figure 4 illustrates the photogrammetric data integrated into their spatial surroundings, which comprise extruded GIS models and 3D models generated using SketchUp and 3D Studio Max (www.youtube.com/watch?v=XEXZJHNpn4c; www.youtube.com/watch?v=B9U3y0CbVh0). The vegetation data comprise initial simulations using a range of temperate and tropical plants rendered in the gaming engine Unity using GIS footprints (Day and Richards-Rissetto 2016). Future 3D and VR modelling will provide more accurate plant types, communities and spatial distributions for VR simulations (see the MayaCityBuilder Project for a sample of vegetation data: http://mayacitybuilder.org/?page_id=1219).

Fig. 4

VR environment integrating multiple data types-GIS, 3D StudioMax, Photogrammetry, and LiDAR

Using this VR environment, we integrated ambient sounds captured at Copán with 3D and VR models for a more immersive experience. Ambient sounds were captured through on-site recordings using a handheld Zoom H4n stereo recorder with adjustable microphones adaptable to numerous fieldwork situations. Once recorded, the sounds were used to create ambience in the VR environment with the DearVR plugin, which enables a greater sense of position and reality through spatial sound. For instance, with the DearVR plugin the audibility of bird calls in the VR environment is relative to the user’s position. DearVR has several features that contribute to spatial sound. The first is occlusion: the attenuation of sound waves that are fully blocked by a surface or feature. The second is obstruction, which, like occlusion, results in a partial blocking of sound waves; however, some sound may be reflected around the obstruction, altering the sound source’s loudness and reverberation. Obstruction is generally applicable only to objects near the listener. Third is distance correction, which can increase or decrease the perceived distance of a sound source relative to the user. Together, the GIS and VR data allow us to create multiple simulations using different technologies to explore multisensory experience in the past.
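Conceptually, these spatial-sound features combine distance attenuation with occlusion or obstruction losses to set a source's level at the listener. The sketch below illustrates that idea only; it is not DearVR's actual algorithm, and the loss values are illustrative assumptions.

```python
# Conceptual sketch (NOT DearVR's implementation) of how a spatial-audio
# engine might set a source's level at the listener: inverse-square
# distance attenuation plus a flat loss for occlusion or obstruction.
import math

def source_level_db(source_db, distance_m, ref_m=1.0,
                    occluded=False, obstructed=False):
    """Return the level heard at the listener, in dB."""
    # Spherical spreading: -20 dB per tenfold increase in distance
    level = source_db - 20 * math.log10(max(distance_m, ref_m) / ref_m)
    if occluded:        # fully blocked: strong broadband loss (illustrative)
        level -= 20.0
    elif obstructed:    # partially blocked: milder loss (illustrative)
        level -= 6.0
    return level

# A 70 dB bird call heard from 10 m, unobstructed:
print(source_level_db(70, 10))  # 50.0
```

In a real engine these gains would be updated every frame as the listener moves, and reverberation would be handled separately.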

7 Methods

7.1 Soundshed Analysis Toolbox

In 2016, the Soundshed Analysis Tool, beta version 0.9.2, was developed as part of an Archaeoacoustics Toolbox written in the Python programming language for ArcGIS 10.3 (Primeau 2022; Primeau and Witt 2018; Witt and Primeau 2019). The Soundshed Analysis Tool models how sound spreads throughout a landscape and is based upon “SPreAD,” developed to calculate the effects of noise in US National Forests (Harrison et al. 1980), and “SPreAD-GIS,” an open-source script that first converted SPreAD into a GIS tool (Reed et al. 2009, 2010). The tool operates by creating a raster layer of the resultant A-weighted decibel (dBA) sound pressure level (SPL) observed by a listener when a sound is made outdoors (i.e. in a “free field”). Several factors that contribute to losses, or “attenuation,” of SPL are calculated using acoustical formulae, including atmospheric absorption loss as described in ANSI S1.26 (1995), topographic loss as described in ISO 9613-2 (1996), and barrier attenuation as described by Maekawa’s optical diffraction theory (Maekawa 1968; Primeau 2022; Primeau and Witt 2018; Witt and Primeau 2019).
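A much-simplified sketch of the per-cell calculation is shown below: spreading loss plus atmospheric absorption, with an optional Maekawa-style barrier term. The absorption coefficient here is an illustrative placeholder; the actual tool uses the full ANSI S1.26 and ISO 9613-2 formulae over rasters.

```python
# Simplified free-field SPL sketch in the spirit of the Soundshed Analysis
# Tool: spherical spreading + atmospheric absorption + barrier attenuation.
# Coefficients are illustrative, not the tool's calibrated values.
import math

def maekawa_barrier_db(path_difference_m, wavelength_m):
    """Barrier attenuation from Maekawa's diffraction approximation."""
    n = 2.0 * path_difference_m / wavelength_m   # Fresnel number
    return 10.0 * math.log10(3.0 + 20.0 * n) if n > 0 else 0.0

def received_spl(source_db, distance_m,
                 atm_coeff_db_per_m=0.005, barrier_db=0.0, ref_m=1.0):
    """SPL at the listener after spreading, absorption and barrier losses."""
    spreading = 20.0 * math.log10(max(distance_m, ref_m) / ref_m)
    absorption = atm_coeff_db_per_m * distance_m
    return source_db - spreading - absorption - barrier_db

# 100 dB source heard at 100 m in the open (no barrier):
print(round(received_spl(100, 100), 1))  # 59.5
```

Run over every cell of an elevation raster, with the barrier term derived from terrain (or Urban DEM structures) along each source-to-cell path, this yields a soundshed surface.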

The Soundshed Analysis tool was initially applied to study archaeoacoustics in Chaco Canyon, New Mexico, using a variety of acoustical and environmental inputs (see Table 1) and employing 1.5 m LiDAR data to provide base modelling elevations (Primeau and Witt 2018; Witt and Primeau 2019). These studies captured the way that sound spreads through the landscape, recreating potential experiences; however, as noted, they did not accurately reflect the potential impacts of ancient, no longer extant structures on audibility. When the Soundshed Analysis Tool was first used to explore two case studies at Copán’s principal group, this limitation was overcome through the addition of Urban Digital Elevation Models (Urban DEMs) which, as noted previously, recreate the original structure dimensions, most notably their heights (Goodwin et al. 2018a, b). Soundsheds created using the Urban DEM could therefore be compared chronologically to reveal how the construction or demolition of a structure could alter the sonic environment.

Table 1 Soundshed analysis tool v0.9.2 input variables, from Witt and Primeau (2019)

However, anthropogenic modification is not limited to the built environment. Whereas Chaco Canyon presents a fairly homogeneous semi-arid desert landscape of scrub and grasses, Copán presents a landscape of ecotones with both vegetative and urban modifications. These differences presented an opportunity to enhance and extend the options within the Soundshed Analysis Toolbox. The Soundshed Analysis—Variable Cover tool was therefore developed to incorporate vegetation attenuation; it includes the optional assignment of multiple ambient SPLs based on cover type: for example, one ambient SPL in a dense semi-tropical forest, and another in garden orchards. While the use of reconstructed vegetation models is a source of debate (e.g. Cummings and Whittle 2003; Lyon et al. 1977; Peng et al. 2014), land cover and vegetation may have shaped the experience of past soundscapes. Recent reconstructions (e.g. Goodwin and Richards-Rissetto 2020; Guth 2009; Llobera 2007) support the inclusion of vegetation in analyses to produce a full range of potential reconstructions, particularly in study areas where past ecosystems and flora have persisted essentially unchanged.

The Soundshed Analysis—Variable Cover tool currently models attenuation based on four general categories of vegetation. These cover type categories correspond to formulae used by the US Forest Service (Harrison et al. 1980; Reed et al. 2009, 2010) and were further informed by the work of Aylor (1972). Category A consists of open or cleared areas (approximately 80% cleared or greater). These areas may include open water, barren soils, open urban spaces, or low grasses, crops and shrubs (<0.5 m tall). Category A areas may include trees or other tall vegetation, but these are sparse and do not cause major breaks in the line of sight between the sound source and observer; structures or other features remain visible on the landscape beyond. Category B areas correspond to denser (approximately 50–80% cleared) vegetation between 1.3 and 2.0 m tall (e.g. thick maize). This vegetation is not much taller than a person; however, the average person would have difficulty seeing through it, as visibility is restricted by dense stems. Category C and D areas both consist of dense forested areas (<50% cleared) that a person cannot see through. Category C areas consist of dense coniferous vegetation or “old growth” forests where attenuation is primarily due to tree trunks causing breaks in the line of sight. Category D areas consist of full growth trees with a high density of branches, leaves, and undergrowth; these areas provide the greatest amount of attenuation.
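To illustrate how category-based excess attenuation might enter the calculation, the sketch below assigns per-100 m attenuation rates to the four categories. The numeric values here are placeholders of our own: the published tool derives its rates from the US Forest Service tables (Harrison et al. 1980) and Aylor's (1972) measurements.

```python
# Hypothetical excess attenuation rates per vegetation cover category,
# in dB per 100 m of foliage traversed (placeholder values, ordered so
# that Category D attenuates most, matching the categories in the text).
VEGETATION_DB_PER_100M = {
    "A": 0.0,   # open or cleared areas: no foliage attenuation
    "B": 4.0,   # dense crops/shrubs 1.3-2.0 m tall (e.g. thick maize)
    "C": 6.0,   # coniferous or old-growth forest (trunk-dominated)
    "D": 9.0,   # full-canopy forest with dense branches and undergrowth
}

def vegetation_loss_db(path_m_by_category):
    """Sum the excess attenuation over the foliage segments a sound
    path crosses.

    path_m_by_category: mapping of cover category -> metres traversed.
    """
    return sum(VEGETATION_DB_PER_100M[cat] * (metres / 100.0)
               for cat, metres in path_m_by_category.items())

# A path crossing 50 m of Category B maize and 200 m of open ground:
loss = vegetation_loss_db({"B": 50.0, "A": 200.0})  # 2.0 dB
```

A term of this form would simply be added to the spreading, atmospheric and barrier losses already modelled for each source-to-cell path.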

7.2 Project Specific Environmental Inputs

For this analysis the seven physiographic zones within the Copán study area were divided into vegetation attenuation categories as follows: Category A included water, urban areas, low terrace, intermountain pocket and high terrace; and Category B included foothills and floodplains. No category C or D areas were modelled based on the data resolution.
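This zone-to-category assignment can be represented as a simple lookup, useful when classifying many raster cells; the zone labels below paraphrase the text and are not identifiers from the toolbox:

```python
# Vegetation attenuation category for each of the seven physiographic
# zones in the Copán study area, as assigned in this analysis.
ZONE_TO_CATEGORY = {
    "water": "A",
    "urban": "A",
    "low terrace": "A",
    "intermountain pocket": "A",
    "high terrace": "A",
    "foothills": "B",
    "floodplain": "B",
}

def category_for_zone(zone):
    """Look up the vegetation attenuation category for a physiographic zone."""
    return ZONE_TO_CATEGORY[zone.strip().lower()]
```

Because the available data resolution supported no Category C or D assignments, only categories A and B appear in the mapping.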

To analyse the impact of vegetation on sight and sound, both the Soundshed Analysis Tool and the Variable Cover tool were provided the same environmental input values, including the use of a single ambient Sound Pressure Level of 31 dB. The models assume the conch shell trumpet is being played on a warm, humid day (26 °C, 90% humidity) during the wet season (May–November) in the early morning. Therefore, the only difference between the models is that the Variable Cover tool includes formulae to calculate vegetation attenuation based on the categories assigned to the physiographic zones (Fig. 5).
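The audibility criterion implied by these inputs can be stated simply: a raster cell falls within the soundshed when the modelled SPL at that cell exceeds the ambient Sound Pressure Level of 31 dB used for this study. The following is a minimal illustration of that thresholding step, not the tool's implementation:

```python
def audible(received_spl_db, ambient_spl_db=31.0):
    # A cell counts as within the soundshed when the modelled SPL
    # rises above the ambient level (31 dB in this study).
    return received_spl_db > ambient_spl_db

# Classify a small (2 x 3) raster of modelled SPL values:
spl_raster = [[45.2, 33.0, 30.1],
              [29.8, 31.0, 36.4]]
soundshed = [[audible(v) for v in row] for v_row, row in
             zip(range(len(spl_raster)), spl_raster)]
```

With the Variable Cover tool the same threshold is applied, but the modelled SPL arriving at each cell is lower wherever vegetation attenuation has been subtracted along the path.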

Fig. 5
Two windows of Soundshed analysis and variable cover. The input fields are sound source location and height, frequency, sound level of source, measurement distance, temperature, relative humidity, ambient sound pressure level, and elevation dataset.

Inputs for Stela 12: (top) Soundshed analysis tool; (bottom) Soundshed variable cover tool

8 Results and Analysis

8.1 GIS Results and Interpretation

Stela 12 and its modelled soundsheds are located entirely within the foothills physiographic zone. The presence of vegetation in the area surrounding Stela 12 results in a degree of sound attenuation that would have ranged between the two mapped possibilities created by the soundshed tools; therefore, the experience of the listener should be interpreted as a reality somewhere between these extremes. The soundshed created with the Soundshed Analysis tool, which encompasses a larger and more multi-directional area (Fig. 6), illustrates how far the conch could be heard without intervening vegetation to muffle its sound (i.e. no vegetation attenuation). The soundshed created with the Variable Cover tool (Fig. 7) presents a much more conservative auditory experience, showing the impact of thicker vegetation in this area (i.e. with vegetation attenuation). Taking these results together, we can determine that those in the immediate vicinity of the conch, as well as those located to the north and northeast of the source, would have heard the instrument being played regardless of whether the vegetation was low or as tall and thick as dense shrubbery. The audibility of the conch to those in the south and southwest would likely have been reduced by the intervening vegetation.
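The interpretive step of bracketing the listener's experience between the two models can be made explicit by cross-tabulating the two boolean audibility rasters cell by cell. This is an illustrative sketch, not part of the published toolbox:

```python
def compare_soundsheds(no_veg, with_veg):
    """Cross-tabulate two boolean audibility rasters of equal shape.

    Cells audible in both models would have heard the conch under any
    plausible vegetation; cells audible only in the no-vegetation model
    mark where foliage could have muffled the sound entirely.
    """
    both = only_no_veg = neither = 0
    for row_a, row_b in zip(no_veg, with_veg):
        for a, b in zip(row_a, row_b):
            if a and b:
                both += 1
            elif a:
                only_no_veg += 1
            elif not a and not b:
                neither += 1
    return {"audible_both": both,
            "audible_only_without_vegetation": only_no_veg,
            "inaudible_both": neither}

# Example with two small 2 x 2 audibility rasters:
summary = compare_soundsheds([[True, True], [False, True]],
                             [[True, False], [False, False]])
```

In practice the same tabulation can be produced directly in GIS with raster algebra, yielding zones of certain, possible, and unlikely audibility around Stela 12.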

Fig. 6
A Soundshed analysis map of a shell from the Stela 12 site. The highest sound pressure level concentrates at the center region of the shell.

Soundshed of conch shell trumpet at Stela 12, Copán using soundshed analysis tool [vegetation attenuation not included]

Fig. 7
A sound analysis map of a conch shell from Stela 12. The highest sound pressure level situates in the middle of the conch shell.

Soundshed of conch shell trumpet at Stela 12, Copán using the soundshed analysis—variable cover tool [vegetation attenuation included]

Combining the soundshed modelling with viewshed analysis (Fig. 8), we developed a VR simulation. This simulation (Figs. 9 and 10) illustrates the experience of walking through the project area while a conch shell trumpet is being played. A video demonstrating the VR simulation and the spatial sound capabilities of an immersive headset and touch controllers is available online (https://youtu.be/qHUnxNn4C3g).

Fig. 8
A viewshed map of the Stela 12 site in Copan Honduras. The legend consists of Stela or structure, visible area, structures, and elevation of low and high.

Viewshed from Stela 12, Copán, Honduras

Fig. 9
A virtual reality simulation image from the Stela 12. The image exhibits a column, trees in the background, and a wheatfield.

Screenshot from the VR simulation as the user approaches Stela 12 with the city’s urban core in the background

Fig. 10
A virtual reality simulation image of the Stela 12 exhibits a wide grassland.

Screenshot from the VR simulation after the user has moved past Stela 12 and towards the civic core and residential groups visible in the background

9 Conclusions and Future Direction

This study represents a preliminary exploration of the impacts of vegetation on the spread of sound. The Copan Archaeological Project (PAC 1) delineated five physiographic zones (Baudez et al. 1983) that provide an established baseline for vegetation reconstruction at Copan; however, further refinement of the landscape reconstruction will be necessary to achieve the most accurate modelling results. Category C and D vegetation, as described above, will be added to a high resolution (up to 1 m) land cover reconstruction raster, capturing the vegetative variability within the physiographic zones. Viewsheds will also be updated to reflect the irregular patches of vegetation.

As illustrated by the above discussion, vegetation played an important part in both everyday life and special events in the past. Often considered only on a surface level, if at all, vegetation impacted visibility, sound and movement across the landscape. When considered in the context of processions and other political or ritual events, vegetation limited the reach, and thus the audience, of those events. It remains to be seen if those events were modified to take this effect into account, but as the events themselves were meant to be experienced by an audience—indeed, gain their meaning from the fact that an audience is present—it seems likely that this would have been a key consideration (Inomata and Coben 2006).

Technologies such as GIS and VR allow researchers to explore visibility, audibility and movement, and how they may have affected the performance of events. Rather than recreating the paths of events themselves, the technologies provide affordances and open up the ability to question the past, allowing archaeologists and other researchers to engage sensorially with the past, albeit grounded on the evidence of material culture. This provides for a human-centred form of analysis, rather than one dictated by the specificities of local geography and resource availability.

However, going beyond this specific affordance, digital technologies lead researchers to other questions: if GIS and VR are being used to develop accurate models of the past, in order to allow for a sensorial engagement with that past, then what information needs to be included in these models? Beyond the presence or absence of structures and surrounding terrain, or visibility, or even audibility of events, what other aspects of the past need to be considered? Is there evidence within the archaeological record to provide clues to answering this question? Digital technologies guide us to both of these questions and the research needed to answer them.