1 Introduction

The concept of paradata has the potential to provide the basis for intellectual transparency (Sköld et al., 2022) by capturing information about the processes of understanding and interpretation leading to scholarly outputs. At the same time, it is a concept that has haunted (Dawson and Reilly, chapter ‘Towards Embodied Paradata. A Diffractive Art/Archaeology Approach’ in this volume) the field of 3D heritage visualisation since the inception of the London Charter (LC) in 2006. The LC, a set of principles to ensure the technical and intellectual transparency of 3D visualisation, came as a response to over a decade of intense debate about 3D scholarship in archaeology and cultural heritage. Advances in computational hardware and rendering algorithms, and the resulting ability to create photorealistic computer-generated imagery (CGI) indistinguishable from real life, led several scholars to describe 3D as ‘the downright misleading’ (Miller & Richards, 1995) and ‘a double-edged sword’ (Eiteljorg, 2000). Solutions therefore had to be found to safeguard scholars and their scholarship by ensuring that visualisations not only provided engaging images for public consumption but could also serve, and be accepted by, a scholarly audience.

Another challenge that 3D scholarship had to face was the absence of standards and stable systems for documenting and presenting processes and outputs. Until recently, major changes in technologies and proprietary systems resulted in the loss of substantial amounts of 3D scholarship (Papadopoulos & Schreibman, 2019), at least in its original form. Considering these issues, the discussion about intellectual transparency and the documentation of processes and interpretations was a timely and worthwhile pursuit. The LC formalised this discussion under a series of principles that aimed at providing a foundation on which different communities of practice could develop a series of implementable rules. Ultimately, the LC aimed at legitimising a particular type of work which, although it had many practitioners, did not have many researchers advocating for its value as an autonomous piece of scholarship.

The current hype for Virtual Reality and the Metaverse has brought 3D graphics back to the fore after a long hiatus. A renewed interest is also evident in heritage research and dissemination, with several projects problematising issues of sustainability, quality, intellectual transparency, and infrastructures.Footnote 1 In addition, there have been significant attempts to better integrate 3D into traditional scholarship; for example, journals such as Studies in Digital Heritage and Digital Applications in Archaeology and Cultural Heritage have been accepting and peer-reviewing 3D models as part of journal articles, while publishers, such as Stanford University Press and the University of Michigan Press, have been publishing digital monographs that integrate 3D into their narratives (see, e.g., Opitz et al., 2018; Sullivan, 2020). Such developments have created a more open environment for 3D scholarship, which was not the case when the LC was conceived. The European Commission (2021) is in the process of developing, through a series of dedicated Horizon Europe funding calls, a Common European Data Space for cultural heritage, which will also involve the 3D digitisation of all at-risk and most-visited cultural heritage. This, together with the increasing digitisation of heritage, mostly afforded by changes in web technology and the democratisation of digitisation tools and methods, will inevitably create more challenges for 3D scholarship.

Is such mass digitisation a threat to the steps already taken towards a more sustainable 3D scholarship, or is it an opportunity to re-evaluate its role and meaning in our field? It may also provide a clean slate for critically discussing how 3D models should be presented on the Web, what kind of information should be captured about their making, and how this should be made available to the intended audience. None of these are insignificant undertakings, and such decisions have an impact on understanding the outputs of this scholarship, especially when there is no access to the people, hardware, and software that developed and defined them. Even if such access were possible, the conscious and unconscious decisions made by human operators, but also technological authority, i.e. the decisions that software and hardware make for us, would need to be considered. However, such decisions often remain blackboxed. Devising systems to capture as much information as possible in the hope that we can ultimately (re)construct every step of the process is not only impossible (Schreibman & Papadopoulos, 2019) but also futile. What purpose would such complete documentation serve? Although transparency has essentially become the dominant narrative around 3D scholarship, one can only wonder whether 3D scholars need to be more transparent than others. Are there fundamental and unique differences in their practice? Are processes and decisions objective and linear, or are they defined by unknown variables? Ultimately, is the concept of paradata still relevant, and if so, how can we ensure that it aligns with current and future developments in (3D) digital scholarship?

Since 3D visualisations can be generated in different ways and by different means (Huvila, 2018), and each of those bears unique technological and epistemological challenges, this chapter will only problematise the modelling process, i.e. the development of a 3D model using computer graphics software. It will argue that before reaching the stage of formalising the documentation of processes and decisions, we ought to understand that 3D (re)construction is ‘unbearably complex’ (Huvila, 2013), since it involves variable and dialectic processes based on a series of often neglected perceptual, physiological, and technical factors. Since technological authority is only one of the facets that need to be examined, we cannot and should not treat 3D (re)constructions as merely technical or technological processes. The premise of this chapter is that in order to critically evaluate whether paradata still have a role to play in such processes, whether they need to be redefined, and/or whether they ultimately become counterintuitive, elusive, or illusionary (Reilly et al., 2021), we should first problematise what comes before a 3D (re)construction, i.e. the complex processes of decision-making and knowledge production in archaeological practice.

To approach this question, the chapter is structured as follows. First, it provides an origin story of paradata in 3D heritage visualisation by exploring various charters, guidelines, and their implementations. Then, it turns to archaeological practice and examines the concept of three-dimensionality, particularly focusing on how it is perceived and documented. Lastly, it returns to the concept of paradata and discusses its relevance. The chapter concludes by arguing that paradata need to be reconsidered within an ecosystem that recognises and rewards non-typical scholarship. It proposes a ‘leap of faith’ to provide intellectual rigour and facilitate the transition to more contemporary conceptions of scholarship while escaping from any paradigms based on the notion of reproducibility.

2 Paradata: Tracing the Origins and Paving the Futures

Mick Couper coined the term paradata in 1998 to differentiate the data automatically generated by computer-assisted interviewing from those produced by humans in the survey process. As he outlines in a review article on the birth and development of the term (Couper, 2017), technological developments in the field, e.g. the growth of Web surveys and computer-based systems for survey research, led to the expansion of the concept to also include the description of contextual circumstances beyond the survey itself, e.g. observations, which may prove useful in the management, understanding, and evaluation of the collected data.

The term paradata followed a similar path in heritage visualisation. With the increasing computational capacity of the 2000s and the move from schematic, illustration-like 3D visualisations to photorealistic ones, researchers started questioning the validity, and consequently the scholarly value, of such representations, while exploring ways in which the intellectual rigour that went into their creation, including accuracy, uncertainty, and interpretation, could be communicated. As a result, two separate solution-oriented directions developed: a computational and a conceptual one. The two approaches did not develop in parallel; computational approaches were developed well before the term paradata was used in 3D heritage visualisation, while the conceptual one started developing with the symposium on Making 3D Visual Research Outcomes Transparent held at the British Academy and the subsequent expert seminar at King’s College London in 2006, out of which the LC was born. These two paths started converging when projects began developing more practice-oriented solutions to exemplify how the principles described in the LC (and later the Seville Charter) could be implemented.

2.1 Computational Approaches to Paradata

Already in the mid-1990s, scholars implemented technological solutions to demonstrate uncertainty and make clearer the hypothetical nature of three-dimensional representations. For example, Roberts and Ryan (1997) developed a VRML prototype of a parametric visualisation of the Roman theatre in Canterbury in which the change of different parameters resulted in a change in the 3D representation, thus allowing users to be cognisant of the ambiguity of the preserved evidence. Several scholars also followed a stylistic approach to highlighting uncertainty. For example, Eiteljorg (2000) used visual degradation and different levels of transparency, while Zuk et al. (2005) implemented various visual cues to highlight temporal uncertainty. Similarly, Roussou and Drettakis (2003) suggested the deployment of non-photorealism for rendering 3D representations in more artistic and expressive styles. Other scholars developed approaches based on computer science and mathematics, for example by suggesting a probabilistic model based on fuzzy logic in which the reliability of a 3D representation was given a numerical evaluation (Niccolucci & Hermon, 2010) or a pseudocolour (Sifniotis et al., 2006). A significant amount of 3D scholarship also focused on high-fidelity and predictive rendering (see, e.g., Devlin et al., 2003; Happa et al., 2012; Papadopoulos & Earl, 2014) to produce representations that are validated by the simulation of physical properties (e.g. light). Lastly, several projects dealt with uncertainty by developing alternative reconstructions by means of either manual or procedural modelling (Papadopoulos & Earl, 2009; Piccoli, 2016).Footnote 2

2.2 Conceptual Approaches to Paradata: Charters and Principles

The conceptual approach to paradata was born in 2006. It is not entirely clear when the term was first used; however, there is consensus that it was Drew Baker from the King’s Visualisation Lab who introduced it in relation to heritage visualisation before it became almost synonymous with the LC. In his paper Towards Transparency in Visualisation Based Research, Baker (2007) argued that there is a ‘parallel stream of ancillary information to metadata’ that:

is essential to understanding and building successful and transparent research hypotheses and conclusions, particularly in areas where data is questionable, incomplete or conflicting, and explores how this can be applied to the process of creating three dimensional computer visualisation for research.

In the same paper, and in its expanded version in which the term paradata features in the title (Baker, 2012), the author uses the ‘Data, Information, Knowledge, Wisdom model’ from the field of information science to argue that understanding increases when more connections are created among data and sources, interpretations, and decisions, essentially moving from data, the lowest level in the knowledge chain, to information, knowledge, and ultimately wisdom. At the same time, however, Baker (2012, 171) criticises the model, arguing that in the process of metamorphosing data into wisdom, the knowledge chain may become contaminated by both the data and the processes involved in the move to higher levels of understanding. For this reason, paradata become essential, even though they pose certain challenges, particularly in terms of quality, quantity, granularity, time, and the sanitisation of the creative process.

The term paradata has become almost synonymous with the LC, which was conceived in 2006 as a result of an expert seminar at King’s College London organised in the context of the project Making Space (run by the King’s Visualisation Lab at King’s College London and funded by the UK’s Arts & Humanities Research Council) in collaboration with the VAST-Lab and EPOCH: The European Network of Excellence on ICT Applications to Cultural Heritage. The first version of the LC, entitled ‘The London Charter for the Use of Three-dimensional Visualisation in the Research and Communication of Cultural Heritage’, was published on the 14th of June 2006, while the second version, entitled ‘The London Charter for the Computer-based Visualisation of Cultural Heritage’, was published three years later, on the 7th of February 2009, accompanied by ‘A New Introduction’ (Denard, 2012). The second version of the LC aimed at broadening its scope by incorporating not only 3D visualisations but also any other type of computer-based visualisation, including replicas of museum artefacts, as well as visualisations that aim to (re)construct or evoke heritage but do not come from the heritage field, e.g. those in entertainment.

Both versions of the LC described its premises through a series of principles rather than concrete aims and methodologies for its implementation. Overall, the LC attempted to create a framework and propose the principles under which visualisation practitioners should operate to achieve intellectual rigour. It is beyond the scope of this chapter to delve into the various LC principles; however, it is worth highlighting that although the first version was more open to the selection of the most appropriate means of communication according to the intended message, audience, and circumstances, the second version used much firmer language, and recommendations became granular requirements.Footnote 3 For example, Principle 4.5 (2006) suggests that ‘it may be necessary to disseminate documentation of the interpretative decisions made in the course of a 3d visualisation process and, as far as is practicable, the sources used’ [emphasis added]; Principle 4.6 (2006) adds that ‘transparency information requirements may change as levels and sophistication of understanding of particular 3D visualisation methods rise, and will vary from community to community’ [emphasis added]. In contrast, Principle 4.5 (2009) states that ‘A complete list of research sources used and their provenance should be disseminated’ [emphasis added], and Principle 4.6 (2009) adds that ‘Documentation of the evaluative, analytical, deductive, interpretative and creative decisions made in the course of computer-based visualisation should be disseminated in such a way that the relationship between research sources, implicit knowledge, explicit reasoning, and visualisation-based outcomes can be understood.’

The LC gave birth to other charters and attempts to create frameworks that would implement its principles and standardise paradata documentation. The Seville Charter (SC), for example, was conceived in the context of the Spanish Society of Virtual Archaeology and the International Forum of Virtual Archaeology to develop an implementation guide, particularly in relation to issues faced in archaeological 3D visualisation (Grande & Lopez-Menchero, 2011; Lopez-Menchero & Grande, 2011). The beginnings of the SC can be traced back to the establishment of ARQUEOLÓGICA 2.0: The International Meeting of Archaeology and Graphic Informatics Heritage and Innovation in 2009, and the session ‘Reflections about the London Charter’ followed by a plenary assembly on the ‘Foundations of Virtual Archaeology’. The first draft of the SC was presented a year later at the second meeting of ARQUEOLÓGICA 2.0. The SC consists of a series of principles, some of which overlap with those of the LC. For example, they highlight the need for interdisciplinary collaboration (Principle 1) as well as the use of digital technologies to complement, not replace, existing tools and methodologies (Principle 3). The SC ultimately emphasises authenticity as a ‘permanent operational concept’ (Principle 4) according to which alternative interpretations and different levels of accuracy are presented.Footnote 4 This can be achieved through historical rigour (Principle 5), scientific transparency (Principle 7), and the provision of training that will generate more professionals able to conduct and evaluate such scholarship (Principle 8). Although the SC is presented as a means to implement the LC in the context of archaeological heritage, it does not provide specific guidelines or ways to standardise the application of its principles.

2.3 Implementations and Extensions of Paradata Charters and Principles

Several projects have tried to showcase how the London and Seville Charters can be applied to specific case studies. Georgiou and Hermon (2011), for example, used the 3D visualisation of the Hellenistic-Roman theatre in Paphos to provide a list of research sources (Principle 3) and propose ways in which Principles 4, 5, and 6 could be implemented, e.g. by developing a reliability chart, applying an XML schema for describing the metadata of the sources that informed the (re)construction, and superimposing the 3D model over the actual remains of the theatre. Hermon and Niccolucci (2018), on the other hand, used the case of the Christ Antiphonitis Church in Kyrenia, Cyprus, to discuss its digital documentation and the virtual recomposition of its frescoes according to the LC principles. However, as the authors admit in their conclusion, objects have features that the principles of the LC cannot capture, and they argue that scientific analyses ‘are no less deceptive than a pretty, but undocumented visualization’ (p. 45). Similar projects have also been produced in relation to the SC (see, e.g., Almagro Vidal et al., 2011).

Apart from projects that explicitly follow the London and Seville principles, there are also those that have attempted to extend them by developing new methodological frameworks. For example, Pletinckx (2007) developed the Interpretation Management tool, which provides a guide consisting of five steps mostly focused on the LC’s Principle 3 (research sources), aiming at documenting the correlations between sources and the resulting 3D model to achieve ‘scholarly transparency’ (p. 5). Using the example of the Saint Saviour church in Ename, Belgium, the author explains the steps that need to be followed for a transparent 3D visualisation (pp. 27–32); however, documenting and linking each source in the proposed manner is no minor undertaking.

Carrillo Gea et al. (2013) used the LC and the SC as a basis to propose a model for requirements engineering in software development for digital archaeology, while Apollonio and Giovannini (2015), using the Porta Aurea in Ravenna as a case study, developed a rather complex paradata documentation workflow to standardise the capturing of the modelling process and of the sources that informed the (re)construction. Grellert et al. (2019) developed ‘Sciedoc: The Reconstruction Argumentation Method’, a web-based tool for the documentation of decisions in the form of interrelationships among (re)constructions, arguments, and sources. Demetrescu and Ferdani (2021), on the other hand, developed the ‘Extended Matrix’ by using the underlying principles of the Harris Matrix (Harris, 1989), which was invented in 1973 to describe the physical and temporal relationships between stratigraphic units in archaeological excavations. The Extended Matrix is based on the principle of standardisation, making use of a graph database and a five-step protocol (data collection, analysis, reconstruction, representation, and publication) to describe the relationship between the archaeological evidence and the 3D (re)construction. The complexity of the system and the resources required for its implementation may be the reasons why this paradigm has not been embraced by the 3D visualisation community. Lastly, several projects have, over the years, been developing metadata models and ontologies for the documentation of 3D heritage (see, e.g., Kuroczynski, 2017; for an overview, see Börjesson et al., 2021); however, since their focus is more on standards and sustainability than paradata, their examination is beyond the scope of this chapter.

2.4 Conceptual and Computational Paradata: Looking Back and Looking Forward

This section discussed two approaches to paradata: the conceptual, whose beginning coincides with the conception of the LC, and the computational, which started more than a decade before the LC was developed. Computational approaches to transparency and uncertainty grew out of the need to rethink how data uncertainty is communicated to end-users; and, although one could argue that such computational approaches are not a perfect fit for the concept of paradata, since they do not document context and decisions, they are still based on the fundamental paradata premise, i.e. to communicate intentions, hypotheses, decisions, and interpretations (Börjesson et al., 2021, p. 195). Even though this way of doing paradata may not be as explicit as, for example, a paragraph that describes variables and decisions, or data entered in fields with controlled vocabularies, and may not provide the space to develop an argument within the knowledge space of a 3D model (Hoppe, 2001; Schreibman & Papadopoulos, 2019), such computational approaches fulfil the intention to communicate context and decisions at a (visual or other) level that the researcher thought appropriate for their audience.

Even though the LC was seen as the Messiah that came to solve the issues 3D heritage visualisation had faced until then, and it undoubtedly shaped later discussions in the field, it has not helped the field to progress. Several projects were developed on the premises of the LC; however, to make it implementable, they followed the path of standardisation, attempting to turn it into another Dublin Core or CIDOC-CRM data model. However, as Sample (2011) argues, paradata is ‘…so flawed, so imperfect that it actually tells us more than compliant, well-structured metadata does.’ The downside of the LC and its extensions or implementations is that its principles are based on a seemingly linear process of visualisation in which direct links can be made between data and 3D models. As this chapter demonstrates, tracing roots, tracks, and connections in knowledge production is utterly problematic, especially since the various perceptual, physiological, and technical factors, as well as their connections and outcomes, can rarely be identified in a 3D visualisation. The value of approaching knowledge production by documenting research sources in the form of lists and direct relations is debatable, particularly because such documentation would only assert an ostensibly objective method and consequently testify to the epistemic authority of the creator.

The next section will lay the foundation for arguing that the concept of paradata in heritage visualisation needs to be revisited. It will do this by problematising the process of 3D (re)construction in archaeology, particularly considering the different aspects of excavating, perceiving, and documenting three-dimensionality.

3 Three-Dimensionality and Knowledge Production in Archaeology

This chapter considers 3D visualisation to be a research project carried out by a hybrid scholar, e.g. a digital archaeologist, who has both domain expertise and highly specialised technical skills. Such a person will produce the 3D visualisation based on their study of relevant materials, including those that may have been produced during an excavation, bibliographic and ethnographic research, the examination of material produced in other research projects, and even discussions with archaeologists and other specialists. Considering that in a 3D visualisation project, among the first materials we examine are those produced during an excavation, the following sections will explore the perception and documentation of three-dimensionality in relation to excavation practice.

3.1 Excavating the Paradoxes of Archaeology

Archaeology is almost synonymous with excavation (Tilley, 1989, 275): a transformative process with an ambiguous and paradoxical meaning (Lucas, 2001) that influences the way we understand the past (Edgeworth, 2011). On the one hand, excavation means the recovery of past remains; on the other, to understand and materialise these remains, their context and coherence are shattered, and a new material reality is produced; recording techniques are used to immortalise excavation phases (Bateman, 2006, 68), i.e. to produce a record,Footnote 5 which will allow further examination of the evidence. Archaeology is also based on another paradox. Although the real world is three-dimensional, conventional recording mechanisms flatten archaeological reality into a two-dimensional production. 3D visualisation specialists therefore rely on the products of this paradox to produce three-dimensional understandings of the past. This transformation of evidence from one form to another is problematic, since we lack the tools, knowledge, and even awareness to understand the transformations that lead to both a loss and an inflow of data. Therefore, in order to critically assess this paradox, we should problematise the various neglected factors that affect the recorded evidence and invalidate the utopian term ‘objective record’.

3.2 Perceiving Three-Dimensionality

All objects have a certain morphology; however, to understand the morphology of an object we need to examine both the components that comprise morphology and any contextual elements that influence how these are structured and perceived. More specifically, an object’s morphology should be considered both at a micro level, i.e. its fine structure (e.g. colour and texture), and at a macro level, i.e. its gross structure (e.g. geometry and shape). In addition, the processing and construction of information about the real world is based on the principles of three-dimensional vision in coordination with the rest of our sensorium, our situated activities, and embodied practices (Thomas, 1993; Tilley, 1994; Tilley & Bennett, 2004, 2008). However, morphology should also be considered along with contextual or external elements that further define it, such as light and the angle of view.

Objects and spaces in the world are three-dimensional; however, the optical image they form upon our retina is two-dimensional. This means that our visual system is responsible for transforming this flat image into a three-dimensional representation by using a series of monocular (perceived by the operation of one eye) and binocular (perceived by the operation of both eyes) cues. We can see in three dimensions because of retinal disparity: our eyes are located at different positions in the head, and the resulting difference between the two retinal images (stereopsis) provides the information needed by the brain to calculate depth. Convergence and accommodation (Helmholtz, 1856) are used together with stereopsis to focus a scene on the retina. When movement is involved, motion parallax, a movement-produced cue related to the motion of the observer, facilitates depth perception, also leading to the accretion and deletion of objects as we move relative to them (Gibson et al., 1959; Rogers & Graham, 1979).

The perception of distance and object size also depends on a series of monocular cues. For example, perspective is one of the best-known and best-understood cues, as it is based on a simple principle: an object that is closer to the eyes appears larger, whereas the further away an object is, the smaller its retinal image will be. Based on the principles of perspective, there are also other cues that influence depth perception; these include linear (Saunders & Backus, 2006) and atmospheric perspective (O’Shea et al., 1994), texture gradient (Gibson, 1950, 77–94), relative and familiar size (Hochberg & Hochberg, 1952; Hochberg & McAlister, 1955), relative height (Dunn et al., 1965; Epstein, 1966), and interposition/occlusion (Chapanis & McCleary, 1953). However, some sources of depth perception provide more reliable signs of depth than others (Guibal & Dresp, 2002; Hillis et al., 2004; Jacobs, 2002).
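The size–distance regularity underlying the perspective cue can be stated compactly. As an illustrative sketch only (the notation is ours and is not drawn from the studies cited above), the visual angle subtended by an object of height $h$ viewed at distance $d$ is:

```latex
\theta = 2\arctan\!\left(\frac{h}{2d}\right) \approx \frac{h}{d} \quad \text{when } h \ll d
```

Since the size of the retinal image is proportional to this angle, doubling the viewing distance roughly halves the projected size; this inverse relation between distance and image size is the regularity that the visual system exploits when inferring depth from perspective.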

Our sight functions in coordination with light. We can see because particles of light bounce off objects and surfaces and then reach our eyes, and this information is in turn deciphered by the brain (Tarr et al., 1998; Wade, 1999, 9–25). The initial processing of light patterns takes place at the retina of the eye, which is layered with cone photoreceptors sensitive to the wavelengths of red (R), green (G), and blue (B) light (Kaiser & Boynton, 1996). Light, in combination with our existing knowledge of the world, also generates two phenomena that enable the perception of objects’ morphology and spatial relationships: (1) shading, i.e. the variation of light’s intensity on different surfaces, generated by light coming from a particular angle and reflected off surfaces in a particular way (Kleffner & Ramachandran, 1992; Ramachandran, 1988), and (2) shadowing, i.e. the way that shadows are cast when an object blocks the path of light onto another one (Cavanagh & Leclerc, 1989; Mamassian et al., 1998). Although our brain uses shadows and shading to extract information that enhances depth perception, research suggests (Ho et al., 2006, 645–646) that observers can make errors regarding surface roughness.

The position of the sun, the clouds in the sky, and the haze of the atmosphere make light behave in different ways and consequently affect the perception of objects illuminated under these changing conditions. Objects’ reflectance and transmittance properties also affect light’s behaviour. Furthermore, the three variables of light, i.e. intensity, distribution, and colour, vary greatly depending on the source of illumination, thus further affecting how object morphology is perceived. Ashley (2008) undertook systematic vision testing under different lighting conditions at the excavation of Çatalhöyük, demonstrating that people perceive environments differently depending on several internal and external dynamics, thereby making evident the need for a viewer-centred archaeology.

The perception of three-dimensional space is a multimodal production since objects stimulate all the sensory organs of the human body. This process, however, is not linear since sensory systems are triggered differently depending on objects’ properties. For example, we have learnt to associate texture with tactility (Klatzky & Lederman, 1987; Lederman & Klatzky, 1987; Taylor et al., 1973); however, the initial information about objects’ texture is extracted from the visual system, which then directs the other perceptual mechanisms to enhance objects’ surface perception (Heller, 1982; Landy & Graham, 2004). In addition, situated activities and embodied practices, experiences (Charest, 2009), memories (Casey, 2000; Jones, 2007, 1–26), and the emotional and motivational state of the observer significantly affect the way that reality is perceived. In other words, three-dimensional space is an amalgamation of visual learning and intuition (Gibson, 1950, 10–16).

3.3 Flattening the Three-Dimensional World into Two-Dimensional Records

Conventional recording methods, such as text, drawing, and photography, depict three-dimensionality through a series of conventions based on established and, to a great extent, blackboxed practices (Latour, 1999; Lucas, 2012, 239). These attempt to separate the subjective from the objective (Barker, 1993, 163; Yarrow, 2003, 72) and to ensure that any biases can be identified (Andrews et al., 2000, 526). Although this chapter takes as a premise the normative mode of translation in archaeological practice, i.e. from three to two dimensions, it does not assume that this is always the case. An increasing number of projects deploy 3D methods of documentation, while others (see, e.g., Dawson and Reilly, chapter ‘Towards Embodied Paradata. A Diffractive Art/Archaeology Approach’ in this volume; Reilly et al., 2021) have been exploring multimodal translations, e.g. sound, to problematise the nature of the archaeological record. This chapter, however, does not aim to address these separately since, regardless of the medium used for capturing information, the factors that affect processes and outcomes, including technological authority, individual choices, and sensory perception, overlap.

3.3.1 Three-Dimensionality in Text

The most common recording method in archaeological practice is text (Hodder, 1989). The objective–subjective polarity of processualists and post-processualists gave birth to different methods for recording an excavation in textual form. Since it was thought that written records, especially those in the form of descriptive narrative, could not express the excavation as a neutral and scientific record, Single Context Recording (Westman, 1994, §1.2) and Harris Matrices (Harris, 1989) were employed, partially replacing discursive field diaries. The predefined forms and detailed guidance that these provided attempted to ensure that results retained their neutrality regardless of the agents of excavation and their actions (Edgeworth, 2003).

Textual sources can provide a wide range of information regarding the perception of three-dimensionality; for example, in notebooks, where descriptive narrative is mainly used, the identification of colour and texture depends on an individual’s observation and free description. Inventions such as the Munsell Color Chart (Munsell, 1905, 1912) provide some standardisation, but many conditions must be met; for example, the readings should be taken under natural light, on a sunny day, and the soil should be moist. Such parameters confirm that ‘The probability of having a perfect matching is less than one in one hundred’ (Munsell Soil Color Charts, 1994, 1). Goodwin (2000) examined the process of defining the colour of soil in excavations suggesting that it is not only a mental task but also a situated activity which involves physical tools and embodied practices, and thus people perceive and describe colours differently. Similar problems arise when describing texture. Although there are flowcharts that help in the identification of the texture of soils and sediments by finger testing, this is also a subjective process determined by the individuals who record the evidence.

3.3.2 Three-Dimensionality in Photography

Photography was adopted by archaeology soon after its invention, as it was believed that it could overcome subjectivity, with photographs becoming the memories of a past transformed by the excavation (Locatelli et al., 2011, 329). A number of factors invalidate the claim that photography produces an objective pictorial record compared to other illustrative methods (Conlon, 1973, 55; Ivins, 1953, 137). For example, technical parameters, such as lens quality, format, and processing, affect how reality is captured. Cameras are also inherently limited in distinguishing subtle colour/tone changes, while poor exposure latitude, i.e. the range between the lightest and darkest parts, should also be considered (Hester et al., 1997, 166). Colour capture also depends on the type and sensitivity of sensors and varies depending on the reproduction medium, e.g. a computer monitor or a printer. The relative position of the photographer, the angle of view, and the distance from the subject also have an impact on the understanding of a captured scene.

Photographs do not objectively capture; rather, they possess an interpretative role which derives from the different kinds of gaze ingrained in the photograph and accrued from its context (Lutz & Collins, 2003). In archaeology, photographs are used out of context, along with other images and text focusing on specific aspects; therefore, they are to a certain extent manipulated to represent in a seemingly unbiased manner a particular moment in time. The pluritemporality of the sites is therefore lost (Dawson et al., 2022).

3.3.3 Three-Dimensionality in Drawing

Drawing in archaeology is still synonymous with pen and paper, helping archaeologists to decipher material relationships which are not understandable by any other means. Schematic, interpretative, or pictorial/naturalistic and highly conventionalised drawings transform a three-dimensional, colourful, and freely defined real world into a flat, linear, and colourless production (Leibhammer, 2000, 129; Piggott, 1965, 165). Excavated features are translated into flat lines: edges become fixed, silhouettes clearly defined, and black lines delineate space (Ford, 1993, 319). In the physical world, however, objects are not flat and do not have clear edges, while outlines are diffuse and multiple. Therefore, drawing diminishes the sense of three-dimensionality, while personal choices, the angle of view, and perspective distortion cause further misjudgements regarding shapes and edges (Griffith et al., 1996, 97). Also, colour variation in soil, which is essential for understanding slight changes in contexts, is not depicted in drawings, which are typically in black and white. The depiction of texture by using stippling, hatches, lines, and gradations of tone is equally problematic, as it relies on project-specific conventions, and in most cases, little indication of texture is included in the drawings or in field notes. Guides for good practice also suggest that light and shade should be omitted; otherwise drawings may be misty and confusing (Griffith et al., 1996, 100).

Drawings are subjective responses to the immaterialisation of the world, and as such, they always vary; this is not only due to different perceptual capacities and skills but also due to illustrators’ style and viewpoint and their decisions about what to include and omit (Morgan & Wright, 2018). Illustration, as is the case with photography and text, is an interpretative act.

4 A Leap of Faith: Revisiting Paradata

Archaeological remains are translated into different chronotopes, both during an excavation and during documentation, study, and (re)construction. Although we argue for more and better provenance documentation (Reilly et al., 2021), the identification of provenance becomes a complex, multifaceted pursuit since the origins of the decisions we make and of the materials produced are framed and afforded by data structures and standards, conventions, limitations of tools and methods, cognitive mechanisms, and personal capacities. As a result, the argument that we can go back to the initial information or that the translation process can be circulated (Witmore, 2004) should be challenged.

Many scholars have addressed the sheer complexity of paradata, especially in the case of (3D) heritage visualisation. Turner (2012), for example, argues that the formation of understandings via visual perception can be complex but also confusing and wonders whether paradata could be a solution or a curse. Devlin (2012) also addresses the complex nature of computer graphic simulation, arguing that many factors, including the inherent limitations of technology and of visual perception, make transparency challenging. Reilly et al. (2021), on the other hand, by applying an art/archaeology approach to archaeological practice, discuss the ontological shifts that conventional recording methods undergo and argue that paradata become elusive and illusionary. Similarly, Börjesson et al. (2022) highlight paradata’s technical and epistemological heterogeneity and the challenges in identifying and analysing them due to their different levels of completion, writing style, and nomenclature. Other scholars have gone a step further, suggesting that the concept may be counterintuitive and may need to be reconsidered or even abandoned. For example, Havemann (2012) writes about naive paradata (p. 158) and proposes a more ‘reasonable’ approach (p. 159) in which only meaningful paradata are preserved. Mudge (2012) suggests that paradata may have to be retired and replaced by the Lab Notebook, while Schreibman and Papadopoulos (2019) argue that even if it were possible to document every decision, documenting and representing the rationale for such decisions is an unattainable task.

The concept of paradata, which has become almost synonymous with the London Charter, has haunted the field of (3D) heritage visualisation. This is because, although post-processual in nature—aiming at making space for variables and multiple interpretations—its elaborations and suggested implementations have been largely underpinned by a processual discourse according to which rigour and scientificity can make processes reproducible. In 2006, when the term was first introduced in 3D heritage visualisation, the field was carrying heavy baggage: that of photorealism and constant shifts in technology. However, 15 years later, and considering both the evident lack of systematic integration of paradata practices in 3D visualisation, as well as the move towards alternative forms of scholarship, the recognition of atypical outputs, and the renegotiation of established norms and practices, paradata need to be reconsidered. This chapter argues that although paradata is a much-needed concept, it requires a leap of faith.

Attempts to standardise the documentation of paradata and take away the roughness inherent in the processes and protocols employed in archaeological practice (Börjesson et al., 2022) do not do justice to the richness and flexibility that the concept of paradata provides. The need for standardisation has been dictated by the emphasis that the research community has placed on issues of transparency, without considering that the inherent problems are not dissimilar to the issues faced in conventional means of representation and research outputs. Two decades ago, scholars embarked on photorealism, and the need for transparency and authenticity emerged. This was also deemed to be the means through which the 3D scholarly community could respond to the criticisms of a more conservative research community that did not have the capacity to deal with outputs that were not part of the canon. However, we have had enough exposure to such products over the years, and thus we are more able to evaluate 3D scholarship. Although we may still require new literacies to decipher its products, it is not reasonable to put the entire burden on the creators of such outputs by asking them to transform their scholarship into forms that correspond more closely to the research outputs our field has been accustomed to.

How reasonable would it be to ask historians, who, for example, write about a historical event, to compile lists with sources, correlations, and hypotheses in order to prove that there is a linear relationship between the sources and their interpretation? If this is not an expectation we have of a historian or any other humanities scholar, why should a 3D visualisation scholar be an exception? Why should a 3D visualisation be accompanied by additional documentation that accounts for every decision and the factors that influenced them? We need to accept that scholars who use 3D for analysis, synthesis, and knowledge communication have the necessary scholarly expertise to decide what aspects of their decision-making need to be communicated to their intended audience. Consequently, we need to trust that the recipients of that scholarship have a sufficient understanding of what such scholarship entails and, thus, can evaluate research outputs. At the same time, we need to accept that our processes have inherent biases and contaminations. As Baker (2007), who first used the term paradata in heritage visualisation, argued, in the process of creating connections to transform data into wisdom, the knowledge chain will get contaminated by conscious and unconscious variables.

How to do paradata then? Is there a minimum level of detail and an appropriate form for our information gathering and decision-making that will be adequate? And is it only paradata that we need to communicate or also the peridata (Gant & Reilly, 2018), i.e. the decisions about what has been included or omitted as paradata? Is this even feasible or useful? And how can or should we account for all the chronotopes and pluritemporalities (Dawson et al., 2022; Reilly et al., 2021) we produce in our practice and the data hidden in our conscious and unconscious processes? Do we need to move towards the systematisation of paradata or could we see the inherent roughness in our practice as an opportunity for reflection and self-expression (Börjesson et al., 2022)? Lastly, since changes in technology and obsolescence seem to be inevitable in 3D scholarship, should paradata also be able to dynamically adapt to the changing ecosystems we work in? This chapter does not offer a response to any of these questions but only a perspective that problematises the great range of variables that influence knowledge production, thus demonstrating that the guidelines set by such charters must be revisited. Tracing the interpretative process through lists or trees of hypotheses becomes onerous and counterintuitive and neglects that decision-making is sensory, embodied, and multitemporal, as well as a sociocultural, situated act.

3D modelling, similar to recording in excavation, is not a passive transformative process but a choreographed (Huvila & Sköld, 2021) worlding (Pijpers, 2021), which makes the modeller think about the translation of the archaeological material into a computer programme. Such programmes enable their operators, through tools and conventions, to produce the attributes of three-dimensional space; doing so also requires skills, and thus personal capacities and choices, while the affordances of technology play a major role in the process. Therefore, similarly to the process of documentation during an excavation, which depends on a wide range of often neglected perceptual, physiological, and technical factors, thereby generating an impenetrable black box, mechanisms of reproduction should also be challenged: not necessarily to dismantle their black boxes but at least to raise awareness of the variables and factors that invalidate the argument that 3D visualisation should be a reproducible act.

The author has argued together with Schreibman (Papadopoulos & Schreibman, 2019; Schreibman & Papadopoulos, 2019) that there is an imperative need to move towards a different paradigm and has proposed the theoretical and methodological framework of 3D Scholarly Editions: a framework that allows the production of an ecosystem around 3D scholarship that has the potential to enable and stimulate the scrutiny of authenticity and the rethinking of what paradata should be and how it should be captured. By looking at 3D as a form of text, we can build an intertextual network that provides the potential for linking the editorial, epistemological, and technical processes involved in 3D knowledge production. Thinking of 3D as text is not problem-free and adds further complexity to an already complex process of interpretation. For example, who is the author of that text and what is the role of the editor? Is the 3D modeller the author and is the editor the person who annotates and contextualises the model? In this paradigm we have argued that the goal of a 3D Scholarly Edition is not to remediate the intention of the author (i.e. the modeller) but that the modeller is another kind of editor in the text’s (re)construction. There are also further complications to this model, especially if we think that the role of the editor can also be assumed by non-human actors, e.g. in the case of dynamic annotations and Linked Open Data. In this model, we do not propose to see the editor as someone who asserts epistemic authority in the process of knowledge production, but as someone who is allowed (and enabled by a 3D Scholarly Edition technological framework) to construct a knowledge site that will provide the scholarly community with tools for ‘prying problems apart and opening up a new space for the extension of learning’ (Apollon et al., 2014, 5–6).

The leap of faith is presented here not just as a colloquial concept but also as a framework that opens up new possibilities for looking at paradata—especially in the context of assessment reform and non-typical outputs—and breaking away from the originally suggested rigidity and standardisation. While the connection between paradata and research assessment may not seem obvious, it is important to consider that paradata in heritage visualisation was suggested as a means to promote transparency, and in response to the criticism that 3D scholarship failed to adhere to established standards and practices. In this regard, the leap of faith provides a conceptual framework for ensuring that research processes and outputs are open, transparent, and inclusive; it emphasises the importance of diverging from conventional notions of scholarship while also trusting researchers’ ethics and integrity. This is in line with the recent Declaration on Research Assessment (DORA, n.d.), the Agreement on Reforming Research Assessment (CoARA, 2022), and the Dutch Recognition and Rewards programme (R&R, n.d.), as well as discussions, especially in Digital Humanities (see, e.g., Nyhan, 2020; Schreibman et al., 2011), about expanding the understanding of what scholarship means and how to recognise and evaluate work that falls outside of conventional venues. Seeing paradata through a leap of faith, then, can facilitate this process and smooth the transition to more contemporary conceptions of scholarship.

5 Conclusion

Using as a starting point the principles set out by the LC and its various implementations, this chapter attempted to look back at conventional archaeological practice and problematise processes and products of interpretation. The premise of this chapter is that the creation of a 3D (re)construction requires us to look back at the unearthing of data and try to decipher the processes deployed in their documentation. By presenting the principles of three-dimensionality, both in terms of perception and recording, this chapter showcases that a 3D (re)construction is not a linear process and does not happen in a single black box, since every element of the visualisation process is by its nature bounded in a black box. Since knowledge is built through perception, and individuals’ perceptual abilities vary, the mechanisms of knowledge production, and consequently the resulting knowledge, vary too. Perception is a complex mechanism, influenced not only by our senses but also by our experiences and memories. It is our body, after all, that is the decisive factor in the formation of understandings, providing the sensoria through which experiences of the world are structured.

Considering that there are such complex processes that make ‘the joint production of actors and artifacts entirely opaque’ (Latour, 1999, 183), this chapter proposes that the concept of paradata (at least as it has been interpreted by the LC) needs to be revisited. Instead of arguing for a process that will standardise the capturing of paradata, e.g., to make them machine readable, 3D visualisation requires a new approach—what I call a leap of faith—that aligns with our increased capacity to deal with, as well as recognise, evaluate, and reward 3D scholarship. The inherent roughness and lack of systematicity that 3D visualisation entails and the fact that paradata is ‘bound to be incomplete’ (Huvila, 2022) should be seen as an opportunity to develop new frameworks that will enable the authors and editors of 3D models to break free from the shackles of the LC and develop embodied productions of materiality that can do justice to the ‘unbearably complex’ (Huvila, 2013) nature of 3D (re)construction. In such a way, the reconceptualisation of paradata within a framework that allows us to produce 3D scholarship that can be seen as equal to other forms of (digital) scholarship can provide the means to better integrate less typical outputs into our fields and thus expand the textual, visual, and multimodal vocabularies of knowledge production.