Scholars in both textual editing and 3D (re)constructions have faced similar issues in modelling the textual/material record, and, surprisingly, they have employed the same or similar terminology in confronting these challenges. This is what we refer to as ‘a parallel path’. These issues concern the use of evidence, ways of making the decision-making process visible, and ways of dealing with ambiguity. In this section we outline these parallels, which form the key features of the final section, Towards 3D Digital Scholarly Editions.
Both fields work to reconstruct their object of study on the basis of existing, albeit imperfect, evidence (if the evidence were perfect, there would be no need for editors or editions). In both cases, the researcher is evaluating the evidentiary record, and the further back we go in this record chronologically, the less evidence remains. In many respects, 3D (re)constructions of ancient spaces are more akin to the textual editing tradition of stemmatics applied to premodern texts, in which the editor reconstructs the original text by working backwards from extant witnesses through a set of relationships expressed as a tree structure, in an effort to come as close as possible to the original text before it was corrupted (or indeed thought to be improved) through the copying process.
3D (re)constructions of sites that survive only in fragmentary form, e.g. a prehistoric settlement, likewise go through a process of evaluating extant data, resolving, discarding, and harmonising evidence in an effort to understand what might have existed. The evidence interpreted includes secondary sources documenting the remains (textual records and field notes, photographs, and illustrations), the site-specific physical remains themselves, and information from other sites that bear some resemblance (temporal or spatial) to the subject under investigation. The goal in both fields is the (re)construction of an artefact (be it a poem or an ancient building) which makes clear to the user where there are gaps in knowledge of the putative original and what methods and evidence were used to fill these gaps.
On the other hand, 3D (re)constructions based on (nearly) complete evidence, e.g. (re)constructions of a twentieth-century urban battlefield in which old buildings still stand, share more with the traditions and methods used to edit nineteenth- and twentieth-century texts. In both cases, there tends to be an abundance of evidence, and the researcher adjudicates among extant sources, each of which carries its own authority. Just as texts can exist in multiple versions (manuscripts, typescripts, and printings), some or all of which might have been authorised by the writer, reconstructing in 3D a building, a street, or what happened in that space at a specific point in time (or indeed over time) may draw on a plethora of evidence, including documentary, oral, and visual sources. This evidence may be contradictory and fragmentary, and some items of evidence may be more ‘authorised’, or credible, than others.
The theories underlying the textual tradition of versioning, in which the goal is not to establish a definitive or reading text but to reconstruct not only a textual history but also the underlying view of the nature of the text’s production (Tanselle 1995, 24), might prove a useful foundation for models of 3D (re)constructions in which what is of interest is the potential to make visible multiple states over time or alternative possibilities in interpretation. These ‘textual’ moments can be viewed as snapshots, each providing a unique, equally valid window onto the past (Schreibman 1993, 93). Textual editors can model these differences in TEI/XML within a single apparatus or framework, explicitly denoting regions of both shared and divergent text. Once these regions are marked, they can be analysed and visualised, providing the modeller with a method for recording difference over time and the reader with a standard notation for the decision-making process. While TEI/XML is not a suitable language for 3D, the theoretical framing may well be, as it offers a way to make variation visible within a single model.
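To illustrate, a minimal sketch of how such a point of divergence between two witnesses might be recorded in a single TEI apparatus entry (the sigla and readings here are invented for the purposes of the example):

  <!-- Hypothetical example: two witnesses share the surrounding text but diverge at this point -->
  <app>
    <rdg wit="#MS1">the silent town</rdg>
    <rdg wit="#TS2">the sleeping town</rdg>
  </app>

Because each reading is tied to a named witness, the same encoding can later be queried or visualised to show where the versions agree and where they diverge.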
The Oxford English Dictionary defines ambiguity as the ‘quality of being open to more than one interpretation; inexactness’. It is precisely because of the ambiguous nature of evidence that textual scholarship exists. Revealing, acknowledging, and/or resolving this ambiguity is part of the interpretive process. In the transition from print to digital editing of alphanumeric texts, it has become possible to represent ambiguity more explicitly in a non-notational manner, providing the reader with the documentary evidence to mediate between different textual states (Schreibman 2002, 288–89). There have been lengthy discussions concerning the contention that simply providing the reader with facsimiles of the various witnesses is a form of unediting (Schreibman 2013), i.e. an abdication of the editorial role. On the other hand, modelling the text according to a theory such as genetic editing or versioning to make visible the writing process allows readers to formulate their own sense of the work (or work-in-progress) situated in both time and place (Machan 1994, 303). In other words, the point of this type of editing is not to resolve the ambiguity of the textual record, but to expose it through modelling.
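By way of a brief, hedged illustration (the readings and certainty values are invented), TEI also allows an editor to expose an unresolved ambiguity rather than silently deciding it:

  <!-- Hypothetical example: two possible readings of a damaged word, each qualified rather than resolved -->
  <choice>
    <unclear reason="faded" cert="high">remembered</unclear>
    <unclear reason="faded" cert="low">remembering</unclear>
  </choice>

The reader is thereby given both possibilities, together with the editor’s assessment of their relative likelihood, rather than a single settled text.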
Modelling ambiguity has been the subject of much discussion in 3D (re)constructions. While the gathering, selection, and interpretation of evidence is subjective in both fields, 3D (re)constructions have fewer codified ways to express ambiguity (or indeed to consider whether or not it should be expressed) and to make the modelling process as transparent as possible (for a recent discussion see Watterson 2015). The representation of ambiguity became more pressing in the mid-1990s, and especially in the 2000s, as photorealistic models started dominating heritage representation. This, in turn, fuelled debates about their misleading nature (Miller and Richards 1995; James 1997; Goodrick and Gillings 2000; Eiteljorg 2000) and the problematic use of the word ‘reconstruction’ (Clark 2010). These concerns gave rise to a series of proof-of-concept implementations that demonstrated intellectual rigour in 3D models and made explicit the decision-making behind their creation, as a means of counteracting their problematic photorealistic nature. These implementations included alternative reconstructions, annotations, renderings in different colours, textures, and shadings, as well as ways of activating or deactivating ambiguous features and alternative models that would in turn affect other elements (Kensek 2007).
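Purely as a hypothetical sketch of what a more codified notation might look like (no such schema currently exists, and every element and attribute name below is invented), the same logic could be carried over to a 3D model element, recording alternative reconstructions alongside their sources and degrees of certainty:

  <!-- Invented, illustrative markup only: not an existing standard -->
  <modelElement xml:id="roof-01">
    <alternative cert="medium" source="#excavation-report">
      <geometry ref="roof_thatched.obj"/>
    </alternative>
    <alternative cert="low" source="#analogy-neighbouring-site">
      <geometry ref="roof_tiled.obj"/>
    </alternative>
  </modelElement>

Toggling between such alternatives, in the manner described by Kensek (2007), would then be a matter of rendering one or another encoded possibility rather than of producing separate, undocumented models.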
Unlike digital scholarly editing, which has a core set of technologies and methods (largely built around TEI/XML), 3D visualisations rely to a large extent on bespoke solutions, with no sustainable platforms, methodologies, or standards that might lead to the wider adoption of such practices (see, for example, the Archaeology Data Service Guide to Archiving Virtual Reality Projects (2018): http://guides.archaeologydataservice.ac.uk/g2gp/Vr_6-1; McDonough et al. 2010). The World Wide Web currently hosts a wide variety of DSEs, some going back to the earliest digital archives of the mid-1990s, providing the field with a tradition from which new theories, models, and editions are developed. However, due to their intensive computational nature, most 3D visualisation projects are developed for offline use and in a constantly changing environment in which software and frameworks become obsolete soon after their release. Nor does the Internet itself, with constant changes to browsers, offer a stable environment for such projects (Ruan and McDonough 2009), providing little opportunity to build a body of knowledge and a practice-based community.
As mentioned above, the creation of 3D models under the influence of photorealism gave rise to a series of debates regarding transparency, documentation of decision-making, standards, and intellectual rigour in the process of (re)construction. Earlier, more schematic work did not attract the same calls for transparency, as it did not, however unintentionally, ‘trick’ the viewer into reading reality into the (re)construction. Several projects attempted to establish principles to address this new environment. The best known is the London Charter (Denard 2009; http://www.londoncharter.org/), developed in 2006 at a meeting of 3D visualisation specialists who drew up a series of principles intended to ensure a certain level of standardisation in the creation of documentation, the sustainability of and access to models, and the articulation of the aims and methods of (re)creation (Denard 2012). For example, principle 4.6, ‘Documentation of Process (Paradata)’, states that:
Documentation of the evaluative, analytical, deductive, interpretative and creative decisions made in the course of computer-based visualisation should be disseminated in such a way that the relationship between research sources, implicit knowledge, explicit reasoning, and visualisation-based outcomes can be understood (Denard 2009, 8–9).
While a worthy goal, it is impossible not only to document but even to represent each and every editorial decision and the rationale behind it, as editors working in the Greg/Bowers (Bowers 1978) school of copy-text discovered. This would amount to providing a textual record not only of how each accidental (e.g. punctuation, spelling, and word division, considered less significant and meaning-making than substantives such as word choice) was handled, but of why it was handled that way. In both traditions, this goal becomes even more unobtainable, as it would require a full-time amanuensis following the researcher (or team of researchers), recording every conversation and saving every version of every document and every state of every electronic file. Even if it were possible to save all this, how could it be presented to the reader so as to make the decision-making process more transparent? Such a record might be more akin to an archive or an assemblage, but even if it could be ordered and catalogued, it would not necessarily demonstrate what happens in the space between the evidence and its representation: the ideas that inform the decision-making process and trace a path from evidence to representation.
Although several reports and frameworks followed the London Charter (see Footnote 2), none of them tackled the inherent visual illiteracy involved in reading photorealistic 3D visualisations as texts. There is no scaffolding within which the 3D model is situated to provide the reader with access to its conceptual and methodological underpinnings.