
1 Introduction

Since 2010, there has been an increasing shift from debating the value and modes of application of 3D models in the formal documentation of features of archaeological sites (Callieri et al. 2011; Koutsoudis et al. 2013) to their use as a common toolkit, integrated with other spatial technologies (Huang et al. 2016; Pepe et al. 2021). The initial excitement and novelty have given way to routine acceptance and utilisation. In particular, the emergence of technical methods that can use simple consumer-grade cameras and inexpensive software on home computers (and even laptops) has enabled what may be considered a revolution in on-site recording. Especially in complex, stratified or urban settings, the limitations of 2D recording, which requires multiple phase plans, and the difficulty of plotting features separated vertically by anything from a few centimetres to several metres have been addressed by a 3D product that simultaneously documents each stratigraphic layer to within almost a centimetre (Martínez-Fernández et al. 2020). These models can be rotated or truncated, and drawings and sections can be extracted from them. Even when an excavation takes place in phases or over seasons, previously unarticulated parts of the excavation layers can be digitally joined together and restored in a 3D environment. In this sense, this new tool for recording spatiality in a predominantly visual way filled an existing niche within the discipline, and so was adopted with relative ease. A brave new world seemed to open up, accessible even to a novice, with familiar tools that were already present in research facilities (in our experience (JP), a camera and a scale were enough to start in 2013, with the integration of total stations and GPS rovers following soon thereafter). Indeed, in the excavations at the site of Keros in the Cyclades in Greece, a fully digital recording system was in place, which rapidly integrated 3D models into a seamless documentation and interpretation system (Boyd et al. 2021).

However, while the method used to generate the data was relatively novel (Koutsoudis et al. 2013), the underlying utility of the data thus produced was not. By that point, the distinctive features of 3D-model generation technology had already been present in cultural-heritage management for years. Looking back, a contemporary practitioner would find familiar sights produced by portable scanners and image-based reconstruction software, as well as various uses of non-metrical data from digital cameras for 3D digitalisation work (Beraldin et al. 1999; Pieraccini et al. 2001; Bohler et al. 2001; Maestri et al. 2001; Koistinen et al. 2001). Moreover, it is not just the equipment that would be familiar: the whole zeitgeist accompanying the vigorous adoption of 3D digitalisation seen at the beginning of the twenty-first century has continued for two decades with undiminished passion (see, e.g., the affectionately titled contribution of Jones and Church 2020, ‘Photogrammetry is for everyone’).

2 The Affordability of 3D Scanning

The generation of 3D models from digital photos came to be known colloquially – if not entirely accurately from a technical perspective – as ‘photogrammetry’, and more specifically as ‘image-based modelling’ (Remondino and El-Hakim 2006; Quan 2010). It triggered a new wave of enthusiasm in cultural heritage, introducing a great number of field archaeologists to the possibilities of 3D documentation. Some identify the period of change with the growth of the concept of photo-tourism (Snavely et al. 2006, 2008). Indeed, the volume of photography generated since the introduction of the digital camera has been exponentially greater than that produced in the era of print photography. Gone is the time of a handful of carefully selected photographic opportunities per site, to be replaced by a deluge of frames snapped one after the other in the pursuit of the perfect one. The possibilities this innovation presented for structure-from-motion solutions were clear. Around this period, terms like ‘cost-effective’, ‘affordable’, ‘off-the-shelf’ and so on started to be associated with photogrammetric solutions and lived up to their billing. Digital reconstructions of the archaeological record had become a striking sight, even when made by ‘untrained’ hands using off-the-shelf equipment. The wide range of applications for image-based modelling, the rich output it produced and its affordability led to a steady increase in its use (cf. Remondino 2013; McCarthy 2014; Dubbini et al. 2016; Carvajal-Ramírez et al. 2019; Pakkanen et al. 2020). But even this rapid uptake was more of a change of pace than a radical innovation.

The foundation upon which this quiet revolution was built can be traced further back into the past, all the way to the late 1970s, by way of the 1980s and 1990s (Snavely et al. 2008; Granshaw 2018). Image-based modelling programs marketed today reflect the function that photogrammetry has had for more than a century: resolving 3D positions from disparate camera views (Granshaw 2018). Rather than constituting a fundamental innovation, the rise of image-based methods was largely a matter of fitting the shoe to the foot. This unfolded over many years, although the pace was perhaps determined by technological restrictions and cost more than by the concept itself. Recent changes in hardware availability and affordability, coupled with software becoming better optimized and more user-friendly, have also been a critical development (Hostettler et al., this book, Chap. 11). We can also observe that there was a process of accumulating experience at work. As the technology and techniques became established, a shift occurred from a handful of researchers/vendors/projects with access to the tools and resources to a generation of dedicated users – many of whom could be called ‘digital natives’ – embracing and using the technology. Their combined experience, as well as their dialogues and cooperative endeavours, would eventually come to embody a new, more mature mentality in 3D content production in general and in archaeological fieldwork in particular.

It was not just photogrammetry and computers that became available to the masses – dedicated systems for 3D scanning had also stimulated non-specialist users to develop creative technical solutions. Budget-friendly scanning set-ups requiring some DIY skills emerged, which used a line laser and a camera sensor as the basic tools for 3D digitalisation (Winkelbach et al. 2006, but see also Zagorchev and Goshtasby 2006). This occurred around the same time as the appearance of photo-tourism, and once they were market-ready, these and other 3D scanners were very effective, opening up the possibility of digitalising artefacts in-house (Todorov et al. 2013) and, more importantly, of digitalising them wholesale (Revello et al. 2016). Similarly, while the equipment adapted to environmental/terrestrial 3D scanning remained sufficiently expensive that it was restricted to the realm of specialised and comparatively costly projects, mobile and lightweight designs were developed to the point where one operator was sufficient to produce accurate measurements of complex exteriors/interiors. This was a major improvement over the cumbersome techniques that had been used only a few years before (see Koistinen et al. 2001). Once again, the game-changer here was that the operator did not need to have a narrow background in surveying or infrastructural design. In our experience (JP), the Faro Focus3D from 2015 was an accessible environmental digitalisation tool, requiring no formal training.

From publications to popular platforms like Sketchfab, it is evident that professional archaeologists have become familiar with documenting in 3D, and indeed a great many can be considered proficient practitioners. The availability of high-quality models from both institutional and private model collections demonstrates the presence of a multitude of well-trained and talented model-makers. Clearly, the novelty has come and gone, but an ongoing fascination with digital replicas and replication remains, in the form of a certain excitement. That said, the potential of the data that has already been generated has still not been fully exploited. The aforementioned experienced users have both the know-how and the acumen to apply their knowledge effectively, producing 3D content that exhibits nuances that can be considered the product of expert choices rather than automated results. The focus of the discussion is now shifting to new and different agendas. We can make models in site-based archaeology that can be easily circulated and that have archival and analytic potential, but the pressing question is how we can better use them and integrate them into existing, or even new, study routines or workflows, and in particular into artefact study. In seeking to solve the problem of how to effectively manage collected data, we are increasingly asking why we should generate 3D documentation in the first place and, more to the point, how we can more effectively use this to encourage the dissemination of knowledge to a broader audience, including beyond academia. For example, in research funded by the Irish Research Council and published in a dedicated volume of the journal Open Archaeology, 3D practitioners explored how models of material culture can create dialogue within and beyond the field, as well as encouraging new ways of thinking about materiality itself within the discipline (Opgenhaffen et al. 2018).
The question posed was: how can digital versions of real things encourage novel ways of analysing, understanding and disseminating these objects that go beyond conventional methods?

3 The Aggregation of Data

3.1 Introduction

This section will identify general steps and key decision-making points in the workflow, in order to reveal the variety inherent in the data structure over the course of the production process. In brief, both the core data and the final 3D model can be reinterpreted multiple times, for various purposes, with each stage generating new sets of information to incorporate into the storage scheme and the production pipeline. Every decision junction reported here is then multiplied by the number of objects that are to be documented in 3D.

At present, there is a massive user community behind almost every commercial and non-commercial solution for 3D scanning, be it hardware or software. Photogrammetry and affordable 3D scanning have been embraced in many industries. Some of these have interests in common with the cultural-heritage community, and archaeology in particular. More importantly, through commercial interactions, some will be better networked with software companies, enabling them to influence the direction of future development, the customisation of products on the market and the emergence of the standards used for the collection of 3D information.

3.2 Main Contributors

We use the term ‘dedicated’ to refer to all of the many prefabricated, commercially available 3D surface-scanning solutions, and we use ‘image-based scanning’, ‘image-based method’ or ‘photogrammetry’ for custom-made scanning rigs, which commonly rely on the use of consumer-grade cameras. We are, of course, cognisant that there is some overlap between these first two categories. Finally, we use ‘volumetric scanning’ when speaking of CT and micro-CT scanning. Although we recognise that archaeological 3D content can also be created using other instruments and techniques (e.g. contact 3D scanning or manual modelling), we have chosen to omit these from consideration here due to their relatively limited presence in comparison to the other solutions.

Although dedicated systems are usually made to service particular industry needs, including scanning the built or natural environment, there is frequently an overlap between potential uses. A scanner mainly designed to document objects may also serve, to a limited extent, to scan sections of the environment. Likewise, a scanner for the environment may be used to scan objects, generally larger ones. For present purposes, both will initially generate datasets that require some level of postprocessing and may impose the use of proprietary formats; indeed, the number of proprietary formats is likely as large as the number of vendors dealing with scanning equipment. The processing of the data might include conversion into a data structure that is more familiar to the operator, followed by the mechanical trimming of scans (and there may be not just one scan, but hundreds) and the statistical trimming of outliers and clutter. If the target was covered by more than one scan, the scans will need to be co-registered, typically using manually or automatically introduced reference points. The whole collection of scans will probably also need to be georeferenced using a state coordinate system or some other project-prescribed (locally devised) projection. The project parameters might necessitate a workflow that favours working with point or surface data, or with particular colour information assigned to the data. To achieve this, it is commonly necessary to switch between a range of software options in order to make each necessary modification. This, in turn, can create new types of information which need to be indexed and incorporated into the project scheme.
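Two of the steps just mentioned – the statistical trimming of outliers and co-registration from matched reference points – can be sketched in a few lines of NumPy. This is a minimal illustration under our own assumptions, not any vendor’s implementation; the function names and parameter defaults are ours:

```python
import numpy as np

def trim_outliers(points, k=8, std_ratio=2.0):
    """Statistical outlier trimming: drop points whose mean distance to
    their k nearest neighbours is more than std_ratio standard deviations
    above the cloud-wide average (a common clutter-removal heuristic)."""
    diff = points[:, None, :] - points[None, :, :]   # pairwise offsets
    dist = np.linalg.norm(diff, axis=2)              # N x N distance matrix
    knn = np.sort(dist, axis=1)[:, 1:k + 1]          # k nearest, skipping self
    mean_d = knn.mean(axis=1)
    keep = mean_d <= mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]

def register(src_pts, dst_pts):
    """Rigid co-registration from matched reference points (Kabsch method):
    returns rotation R and translation t such that R @ src + t ~= dst."""
    sc, dc = src_pts.mean(axis=0), dst_pts.mean(axis=0)
    H = (src_pts - sc).T @ (dst_pts - dc)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                         # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t
```

The same closed-form alignment also underlies georeferencing against surveyed targets: the ‘destination’ points are simply target coordinates in the project or state coordinate system.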

The image-based methods, recognised for their low barriers to entry, depend on the collected imagery, be it a predesigned acquisition or a haphazardly collected/crowd-sourced one. A key case study for this latter approach in the cultural-heritage sector was the 3D reconstruction of the Temple of Bel and the Arch of Triumph (Wahbeh et al. 2016; Wahbeh and Nebiker 2017), as well as other sites (Vincent et al. 2015; Vincent and Coughenour 2016; Stathopoulou et al. 2015) that were destroyed by militant groups or natural events. These methods may combine images taken from a variety of devices, in various formats (phone camera or even video) and resolutions. Consequently, there may be less control over output quality. The ideal situation is to work with images in RAW format, which allow a high degree of user intervention and have no compression factor.

For example, a complex lighting scenario can be effectively mitigated by flattening the contrast between brightly illuminated and shadowed sections of the scene or object, in order to colour-correct and enhance the image. This is the author’s (JP) preferred means of quality control. It requires processing images in RAW format using software capable of converting the processed image into a consumable format, such as a lossless TIFF or a compressed JPEG. In the next stage, a researcher must choose which program to use. What is rarely acknowledged is that while image-based modelling is a passive method of scanning, the software is less passive: it creates its own dependencies during production – cached information used to facilitate the reconstruction or simply to speed up different phases of work with the images. These branch out beyond the scope and location of the data being worked on, requiring management to prevent clutter. Moreover, specific segments of the operation can be saved for posterity or exchanged between different programs. For example, solved camera networks (positions of the camera in space during data acquisition), depth maps and image masks can be transferred from one operation to another, possibly with different spatial extents (data chunking). As with the data generated using dedicated systems, object/space geometry can be provided as a point cloud or surface, as a coded raster or as something else entirely, and can undergo further work.

Volumetric imaging methods, such as CT and micro-CT, are not specifically tailored to creating surface models, although this is possible. Both medical CT and micro-CT produce a set of 2D images, which represent the basis for the generated models and their analytic use. Materials-science-focused micro-CT will typically generate scans with a higher spatial resolution than medical CT systems, which are designed for complete/partial body scans. The primary application of CT data has stimulated the utilisation of standardised formats, such as DICOM, to ensure interoperability (Kahn et al. 2007), although other image options may be available. The image sequences produced are corrected for misalignment resulting from the heating-up of the irradiation source, from stage and sample movement and, in multipart scanning, from cases where the object exceeds the field of view. Post-correction, the grayscale projections are converted into black-and-white bitmap sections, which are used to reconstruct a surface model (PLY or, more commonly, STL). The raw model does not contain any colour information and requires extensive clean-up, especially at the contact areas between the object and the holder/support platform/wax stabiliser (valid for both micro and medical CT scanners).
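The greyscale-to-binary step can be illustrated with a short sketch. We use Otsu’s method here as one common automatic threshold choice; dedicated CT packages offer far more sophisticated segmentation, so this is a stand-in with function names of our own:

```python
import numpy as np

def otsu_threshold(volume, bins=256):
    """Pick the grey value that maximises between-class variance (Otsu's
    method), a common automatic choice for separating material from air
    in reconstructed CT sections."""
    hist, edges = np.histogram(volume, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                        # probability of the 'dark' class
    w1 = 1.0 - w0                            # probability of the 'bright' class
    mu0 = np.cumsum(p * centers)             # unnormalised cumulative mean
    mu_t = mu0[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu0) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0     # ignore empty-class splits
    return centers[np.argmax(between)]

def binarise(volume):
    """Convert a stack of greyscale CT sections into the black-and-white
    bitmap stack from which a surface model can then be reconstructed."""
    return volume >= otsu_threshold(volume)
```

The resulting boolean stack is exactly the ‘black-and-white bitmap sections’ described above; surface extraction (e.g. marching cubes) would then run on this stack.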

3.3 Deliverables

We would like to consider a case involving a cultural-heritage office’s response to the workflow and documentation processes in the production of image-based models. In 2017, the section for the digitalisation of cultural heritage and modern creativity at the Ministry of Culture and Information of the Republic of Serbia produced ‘Guidelines for the Digitalisation of the Cultural Property of the Republic of Serbia’ (Ministarstvo kulture i informisanja 2017), which contains a table (Table 2.1) listing the properties of the data collected during the process of ‘Digitalization of 3D objects by photography’. While the document is not legally binding, as the title indicates, it is intended to provide best-practice guidelines. It does not engage with the management of data generated using a dedicated 3D surface or volumetric system. Rather, its core focus is on image-based scanning, no doubt a result of the perception that a lower starting investment is needed for image-based modelling than for more custom-built solutions. This focus is also, in part, a product of the greater degree of engagement between museum curators and the practitioners who regularly use this approach, itself a consequence of the combined ease of access and affordability of image-based methods.

Table 2.1 Table of data properties and structures, as suggested by the MoC-RS ‘Guidelines for the Digitalisation of Cultural Property’

While the definitions in this table are incomplete and are premised on a somewhat non-organic flow of image-based reconstruction, the importance of standardising 3D model acquisition is recognised. Furthermore, two important classifications of digital products are identified – a ‘master’ and an ‘operative’ copy – a division that had already been recognised in the context of archaeological fieldwork (Apollonio et al. 2012). The master copy may be loosely defined as containing the highest level of detail that can be achieved with respect to image resolution/GSD, tonal range and reconstruction parameters. However, the table only deals with the building blocks of the data, i.e. the images. It does not touch upon any of the other items that might be regulated, such as the technical limits of image-based methods or the post-acquisition workflow. The second product, the operative copy that would be shared and published on the web, is less clearly defined. Here, it is assumed that a second dataset must be produced, with severely truncated image attributes, in order to make a lower-tier, but more manageable, 3D model. In this way, the bottleneck introduced by the hardware requirements for the effective manipulation of complex 3D models (‘complex’ here refers to highly detailed renditions) is at least partially removed. It is worth noting that when it comes to rendering 3D content – e.g. objects, interiors/exteriors and landscapes – the gaming industry has tackled the problem of the ‘budget’ of a 3D model, in the sense of the size and complexity that is optimal for successful handling. The most recent example is Unreal Engine 5 (Unreal Engine 2020), which boasts the capacity to store and display immensely detailed geometry with no need for extensive optimisation. Gaming-engine solutions are increasingly attracting the attention of archaeological practice (Rua and Alvito 2011; Eve 2012; Challice and Kincey 2013; Smith et al. 2019).

The distinction between master and operative copies is relevant to all other means of 3D content creation. The process can be framed in terms of ‘pre-digitalised’ and ‘post-digitalised’ versions of the object, with the 3D scan as the intermediary between them. Any post-editing involving changes in size or the correction of geometry errors, however necessary, represents a step away from the object and its original state. The digital model will henceforth be a somewhat distorted mirror of the original, and greater optimisation takes the 3D model even further away. Because of this difference, after an object-specific threshold is reached, it becomes difficult to reconnect the copy with the original object, unless the general form is conserved. However, the optimised model is easier to manipulate and to combine with other models in the same working space, to introduce into 3D editing and rendering software or to prepare for additive manufacture, thus prolonging the life of the digital copy.

This data will be most commonly conveyed in the form of a surface model and, more often than not, a photorealistic rendition of the object. The intended use of the digital version and the technical limits of the digitising solution will determine the parameters of geometric/texture detail of the ‘master’ copy. The transition from master to operative copy will require further levels of engagement and a number of new iterations.
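The step from master to operative copy is, at heart, a controlled loss of detail. As a hedged illustration of the principle (not the Guidelines’ procedure, and far cruder than the quadric decimation most mesh tools use), vertex clustering reduces a mesh by merging all vertices that fall in the same grid cell:

```python
import numpy as np

def cluster_decimate(vertices, faces, cell_size):
    """Vertex-clustering simplification: snap vertices to a regular grid,
    merge the vertices sharing a cell, and drop faces that collapse.
    A crude but fast way to derive a lighter 'operative' mesh from a
    heavy 'master' one; detail below cell_size is deliberately lost."""
    keys = np.floor(vertices / cell_size).astype(np.int64)
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    # one representative position per cell: the mean of its member vertices
    counts = np.bincount(inverse)
    new_verts = np.zeros((len(uniq), 3))
    for dim in range(3):
        new_verts[:, dim] = np.bincount(inverse, weights=vertices[:, dim]) / counts
    new_faces = inverse[faces]
    # discard degenerate faces whose corners merged into the same cell
    ok = (new_faces[:, 0] != new_faces[:, 1]) & \
         (new_faces[:, 1] != new_faces[:, 2]) & \
         (new_faces[:, 0] != new_faces[:, 2])
    return new_verts, new_faces[ok]
```

The choice of `cell_size` is exactly the ‘budget’ decision discussed above: a larger cell yields a lighter model that sits further from the master copy.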

4 Data Management

It is frequently the case that tens of gigabytes of data in image format are required to make a model, and the master model is itself often a sizeable product. When considering scanning collections, data curation and management – including substantial storage solutions – becomes an issue for workflow planning.

By this point, it should be clear that there is a whole world of relationships between hardware, methods, project goals and outputs that should be considered when investing in 3D digitalisation. Each of the approaches creates its own file structure, heterogeneous across the selection, and each has a track record of producing repetitive and perishable deliverables: the researcher is obliged to climb a ladder of processing steps, each with essential characteristics that may have no place once that vantage point has been superseded. A key consideration concerns what should be saved. There is a temptation to think that responsible curation requires us to save every piece of data generated, not only so that it is easy to access an old project and review all the processing decisions if something goes wrong in the process of reconstruction, but also so that, as technology develops, issues of accuracy can be revisited. This is well illustrated by the practice of testing ‘old’ datasets against updated versions or entirely different program solutions (Marqués 2020). Furthermore, certain objects are inherently fragile, meaning that the data used in model creation may be reutilised by other parties in the future for similar or different ends. If data from all the different stages are archived, any researcher can easily go back to a particular stage and rerun the process from that point onwards, or analyse the errors and subsequently correct them, potentially with additional image editing or the introduction of additional image information. However, this comes at a hefty price, in terms of the exponentially growing intricacy of the storage strategy.

There is a sort of doomsday-preparedness logic to holding on to all the data produced in relation to the 3D digitalisation process. Creating 3D content using archaeological collections or contexts is primarily complementary to, or at least closely related to, other research objectives connected to the study of particular objects and a desire to learn more about past peoples through an object (Molloy and Milić 2018). While one of the objectives of digital models is to create objects of dissemination, the archiving principles that lie behind them are not governed by any national or international policies that we are aware of and, as a result, the digital ‘artefact’ enters broad circulation in isolation from the source data used to create it. Since the workflow is object > data > model, the model is, in most cases, operating at a remove from the object, due to intervening decisions, and the means for critiquing this relationship or the fidelity of the model are restricted. In this sense, there is a need for a sustainable archival doctrine dictating that access be provided not just to the source files, but also to the intermediate steps and the final master model, along with all of its derivatives. This is an important consideration with regard to data and workflow transparency, because, at a bare minimum, the relationship between the object, the source files and the model should be archived, although arguably we would also need access to the ‘thought process’ which formed the model (Grimaud and Cassen 2019). Data-source specifications, autogenerated reports, software setups and the like are milestones on the path from acquisition to the 3D product. These milestones might be changed and improved upon, but understanding how one got from the object to the digital copy, which is once, twice or many times removed from the original, is paramount. Nonetheless, even when an optimised approach has been adopted, some friction can occur.
Dedicated digital repositories for social-science research at UCD (University College Dublin) that were contacted by the authors (BM) were unwilling to archive the data and models, due to their large size. As a result, they are stored at the university on hard drives, a solution not considered sustainable, even when shared with the public (Breaking the mould 2015).
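One low-tech step towards the archival doctrine described above is a machine-readable manifest that fixes the relationship between source files, intermediate steps and the master model. The sketch below is only illustrative; the per-stage folder layout (‘raw’, ‘master’, etc.) and function names are assumptions of our own:

```python
import hashlib
import json
from pathlib import Path

def build_manifest(project_root, stages):
    """Walk the pipeline folders (e.g. 'raw', 'processed', 'master',
    'derivatives') and record, per stage, every file with its size and
    SHA-256 checksum, so the object > data > model chain stays auditable."""
    root = Path(project_root)
    manifest = {}
    for stage in stages:
        entries = []
        for path in sorted((root / stage).rglob("*")):
            if path.is_file():
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                entries.append({
                    "file": str(path.relative_to(root)),
                    "bytes": path.stat().st_size,
                    "sha256": digest,
                })
        manifest[stage] = entries
    return manifest

def save_manifest(project_root, stages, name="manifest.json"):
    """Write the manifest next to the data, so it travels with any copy."""
    manifest = build_manifest(project_root, stages)
    (Path(project_root) / name).write_text(json.dumps(manifest, indent=2))
    return manifest
```

Checksums make silent corruption detectable decades later, and the stage structure records, at least coarsely, which processing rung each file belongs to.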

5 The Reasons for Creating 3D Models of Objects in Archaeology

The utility of 3D models in archaeological research is firmly connected to the issue of representation in archaeological science, in the sense of the way in which we choose to show the mundane and exceptional in our collections. The shortcomings of 3D models have already been enumerated. One issue is that they provide a single viewpoint, in contrast to schematic illustrations, which are capable of conveying or emphasising features of interest to a given narrative in a publication (Carlson 2014). In this sense, they are limited in their capacity to convey important details of an object without a supporting narrative or otherwise externally introduced enhancement. They are, moreover, more difficult to browse than images in a collection, be it a book or a PDF (Molloy and Milić 2018). There are also limitations in the prioritisation of the visual features and an inability to engage with the artefact using the haptic senses, i.e. the physical experience of contact (Dolfini and Collins 2018). Finally, there is also the loss of nuances of texture, taste and smell (Eve 2018).

That said, a digital rendition or a printed copy of this rendition provides the user with novel forms of engagement. In the case of objects shielded from the public by glass or held in storage in museums, they provide a substitute for direct contact. The viewer can choose how to engage with the object on their own terms and in their own time, unencumbered by space or other viewers. At the time of writing, the global Covid-19 pandemic has resulted in the unprecedented and – until a year ago – fully unexpected closure of museums around the world. Social and academic engagements have, in turn, ushered in a responsive, but also socially (if not technically) novel reliance on digital means of communication and interaction. From the classroom to the exhibition hall, conventional means of engaging with and experiencing the world have changed. As a response to a short-term scenario, many have observed how total immersion in remote interactions has made them rethink material engagements, including accessing cultural heritage in corpore. The Covid-19 lockdown responses will likely result in stories or published papers that will elaborate further on the potential use of 3D modelling for facilitating access to heritage spaces and places. When it comes to scientific research, models clearly boost access and engagement, with a resolution suited to several research applications. This said, the fidelity/authenticity conflict mentioned above means there is a mismatch between these new uses and the established practice of examining materials.

It may be fair to say that, in the way knowledge of archaeological materials is combined with the technical steps of 3D modelling archaeological finds, an element of artisanal skill can be recognised. This relates to preferred workflows, tips and hacks, but is not limited to them. It is a reflection of important aspects of the production of digital replicas, namely the intervention of the skilled practitioner who expedites the task (Dolfini and Collins 2018). It is through an understanding of the material, artefact or scenery, as well as of what a model is intended to represent, that the relevant approach is put in place. This requires observation of the qualities of the object and an evaluation of the solutions at hand. For example, modelling a shiny surface, such as obsidian or glass, is a complex challenge, and there are a range of technical solutions, from modification of the object (e.g. coating it) to lighting the object in a specific way to filter choices on the camera lens (Hallot and Gil 2019). It is important that experienced archaeological practitioners continue to provide guidance on the varieties of solutions, pipelines and outputs that deliver effective results. We see the DCHE Expert Group report (European Commission 2020), which contains tips and principles, as an important resource in this regard. The way forward appears not to involve the modeller serving as an outsourced independent technician or a solitary asset, but rather the knowledge itself being integrated into research teams from the outset. In other words, it should be done in the same way as ordinary artefact processing, with a close and comprehensive understanding of the flow of procedures and research objectives.

Let us present a case study to illustrate the potential of 3D models of artefacts in archaeology to have a post-creation afterlife in the analytical sense. In Bronze Age Europe, stone moulds were used to cast liquid bronze in order to create objects. While ceramic moulds were also in use, these are not our focus here. These moulds would go out of circulation for a variety of reasons, perhaps most obviously when they were physically broken. However, hoard deposits of stone moulds have been found in various parts of Europe which indicate that they were intentionally, not accidentally, removed from circulation for the purpose of deposition. This deliberate activity is confirmed by the frequent inclusion of both halves of two-piece moulds. Although these moulds are not broken, we may question their expected lifecycle. Each time a casting run is conducted, there is micro-wear and tear inside the mould as a result not only of thermodynamic forces, but also of physical abrasion and degradation when the bronze object is removed from the mould. Decorative details, such as raised ribs (which are manifested in moulds as negative spaces/incisions), would eventually lose their fidelity following numerous casts. Moreover, features such as the loops in looped socketed axes or spearheads require spurs in the mould to create the hole corresponding to the loop. Should these be degraded or truncated, they would no longer fulfil their function. A 3D model of a mould, therefore, defines the character of that mould on the last day it was used by a smith. With a 3D model of a two-piece mould, it is possible to virtually fill in the mould, creating a positive object from the negative space. In the past, this would have been accomplished using liquid bronze, but the resolution of the digitally recreated finished object will be higher, rather than lower, because there will be no microdamage or incomplete filling of spaces.
To explore this possibility, we completed a 3D model of a mould from the National Museum of Ireland and created a positive version of it. It was immediately evident that the spaces for the decorative ribs had been degraded, and the virtual model had a relatively poor definition of these features (Fig. 2.1). From a craft perspective, it was evident that this mould was at the end of its use-life and required either rejuvenation or discard. The choice to discard this particular mould set was, therefore, arguably made because it had reached a particular stage in its lifecycle, as revealed by the 3D model. Of course, this same objective could have been achieved using putty in the real mould in the museum, but this would carry the risk of staining or otherwise tarnishing the original object. Even leaving this aside, our original reason for modelling these moulds was to compare them with other moulds, and so this re-use of the digital model reveals a potential ‘second life’, beyond the vision of the original researchers/modellers. Furthermore, a digital axe that has been created can be measured and its weight approximated on the basis of its volume. A cross-section can be extrapolated from any perspective, and an outline suited to 2D geometric morphometric analysis can also be extracted. This points to a range of uses for the 3D model, which emerge only from the study of the digital version of an object and which are less easily undertaken with the original.
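The weight-from-volume step described above can be sketched in a few lines of code. The following is a minimal illustration under stated assumptions, not part of the original study: the unit cube stands in for an exported watertight mesh of the reconstructed axe, and the density figure (8,700 kg/m³) is an assumed typical value for tin bronze rather than a measurement.

```python
# Sketch: approximating the weight of a digitally cast object from its
# closed triangle mesh, via the signed-tetrahedron (divergence theorem)
# method. The cube mesh and the density value are illustrative assumptions.

def mesh_volume(vertices, faces):
    """Volume of a closed, consistently wound triangle mesh."""
    total = 0.0
    for i, j, k in faces:
        x1, y1, z1 = vertices[i]
        x2, y2, z2 = vertices[j]
        x3, y3, z3 = vertices[k]
        # Signed volume of the tetrahedron (origin, v1, v2, v3).
        total += (x1 * (y2 * z3 - y3 * z2)
                  - x2 * (y1 * z3 - y3 * z1)
                  + x3 * (y1 * z2 - y2 * z1)) / 6.0
    return abs(total)

# Toy stand-in for an exported mesh: a unit cube (1 x 1 x 1, in metres).
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
         (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
faces = [(0, 2, 1), (0, 3, 2), (4, 5, 6), (4, 6, 7),
         (0, 1, 5), (0, 5, 4), (1, 2, 6), (1, 6, 5),
         (2, 3, 7), (2, 7, 6), (3, 0, 4), (3, 4, 7)]

BRONZE_DENSITY = 8700.0  # kg/m^3 -- assumed typical figure for tin bronze

volume = mesh_volume(verts, faces)  # m^3
weight = volume * BRONZE_DENSITY    # kg
```

For real models, the same calculation is available in mesh-processing software (Blender, for example, can report mesh volume), but the underlying principle is this simple summation over the triangles of a watertight mesh.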

Fig. 2.1
3 photos. 1. A closed stone mold and a metallic cast resembling an axe head. 2. A metallic cast resembling an axe head. It tapers towards the bottom. 3. An open stone mold with the metallic cast resembling an axe head inside.

Reconstructed positive cast of the Irish Bronze Age mould from the National Museum of Ireland, visualised in Blender. (Images acquired by Barry Molloy from the National Museum of Ireland, 3D model produced by Jugoslav Pendic, Blender 2021)

6 Disseminating Knowledge

Given how long 3D modelling has been widely accessible, the lukewarm response to its use in the post-scanning stage should caution us that the utility of 3D models may remain niche for the foreseeable future. That said, this is no excuse for those working in this area not to continue developing best practices and keeping abreast of evolving industry standards, as regards both the technical and the curation aspects of model-making. The development of effective data-management plans should be integral to this, as should efforts to ensure, so far as is practicable, that FAIR (Findable, Accessible, Interoperable and Reusable) principles are implemented, if 3D modelling is truly to be seen as opening up access to research objects. Given the copyright status of objects in some museums or collaborative dissemination agreements, a defined strategy for dissemination (Hostettler et al., this book, Chap. 11) should be included in project designs. This is not intended as a lofty claim to signal virtue in research, but rather reflects a need to update long-established protocols for reuse and sharing, which may otherwise leave both researchers and institutions exposed to legal or moral complications. As an agenda, we personally believe that the notion of 3D models as a ‘democratisation’ of cultural heritage is a good thing, but that this must exist within a regulatory environment that protects all involved. The unauthorised modelling and dissemination of the bust of Nefertiti in a Berlin museum (Voon 2016) is a case in point. We acknowledge that ownership of cultural property is a complex issue, but also affirm that, as professional practitioners representing our field, we bear responsibility for the reuse of our models.

7 Conclusion

The fundamental issue concerns not merely the technical aspects of how to scan something using 3D technologies, but also how to responsibly manage the output. Perhaps the first step has already been made by acknowledging that 3D scanning is a complementary type of artefact study, rather than a standalone solution that can replace previous systems for communicating archaeological objects (Horn et al. 2018; Molloy and Milić 2018). In this regard, and to conclude this paper, we identify the following points as salient:

Documenting the process of 3D content creation should be part of the analytic programme, in particular documentation of how metric data has been accurately encoded. This is needed if other users are to have confidence in the data represented by the model, which is necessary for its reuse. It is even more relevant when multiple 3D objects are used in a comparative research workflow assessing metric or morphological relationships between objects.
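As an illustration of the kind of documentation we have in mind, the sketch below records a single check of a known dimension against the same dimension measured on the model. The record structure, field names and figures are our own assumptions for the purpose of illustration, not an established metadata standard.

```python
# Minimal sketch of a machine-readable record documenting how the metric
# accuracy of a model was verified. All identifiers and figures are
# hypothetical examples.

import json

def metric_accuracy_record(model_id, reference_mm, measured_mm, method):
    """Log a check of one known dimension against the model's value."""
    deviation = measured_mm - reference_mm
    return {
        "model_id": model_id,
        "scaling_method": method,               # e.g. scale bar, total station
        "reference_dimension_mm": reference_mm,  # caliper measurement on object
        "model_dimension_mm": measured_mm,       # same dimension on the mesh
        "deviation_mm": round(deviation, 3),
        "deviation_pct": round(100.0 * deviation / reference_mm, 3),
    }

record = metric_accuracy_record(
    "example_mould_01", 182.4, 182.1, "scale bar in scene")
print(json.dumps(record, indent=2))
```

A handful of such checks, archived alongside the model, lets a later user judge whether the encoded scale is trustworthy enough for their comparative question.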

The reusability of the 3D content and its origin data is a huge gift from the method to the research community. A single documented piece of scenery or object branches out, incorporeally, into a list of alternative use-cases: they are given form and body at the researcher’s leisure. However, this raises an issue that can easily be identified even by a layman observing from outside the milieu of archaeological and cultural-heritage management: can ownership of the digital copy of a cultural monument or artefact be contested, and can the digital copy be protected? If we boil it down to the simplest variables, there is always a subject/item of cultural importance, with ownership claimed by the state (and probably by several other stakeholders as well), a purpose motivating the creation of digital content, a craftsperson (or craftspeople) who facilitated the process of production, and an audience it was intended for. Each of these variables can be made infinitely complex, but most of them can be regulated by rigorously considering whether the caretaking institution of the physical object wishes for exclusive rights over the product or over stages of its production. It may even wish to exploit the end outcome economically (Borissova 2017). One thing is, however, clear when it comes to 3D models of cultural artefacts – the only certain way to prevent their unauthorised multiplication or misuse is not to share them at all (we consider sharing in closed groups or through a paywall system to be a variation of the ‘not-sharing’ option).

The complexity of the workflow is currently an integral part of 3D content production in archaeology. It involves a shifting network of relationships between various industries and hardware and software development. We do not, however, consider it a battleground, but rather a challenging environment that requires preparedness, regardless of whether the 3D scanning is outsourced or incorporated into the agenda of the team/project/institution. A well-networked community of professionals – the aforementioned artisans – provides a strong foundation for moving forward on important issues related to enhancing our capacity to assess the tasks at hand, the resources available and the appropriate approaches to, and quality of, 3D archaeological reconstructions.

The exploration of the afterlife of 3D models and of modes of reuse is key to establishing a greater degree of synchronisation in relation to this mode of documenting singular artefacts or assemblages. We presented an example in which the outcome depended on a combination of skills and knowledge: how to make an accurate digital copy of the artefact via image-based methods, how to edit 3D content for viewing and processing in specialised software, and how to deliver data in order to explain the presence of a group of artefacts in a particular archaeological context across Europe. We feel that such endeavours, in which 3D models are placed at the epicentre of research questions as tools equal to any classical ones, are the direction in which the use of 3D content to study archaeological artefacts ought to be developed.