
1 Introduction

This paper summarises the results of the Study on Quality in 3D Digitisation of Tangible Cultural Heritage (VIGIE 2020/654), presented in full in the extensive Final Report. The work was based on the combined efforts of the in-house study team at Cyprus University of Technology (CUT) and a group of nine (9) sub-contracted collaborators, together with individual external experts whose research inputs are included in the results.

The study was organised according to a structure of five (5) main process-oriented tasks together with separate project management and dissemination tasks. A mid-term workshop was organised to provide expert validation of the interim findings. The Final Report combines the conclusions from the nineteen (19) planned Outputs (OU) of the study.

2 The Process of Digitising Movable and Immovable Tangible Cultural Heritage

According to UNESCO (Footnote 1), Tangible Cultural Heritage (CH) can be classified into the following two categories:

  • Movable CH, which ranges from photographs, books, manuscripts and paintings to metals, ceramics, glass, wood, leather, textiles, tapes, and various stone and composite materials.

  • Immovable CH, which consists of buildings, land, and other historically valuable items, typically with fixed foundations connected to the terrain, such as monuments and archaeological sites. In addition to castles, houses, mansions, and towers, it also includes churches, monasteries, rectories, townhouses and palaces, rural folk architecture, technical and industrial monuments, theatres, museums, plague columns and shrines. In this category we can also find underwater (shipwrecks, underwater ruins and cities) and cave sites.

Both categories contain objects that are often heterogeneous (made of different materials using diverse techniques), with inherently complex geometry and surface texture. For example, structures such as monuments with sculptural or pictorial decoration, or jewellery, one of the oldest types of archaeological artefact, have complex forms.

The digital recording of CH is an essential step in understanding and conserving the values of the memory of the past, creating an exact digital record for the future, and providing a means to educate, build skills, and communicate the knowledge and value of tangible objects to society. Therefore, the primary goal of recording is to know and understand the values and significance of the CH object - historical, scientific, aesthetic, social, and economic. In addition, the digital representation of CH objects, structures, and environments is essential for practical analysis, conservation, and interpretation. Selecting the ideal technology and workflow for the 3D digitisation of tangible CH objects is a complicated, very challenging procedure and one that requires careful consideration.

However, there is no internationally accepted framework or methodology for specifying the quality of detail and accuracy in CH digitisation. Documentation projects are typically determined on a case-by-case basis using the many available methods and often require significant multi- and interdisciplinary cooperation. An object needs to be carefully examined and inspected in order to define the best available digitisation options.

Therefore, the recording of tangible CH requires a thorough understanding of the stakeholder requirements, the necessary technical specifications, the existing environmental conditions, and the intended use of the final 3D model. The selection of the optimal human resources and digitisation technology is usually related to the technical specifications, size, complexity, material, texture, location, accessibility, IPR and accuracy required. For large surface areas, such as monument sites or architectural mapping, a combination of regular aerial and topographic surveys, laser scanning and photogrammetric techniques is often used. In addition to the cost of hardware and associated software, a considerable investment in knowledgeable staff and time dedicated to specialised training has to be taken into account.

Accuracy and Precision

Accuracy refers to how close a measurement is to the true or correct value, whereas precision is how close the repeated measurements are to each other. Measurements can be both accurate and precise, accurate but not precise, precise but not accurate, or neither. A reliable survey instrument is consistent; a valid one is accurate.
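To make the distinction concrete, here is a minimal Python sketch using entirely hypothetical repeated measurements of a 100 mm reference length; it reports accuracy as the bias of the mean with respect to the reference and precision as the spread (standard deviation) of the repeats.

```python
import statistics

# Hypothetical repeated measurements of a 100.0 mm reference length (values are illustrative).
reference_mm = 100.0
measurements_mm = [100.4, 100.5, 100.3, 100.6, 100.4]

mean_mm = statistics.mean(measurements_mm)

# Accuracy: how close the mean measurement is to the true (reference) value.
accuracy_error_mm = mean_mm - reference_mm

# Precision: how tightly the repeated measurements cluster (sample standard deviation).
precision_mm = statistics.stdev(measurements_mm)

print(f"mean = {mean_mm:.2f} mm")
print(f"accuracy error (bias) = {accuracy_error_mm:+.2f} mm")  # small bias -> accurate
print(f"precision (std dev)   = {precision_mm:.2f} mm")        # small spread -> precise
```

In this illustrative case the measurements are precise (small spread) but not particularly accurate (systematic bias of about +0.44 mm), which matches the "precise but not accurate" combination described above.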

Planning the Process of Digitisation

The 3D digitisation of tangible CH is an inherently complex multi-stage process. Project planning should accurately address and coherently develop a documentation dataset, while keeping in mind project constraints including, but not limited to, environmental and safety conditions, available equipment, budget, and timescale.

Documentation Methods

Measurement methods for geometric recording range from conventional, simple topometric methods for partially controlled or uncontrolled surveys to elaborate contemporary surveying and photogrammetric methods for completely controlled surveys. However, there is no generally accepted standard for specifying the detail and accuracy requirements for the different geometric recordings of tangible objects. Most answers to such challenges are primarily based, at the moment, on cost specifications and time limitations defined by stakeholders.

Active and Passive Recording

For the 3D geometric documentation of movable and immovable assets, the range of object sizes can start from a few millimetres and go up to a couple of thousand metres, while the number of acquired points has practically no limit. These documentation methods may be grouped in several ways. Firstly, they can be separated into those involving light recording and those which do not. Secondly, since a form of radiating energy is always used for gathering geometrical and visual information, a first distinction can be made between penetrating and non-penetrating radiation systems.

In the penetrating category, systems based on X-ray devices allow the capture of inaccessible internal structures and surfaces of small objects. Similar X-ray devices are used in areas such as medicine, mechanical engineering, airport security and detailed police investigations. At a larger scale, cosmic rays are being used experimentally to attempt 3D scanning of the interiors of Maya monuments and the Egyptian pyramids. For non-penetrating 3D recording, the electromagnetic energy covers the visible and infrared spectrum. The latter may in fact allow slight penetration beneath the illuminated surface depending on the wavelength used, ranging from fractions of a millimetre for Near InfraRed (NIR) to several millimetres for Far InfraRed (FIR), used in so-called TeraHertz imaging. However, for 3D applications any slight penetration into the material is usually neglected, which is why light sources for 3D never go beyond NIR. Within non-penetrating devices, a further distinction has to be made between active and passive systems: active recording methods use directed radiant energy to mark a point in space, whereas passive methods record the radiation reflected from a surface.

Indoor and Uncontrolled Acquisition

Indoor image acquisition is more often used for objects or artefacts in museums or collections (which are not allowed to be moved), typically of small or medium size, and presents several challenges. The size, weight, special illumination conditions, materials, properties of the artefacts, and their interior structure directly influence the documentation complexity.

Uncontrolled acquisition is typically used for outdoor scenes or any other environment where the conditions (shadows, weather, etc.) are not under complete control. Large-scale objects such as buildings, excavations, or archaeological sites, which still carry high accuracy demands (mm-cm), fall into this category.

The report describes in greater depth other factors and approaches concerning outdoor acquisition, which require consideration when planning a documentation project. These include video frame extraction, object/site recording, and limitations on accessibility due to skills requirements, weather and ambient conditions, operating hours and lighting conditions, site modification and distance optics.

Moreover, it is essential to consider the state of condition and remedy options. This includes geometrical symmetry, together with factors affecting surface reflectivity, such as resolution, distance from the sensor to the object/image scale, angle of incidence, safety regulations, focal point/spot size, field of view and precision.

3 Defining Complexity

The term complexity can be described as “the state or quality of being intricate or complicated” or “the state of having many parts and being difficult to understand”. Consequently, complexity characterises a system’s behaviour or an object whose components or elements interact in multiple ways and sometimes follow local rules, meaning there is no reasonable higher instruction to define the various possible interactions. The report explores in greater depth the meanings of complexity and their application in CH.

Defining complexity is essential because the level of complexity:

  • determines, to a high degree, the technology to be used for a documentation project,

  • offers the often-missing link between quality and the purpose of use,

  • imposes constraints on both the technology and the eventual intended use of the data,

  • connects the stakeholder’s requirements, quality, accuracy, expertise during the digitisation – and completeness if it expresses parameters like object size and random requirements.

Determining Complexity

Complexity is inherent in all tangible CH objects and is a critical consideration when planning a geometric documentation project. It refers to the geometric, surface/texture, material composition, and scale/application variants. In addition, a key dimension of complexity also resides in the stakeholder requirements, which may include the location, state of condition, the set-up of the data acquisition project, the experience of the multidisciplinary operators on site, and the fusion of multiple datasets from different devices and their users/specialists (equipment and data pre-processing) into one archive that can be visualised in an easily accessible and searchable way.

The main challenges in defining a workable end-user definition of complexity relate to establishing equipment needs and acquisition methodologies – and in client comprehension of additional costs in complex processes and post-processing.

An online survey aiming to establish the perceptions of the 3D digitisation community (944 responses received from 420 survey respondents) identified the following parameters as the top three factors for increasing complexity: a) Surface conditions; b) Site access; and c) Quality of scanned data.

Moreover, interviews during our work with 49 key stakeholders and skilled professionals in 3D digitisation showed that among the factors most frequently mentioned were the importance of the stakeholders’ requirements for 3D digital documentation frameworks, object conditions before/during the recording process, location and environmental conditions during digitisation, and the levels of expertise of the people involved.

The challenge is to manage all related activities that run simultaneously during the data acquisition phase to produce high-quality results without losing information. Optimal digitisation technologies are usually related to the desired technical specifications, size, state of condition, material, texture, location, accessibility, and required accuracy. These considerations are incorporated into the operational schema developed during the study. An object's material or materials add a further dimension of information and complexity. Representing these traits in 3D can pose significant challenges (Fig. 1).

Fig. 1. Information example indicating the degree of complexity – detailed breakdown (Complexity Radar Pie).

How is Complexity Connected to Technology?

It is a fact that some data capture technologies and recording methods are more suitable for specific applications than others (e.g., computed tomography for special object investigations in the lab). The selection of the data acquisition technology, such as hardware and software, is usually related to the desired stakeholder requirements, technical specifications, size, complexity, material, texture, accessibility, and required accuracy. Therefore, the report describes mainstream technologies used for the 3D documentation of CH tangible assets in terms of the degree of complexity.

Impact of Complexity on Quality

A comprehensive understanding of object complexity is crucial, as it has a high impact on various aspects of 3D digitisation. The use of the term in this field has remained vague, with no clear definition, only subjective methods of calculation, and no apparent connection to quality, purpose of use, or other imposed restrictions. In other words, there was a gap in the collective understanding of the data acquisition project and of object complexity as a decision support tool.

However, the complexity of 3D digitisation in CH as a value of its own cannot be a matter of subjective estimation. Still, it can be defined after the stakeholder requirements are determined, the project specifications are set, the object’s location and environmental conditions are known, and the object is defined. Any definition of object complexity should have the following characteristics:

  • It refers to both 3D data capture and data processing (point cloud/modelling),

  • It is calculated objectively,

  • It is estimated before the data acquisition phase,

  • It connects quality, technology, and the purpose of use,

  • It provides alerts and limits to recording and processing phases,

  • It offers a meaningful tool for planning both the data acquisition and the 3D modelling process.

The process requires a focus on the complexity of the object and the set-up of the 3D digitisation project. Beyond the object complexity, the emphasis of the digitisation process is on the complexity of the process itself.

The study’s online survey analysed the perception of complexity that experts have concerning the use of technologies. In their opinion, complexity is related to the degree and the kind of information they want to obtain, issues with software and budget, the challenges that the surface of a specific object presents, and the location of a monument.

Any definition of the complexity in 3D digitisation of CH assets should consider the following parameters:

  • The stakeholder’s requirements, including total budget and time duration,

  • The definition of the object and its detailed description,

  • The location of the object and the environmental conditions during the time of the documentation,

  • Multidisciplinary expertise available for the documentation,

  • The data acquisition equipment and software available,

  • The knowledge of the hardware and software available for the pre-processing of the scanned 2D data (images) and 3D data (3D point clouds).

Parameters of Complexity

The report illustrates the study’s research into the complexity and the potential issues that would affect the complexity of a digitisation process, including stakeholder’s requirements, object, project, team, environment, hardware/software, and pre-processing. Each item is subcategorised into five levels (Fig. 1).
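As an illustration only, and not the study's own operational schema, the following Python sketch represents the seven complexity parameters listed in the report, each rated on the five levels mentioned above, and aggregates them into a single indicative score; the equal weighting and the averaging rule are assumptions made for this sketch.

```python
from dataclasses import dataclass

# The seven complexity parameters named in the report, each rated on five levels
# (1 = lowest, 5 = highest). The aggregation rule below is hypothetical, for illustration only.
PARAMETERS = [
    "stakeholder_requirements", "object", "project", "team",
    "environment", "hardware_software", "pre_processing",
]

@dataclass
class ComplexityProfile:
    levels: dict  # parameter name -> level (1..5)

    def validate(self) -> None:
        for name in PARAMETERS:
            level = self.levels.get(name)
            if level is None or not 1 <= level <= 5:
                raise ValueError(f"{name} must be rated on a level from 1 to 5")

    def overall_score(self) -> float:
        """Return a simple average of the levels, normalised to 0..1 (hypothetical aggregation)."""
        self.validate()
        total = sum(self.levels[name] for name in PARAMETERS)
        return total / (5 * len(PARAMETERS))

profile = ComplexityProfile(levels={
    "stakeholder_requirements": 4, "object": 5, "project": 3, "team": 2,
    "environment": 4, "hardware_software": 3, "pre_processing": 3,
})
print(f"indicative complexity score: {profile.overall_score():.2f}")  # 0.69 for these levels
```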

4 Exemplification of Complexity

During the study, the set of parameters determining levels of complexity and related more broadly to quality were considered in the context of 43 cases (25 Immovable and 18 Movable). Annex 1 of the final report provides outline descriptions of each selected case.

This work has supported the development of a catalogue of data acquisition technologies and their output formats. Other contextual taxonomies have been developed for Movable and Immovable heritage, based on UNESCO’s World Heritage conventions and recommendations.

When building a taxonomy of complexity for Movable Heritage, we must consider each unique, object-specific element of the question in order to refine and define the material. An essential requirement for the holistic digitisation of an asset is to collect data regarding these factors and to represent them accurately. Key reference examples presented in the Final Report include:

Movable heritage:

  • The Antikythera Mechanism, National Archaeological Museum of Athens,

  • Untitled object: Museum of Contemporary Arts Thessaloniki, Greece,

  • Neolithic Figurine of Dispilio Lake Settlement, Greece.

Immovable heritage:

  • Asinou Church, Cyprus (World Heritage Site),

  • Cologne Cathedral, Germany (World Heritage Site),

  • Roussanou Monastery, Meteora Complex, Greece.

5 Parameters that Determine Quality

The terms ‘complexity’ and ‘quality’ are used without precise definitions, which presents a significant challenge since tangible CH is exceptionally diverse.

Quality may comprise different parameters, such as the degree of detail, the geometric accuracy of the 2D and 3D shape, the spectral, scale and texture properties, material properties and chemical composition, and structural health monitoring status. These parameters can be combined in the following categories: a) Geometry; b) Image; c) Material; and d) Structural Health Monitoring.

Quality parameters refer to different stages of the 3D digitisation process and vary depending on the type of tangible CH and the equipment and methodology used. The possible purposes or uses of the resulting 3D material also determine different combinations and levels of those parameters to identify the minimum level of quality that fits the definition.

From the study’s survey responses, the top three parameters of quality categorised as the most important by respondents to ensure quality in the digitisation process were: a) surface conditions, b) quality of images, and c) environmental conditions.

Quality is a fundamental component of 3D digitisation in CH, and ensuring it is an essential challenge since tangible CH structures, whether hand-made or natural, are remarkably different.

It is also essential to distinguish between data accuracy (as an acceptable margin of error), precision and resolution regarding the geometry. Accuracy refers to the closeness of a measured value to a standard or known value. Dimensional precision is a measurement of the repeatability, or consistency, of that measurement (Fig. 2). Quality parameters may also comprise the degree of detail, the geometric accuracy of the 3D shape, or the fidelity of the capture of colour/texture. The report puts all relevant parameters into three main categories (Geometry, Radiometry, and Completeness) and connects these parameters to Complexity, as discussed above.

Fig. 2. Information example indicating the degree of quality – detailed breakdown (Quality Radar Pie Chart).

6 Standards and Benchmarks

One conclusion of the study's 49 interviews with key professionals in this domain is that there are no standards for planning, organising, setting up and implementing a 3D data acquisition project. Some experts mentioned the need to distinguish between the standards available for project management and administration, for health and safety, for personnel access to the object/site and for the movement of objects, and the standards available for the data.

The report analyses the most commonly employed formats, along with a full discussion of the two focus areas, terrestrial laser scanning/3D modelling and photogrammetry/digital photography. It also elaborates on the distinctions to be made between proprietary and open-format data limitations (minimum or maximum), and on judging data correctness in the absence of international protocols for data quality assurance. An important observation is that formats evolve as users and developers identify and incorporate new functionalities.

7 Identification of Gaps, Additional Formats, Standards, Benchmarks, Methodologies, and Guidelines

There are no guidelines on the ways and minimum amounts of data to be collected or the quality to be achieved during data acquisition, which depends entirely on the stakeholder requirements. There appears to be little common understanding among international multidisciplinary teams regarding what 2D/3D digital data acquisition standards mean, or regarding obsolescence, which arises when new software does not provide backwards compatibility with older file formats. Open-source software communities may also withdraw support for older formats if these are no longer generally needed by the community. Obsolescence can also be accidental: both businesses and open-source communities can be led into erroneous practices for different reasons.

Digitisation can generate a considerable amount of original and post-production data. When defining a project, it is crucial to understand the stakeholder requirements about the various production file formats to avoid inconsistent deliverables and inoperable proprietary data sets. There are hundreds of different file formats; terrestrial laser scanners, for example, produce raw data in a variety of formats. De facto standard formats, such as TIFF or JPG, are seen as robust; however, these formats will ultimately be susceptible to upgrade issues and obsolescence. Furthermore, open-source formats can be seen as neutral and not reliant on business models for their development; however, they can also be seen as vulnerable to the susceptibilities of the communities that support them.

At the novice level, or for those with limited expertise, it cannot be ignored that different and more basic forms of guidance may be required to promote the skills that enable widespread 3D digitisation in Europe. Some of the key questions go beyond that, concerning the conceptual framework needed to address the use cases for digital dioramas, including adding depth to current 2D images and embedding one or more canvases within a 3D scene (e.g., multiple paintings or texts, or music/liturgy associated with a cathedral, temple, amphitheatre, or the interior of a suitable model). However, with growing user and institutional demands, technical developments, and examples of advanced research collection and integration of virtual resources (e.g., Sketchfab, Smithsonian3D, 3DHOP, Potree, ScanTheWorld, Clara.io, morphosource.org, exhibit.so, hubs.mozilla.com, sayduck.com, Europeana.eu, etc.), there is a pressing need for a technical specification to ensure interoperability and longer-term sustainability. Therefore, the plan for different multi- and interdisciplinary expert groups, such as the IIIF 3D Technical Specification Group and the CEN and ISO Technical Committees, is to continue a collaborative approach to clarifying and specifying interoperable frameworks for 3D data, including common ways to:

  • annotate 3D media of various types into a shared canvas space, with commentary,

  • combine 3D media with audio-visual content within a shared space,

  • specify the presentation (placement, orientation, and contextualization) of 3D media,

  • embed (extend) in 3D the time, material and story dimensions (as 4th, 5th and 6th dimensions).

8 Uncertainty

Recognising the challenges and lack of consensus on the expression of uncertainty in measurement, different organisations worldwide have collaborated with the world's highest authority in metrology, the Comité International des Poids et Mesures (CIPM), to develop a more workable definition. Uncertainty concerning complexity and quality in data acquisition is discussed in further detail in the report, in the context of expressing quality in 3D digitisation.
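As a minimal illustration of the CIPM/GUM approach to expressing uncertainty, the sketch below performs a Type A evaluation for hypothetical repeated distance measurements; the observation values and the coverage factor k = 2 are illustrative assumptions.

```python
import math
import statistics

# Hypothetical repeated measurements of the same distance (metres); values are illustrative.
observations_m = [12.402, 12.398, 12.405, 12.401, 12.399, 12.403]

n = len(observations_m)
mean_m = statistics.mean(observations_m)

# GUM Type A evaluation: standard uncertainty of the mean = s / sqrt(n).
sample_std_m = statistics.stdev(observations_m)
standard_uncertainty_m = sample_std_m / math.sqrt(n)

# Expanded uncertainty with coverage factor k = 2 (roughly 95 % coverage for a normal distribution).
k = 2
expanded_uncertainty_m = k * standard_uncertainty_m

print(f"result: {mean_m:.4f} m ± {expanded_uncertainty_m:.4f} m (k = {k})")
```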

9 Forecast Impact of Future Technological Advances

Expected advancements in 2D/3D data acquisition software, combined with artificial intelligence algorithms embedded in different devices, will make 3D digitisation easier, faster, more accurate, and more informative. The automatic compilation of different data types from various devices and manufacturers, and the extraction and recognition of geometrical features, materials and environmental issues, will create new challenges and impose greater demands. Development in this area will likely require new competences, specialised expertise and training. New standards, regulations and internationally accepted methodologies for data acquisition will be required.

Moreover, automatic compression and data transfer through 5G, 6G and high-bandwidth Internet connections from the field to the cloud will soon be in place to enhance archiving, real-time global use and long-term availability and preservation. Guidelines for the CH domain will be needed on future formats for data, metadata and paradata, ensuring interoperability and data longevity. Data analytics, blockchain, cloud and mobile computing, ontologies, the Internet of Things (IoT), aerial and terrestrial LiDAR, and machine learning are just a few of the technologies that have transformed the construction industry and will undoubtedly impact the CH sector very soon. The increased interest in Augmented/Virtual/Mixed Reality, UAVs, Artificial Intelligence/Machine Learning, and cloud and mobile computing will enable these new systems to play an indispensable role in the management, documentation, modelling, conservation, interpretation and protection of CH. Consequently, the development of these systems will have a direct impact on the CH industry (e.g., virtual museums, virtual sites, smart cities, 3D digital libraries, fabrication and eArchiving). The report explores the potential of these technologies in more detail, alongside that of open data, Heritage Building Information Modelling (HBIM) and the digital twin.

10 3D Digitisation Process Complexity

Object complexity cannot be estimated subjectively; it can be defined only after all measurements of the object have been conducted, which means that it is not useful for 3D digitisation planning and decision-making. Likewise, its neutrality to the intended use renders it impractical for choosing the best technology or setting up the technical specifications for 3D digitisation. Regardless of the definition applied, an indication of complexity can only be obtained after the 3D digitisation of a CH object has been completed: this would mean obtaining all surface detail, texture characteristics and accuracy metrics and then making a subjective guess at the object complexity. In practice, this would be a fruitless exercise. Therefore, it makes sense to reverse this thinking and start from the technical specifications, which are dictated by the purpose of the 3D digitisation activity in question. In accordance with this argumentation, there is a need to shift attention from “Object Complexity” to “Model Complexity”. This means that the focus is not on the complexity of the actual object (which is connected only to the data capture phase) but on the complexity of the produced model, which is connected to the entire process of data acquisition and processing. This may look like a conceptual compromise, but the alternatives are worse: in effect, one would have to choose between ignoring this factor or making subjective guesses (Fig. 3).

Fig. 3. Moving from Object Complexity to Process Complexity.

We therefore define process complexity as the degree to which a process is difficult to analyse, understand or explain. One way to analyse it is to use a process control-flow complexity measure, which examines the control-flow of consecutive processes and can be applied to data acquisition processes and workflows; the control-flow complexity measure is then evaluated to ensure that high-quality results can be achieved on time. In this study, a ‘process’ is defined as a sustained series of events or actions that effects change through a series of stages. It resembles an interactive algorithm in which elements such as the stakeholder requirements, the 3D object properties and the available resources (such as professional expertise and equipment) interplay with environmental parameters and reorganise or rearrange entities such as activities, decisions, or contexts.
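One published example of such a measure is a Cardoso-style control-flow complexity (CFC) metric, which scores each split in a workflow by the number of execution states it can induce. The sketch below applies it to a hypothetical data acquisition workflow; the workflow itself and its splits are assumptions made for illustration.

```python
# Minimal sketch of a Cardoso-style control-flow complexity (CFC) measure,
# applied to a hypothetical 3D data acquisition workflow (splits and fan-outs are illustrative).

def cfc(splits):
    """splits: list of (kind, fan_out) pairs, with kind in {'XOR', 'OR', 'AND'}."""
    total = 0
    for kind, fan_out in splits:
        if kind == "XOR":          # exclusive choice: one of fan_out branches
            total += fan_out
        elif kind == "OR":         # inclusive choice: any non-empty subset of branches
            total += 2 ** fan_out - 1
        elif kind == "AND":        # parallel split: all branches, a single state
            total += 1
        else:
            raise ValueError(f"unknown split kind: {kind}")
    return total

# Hypothetical workflow: choose one capture technique (XOR of 3),
# optionally add colour calibration and/or control-point survey (OR of 2),
# then run registration and quality checks in parallel (AND of 2).
workflow_splits = [("XOR", 3), ("OR", 2), ("AND", 2)]
print("control-flow complexity:", cfc(workflow_splits))  # 3 + (2**2 - 1) + 1 = 7
```

Higher CFC values flag workflows with more decision states to plan, staff and verify, which is the sense in which the measure can support scheduling high-quality results on time.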

Therefore, the proposed Data Acquisition Process Management System (DAPMS) provides, for the first time, a fundamental infrastructure to define and manage the different processes in the area of 3D digitisation of tangible CH objects. The proposed approach and the steps to be followed are illustrated in the following sequence of figures:

  • Figure 4 provides an overview of the parameters to be considered, starting with the owner's/stakeholder's requirements for the 3D object and moving through aspects related to object description, project definition, team characteristics, environment, equipment, and pre-processing, to the final deliverables.

  • Figure 5 outlines the owner’s/stakeholder’s requirements, in terms of: (a) the tangible object, (b) the project-related stakeholder requirements and (c) the quality of the final results to be achieved (minimum requirements needed for a public tender in 3D digitisation of tangible objects); for the latter, the requirements are grouped under main categories, referring to Geometry (2D, 3D), Image (texture, scale, spectral), Materials and Structural Health Monitoring.

  • Figure 6 illustrates the minimum information required for the description of the 3D CH tangible asset.

  • Figure 7 presents the main parameters to be considered for defining a 3D CH data acquisition project.

  • Figure 8 addresses the often-underestimated role of human resources, putting emphasis on criteria to assess the level of qualifications and experience acquired through formal (professional) or other (amateur/hands-on) training.

  • The environmental conditions to be considered for a 3D data acquisition mission are presented in Figure 9, taking into account the different possible locations where the project can be conducted.

  • As shown in Figure 10, the equipment falls into two broad categories: Software and Hardware. For the Software part, one may have to choose among open source, customised, commercial, or combinations of these. For Hardware, a key differentiation comes from whether the project is conducted in an indoor or an outdoor environment. The parameters/technologies to be considered for an indoor project are shown in Figure 10 and those for an outdoor project in Figure 11.

Figure 12 represents a logical dynamic graph for a 3D digitisation project and summarises visually the relation of complexity to quality. Figure 13 then shows the Radial Pie Chart tool developed by this study to represent the complexity of a digitisation project. This tool lies at the heart of our efforts to obtain a concrete measurement of complexity in 3D projects that can be used for practical purposes. Outside the direct remit of the study, a DAPMS Application (App) has been developed and, at the time of writing this report, is in the final stage of revision and testing in a series of 3D digitisation CH case studies (Fig. 14).
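A minimal matplotlib sketch of such a radar (spider) chart is shown below; the parameter names follow the complexity parameters discussed above, while the level values, axis range and styling are illustrative assumptions rather than outputs of the study's tool.

```python
import math
import matplotlib.pyplot as plt

# Hypothetical complexity levels (1-5) for the parameters used in the study's radar charts.
labels = ["Stakeholder req.", "Object", "Project", "Team",
          "Environment", "HW/SW", "Pre-processing"]
levels = [4, 5, 3, 2, 4, 3, 3]

# One axis per parameter; close the polygon by repeating the first point.
angles = [2 * math.pi * i / len(labels) for i in range(len(labels))]
angles += angles[:1]
values = levels + levels[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 5)
ax.set_title("Indicative complexity radar chart")
plt.show()
```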

Fig. 4. Overview of the VIGIE2020/654 proposed DAPMS

Fig. 5. The Tangible Object as a parameter of complexity

Fig. 6. The Stakeholder Object’s description as part of the complexity

Fig. 7. The Project as a parameter of Complexity.

Fig. 8. The Human Resources as a parameter of complexity

Fig. 9. The Environmental Conditions as a parameter of complexity.

Fig. 10. Data Acquisition Techniques (equipment) for indoor and outdoor 2D/3D digitisation as a parameter of complexity.

Fig. 11. The Location of the tangible object as a parameter of complexity.

Fig. 12. Overview diagram illustrating the relation of complexity to quality.

Fig. 13. Radar chart depicting the parameters for complexity.

The Radial Charts for Complexity

Every complexity factor resulting from stakeholder requests, including the assigned time horizon, total budget availability/priority and overall vision, indirectly controls time allotment and resource allocation in general, based on the digitisation purpose, desired level of detail, location, type, etc. (Fig. 15).

Fig. 14. Layers of the Stakeholder’s Requirements complexity parameter

Fig. 15. Layers of the Pre-processing complexity parameter

Digital CH Data Preprocessing requirements tend to vary significantly, depending on the scalability associated with the data to be acquired (often recorded from different sensors, as part of disparate sources, in different formats etc.). This in turn imposes changes in demand for Data Consolidation/Registration (collection, selection, merging or integration), Cleaning (missing value imputation, noise control), Transformation (normalisation, aggregation/discretisation) and Reduction (decreasing number of variables/cases, balancing skewness); each coming with additional hardware and software implications (Fig. 16).
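These four pre-processing stages can be pictured as a small pipeline. The NumPy sketch below is purely illustrative: it uses randomly generated points instead of real scan data, assumes the two clouds are already co-registered, and applies simplistic cleaning, normalisation and voxel-downsampling rules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for two scans from different sensors (N x 3 point clouds in metres).
scan_a = rng.normal(loc=0.0, scale=1.0, size=(5000, 3))
scan_b = rng.normal(loc=0.1, scale=1.0, size=(4000, 3))
scan_b[::200] = np.nan  # simulate a few sensor dropouts

# 1) Consolidation: merge the two (assumed co-registered) clouds.
points = np.vstack([scan_a, scan_b])

# 2) Cleaning: drop points with missing coordinates, then remove gross outliers
#    (points far beyond the typical distance from the centroid).
points = points[~np.isnan(points).any(axis=1)]
centroid = points.mean(axis=0)
distances = np.linalg.norm(points - centroid, axis=1)
points = points[distances < distances.mean() + 3 * distances.std()]

# 3) Transformation: normalise the cloud to a unit bounding box.
mins, maxs = points.min(axis=0), points.max(axis=0)
points = (points - mins) / (maxs - mins)

# 4) Reduction: crude voxel-grid downsampling, keeping one point per occupied voxel.
voxel = 0.05
voxel_ids = np.floor(points / voxel).astype(int)
_, keep = np.unique(voxel_ids, axis=0, return_index=True)
points = points[keep]

print(f"{points.shape[0]} points remain after pre-processing")
```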

Fig. 16. Layers of the Software and Hardware Equipment complexity parameter

DCH software requirements tend to differ in terms of reliability, operability, compatibility, maintainability, quality of results, security etc.

The demands in computational power, bandwidth, memory, time, and cost are, however, always determined in relation to the corresponding hardware constraints. DCH data acquisition alone (multi-sensing) can call for considerable variance in hardware selection (cameras, scanners, drones etc.). Additionally, demands in storage capacity (cache/physical memory, disk drives and partitions, cloud capacity, etc.), processing power (operation/network configurations) and representation (monitors, printers etc.) are often determined ad hoc. Conversely, the constraints in computational power, bandwidth, memory, time, and cost are always decided in relation to the corresponding software demands (Fig. 17).

Fig. 17. Layers of the Environment complexity parameter

Environmental conditions (controlled or not) that may be perceived as contributing factors of complexity are included here. Both long-term (climate) conditions known to interfere with 3D data acquisition in general, such as rain, snow, wind, frost, fog, and sunshine, and physical measurements that become critical in reporting, such as temperature, humidity, barometric pressure, wind speed/direction, air pollution, etc., are taken into consideration (Fig. 18).

Fig. 18. Layers of the Team complexity parameter.

The Team/expert parameter incorporates all complexity factors associated with personnel grouping (team formation, communication, interaction and collaboration) including HR responsibility and accountability (Fig. 30). This ranges from user qualifications and corresponding worldwide recognised certification, licenses, and equipment/infrastructure distribution, to interpersonal coordination together with quality assurance implications in the field (Fig. 19).

Fig. 19. Layers of the Project complexity parameter.

The Project parameter includes all complexity factors pertaining to digital CH project planning, performance monitoring and management (Fig. 29). This includes setting up an integrated management framework to effectively share resources, experience, knowledge and expertise in pursuit of collective intelligence, subjected to any physical, operational, technical or financial constraints and logistics (Fig. 20).

Fig. 20. Layers of the Object complexity parameter.

Complexity factors stemming from object attributes or specifications (Fig. 28), including states of condition, physical, chemical, and functional properties, as well as dimensions, classifications, permissions for transportation and any other object-specific concerns (health and safety, legal, ethical etc.), regulate the digitisation process. An increasing requirement of the CH community and corresponding research institutions (such as the H2020 MSCA ITN CHANGE project, running at the time of writing this report) relates to the fidelity of colours, ranging from the usual colour calibration within an image-based modelling pipeline to more demanding reflectance measurements such as light-material interactions. Such a requirement is a novel complexity gap for the 3D modelling pipeline, including the visualisation step. At the moment of writing this report, there is still no universal consensus on the best format for rich colourimetry measurements (Fig. 21).

The Radial Charts for Quality

Image quality in DCH is often defined by spectroscopic features achieved via theoretical, experimental, and numerical techniques that strive to meet multi-objective photometric criteria (spectral regions). These include Absorbance, Transmittance, and Reflectance levels mapping to particular source, range, wavelength and frequency configurations (Fig. 22).
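To make these photometric quantities concrete, the following minimal Python sketch, using illustrative values for a hypothetical sample at a single wavelength, converts measured reflectance and transmittance fractions into absorptance and absorbance.

```python
import math

# Illustrative fractions of incident light for a hypothetical sample at one wavelength.
reflectance = 0.15    # fraction of incident light reflected
transmittance = 0.60  # fraction of incident light transmitted

# Energy balance: whatever is neither reflected nor transmitted is absorbed.
absorptance = 1.0 - reflectance - transmittance

# Absorbance (optical density) is conventionally derived from transmittance.
absorbance = -math.log10(transmittance)

print(f"absorptance = {absorptance:.2f} (fraction absorbed)")
print(f"absorbance  = {absorbance:.2f} (optical density, -log10 T)")
```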

Fig. 21. Layers of the Spectral image quality parameter.

Fig. 22. Layers of the Scale image quality parameter.

Quality in digital CH is often perceived as an indication of potential detail in an image, referring to the relative difference in size (or distance) between the image and the (radiometric) features represented on the ground. The quality of the calculated scale depends on the accuracy of the measured distance, as well as the spatial resolution (pixel ratio), affecting colour range and (bit) depth (Fig. 23).
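For image-based recording, this scale relationship can be made explicit through the ground sampling distance (GSD), computed from the standard pinhole relation GSD = pixel size × distance / focal length; the camera values in the sketch below are illustrative and not tied to any specific device.

```python
# Ground sampling distance (GSD) for a pinhole camera model:
#   GSD = pixel_size * distance_to_object / focal_length
# The camera values used in the example are illustrative.

def ground_sampling_distance_mm(pixel_size_um: float,
                                focal_length_mm: float,
                                distance_m: float) -> float:
    pixel_size_mm = pixel_size_um / 1000.0
    distance_mm = distance_m * 1000.0
    return pixel_size_mm * distance_mm / focal_length_mm

# Example: 4.4 um pixels, 35 mm lens, object 10 m away.
gsd = ground_sampling_distance_mm(pixel_size_um=4.4, focal_length_mm=35.0, distance_m=10.0)
print(f"GSD ≈ {gsd:.2f} mm per pixel")  # ≈ 1.26 mm per pixel
```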

Fig. 23. Layers of the Texture image quality parameter.

Image quality in DCH digitisation often comes down to realistic 3D visualisations, employing sufficiently detailed rendering techniques to support object representation in multi-dimensional space. That is, calculating and adjusting textures based on recorded physical material characteristics such as opacity, contrast, and granularity, to the point where external structure approximations reflect the desired shape accuracy and colour depth (Fig. 24).

Fig. 24. Layers of the 3D geometry quality parameter.

Similarly to the 2D attributes, the same quality factors concern the 3D aspects of geometry when generating high-resolution point clouds via specialised equipment (multi-view cameras, depth sensors, ToF, etc.), often calling for advanced signal processing tools and (semi-automated) modelling practices. In cases of complex backgrounds or textures, moving 3D objects, and severe occlusions, relative measures might dictate computationally intensive self-calibration/registration and synchronisation methods (Fig. 25).
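Registration of overlapping point clouds ultimately reduces to estimating a rigid transform. The NumPy sketch below shows the well-known Kabsch (SVD-based) solution for the simplified case where point correspondences are already known; full ICP-style pipelines iterate this step while re-estimating correspondences, and the synthetic data here is an assumption for illustration.

```python
import numpy as np

def rigid_transform(source: np.ndarray, target: np.ndarray):
    """Kabsch algorithm: best-fit rotation R and translation t so that R @ p + t ≈ q
    for corresponding rows p of source and q of target (both N x 3 arrays)."""
    src_centroid = source.mean(axis=0)
    tgt_centroid = target.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (source - src_centroid).T @ (target - tgt_centroid)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t

# Illustrative check with synthetic data: rotate and translate a random cloud, then recover the transform.
rng = np.random.default_rng(1)
source = rng.uniform(-1, 1, size=(100, 3))
angle = np.radians(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0,              0,             1]])
target = source @ R_true.T + np.array([0.5, -0.2, 1.0])

R_est, t_est = rigid_transform(source, target)
residual = np.linalg.norm(source @ R_est.T + t_est - target)
print(f"registration residual: {residual:.2e}")  # ~0 for noise-free correspondences
```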

Fig. 25. Layers of the 2D geometry quality parameter.

A substantial subset of quality factors relating to computational geometry, such as accuracy and precision, may coincide with 2D attributes that could be efficiently represented on a coordinate plane. Relative measures are often estimated with respect to requirements in point density and corresponding (lack of) completion, with enhanced capturing resolution in mind (Fig. 26).

Fig. 26. Layers of the Structural Health Monitoring quality parameter.

Another important DCH quality factor is the extent to which the digitisation process responds to adverse structural changes, looking for structural reliability and life-cycle management. This implies a meticulous condition assessment that goes beyond common compositional analyses to appropriately cover states of conservation, connectivity, foundation strength/integrity and material quality for large-scale built objects, monuments, and sites (Fig. 27).

Fig. 27. Layers of the Material quality parameter.

All quality aspects responding to the complexity imposed by the characteristics of the material(s) involved, such as individual strength attributes like yield, fatigue, tensile strength or toughness, can interact directly or indirectly, individually or jointly, with the overall quality of the digitisation process. To mention a few, these also include chemical composition, moisture, corrosion, carbonation, resistance and porosity.

11 Conclusion

This unique study paid special attention to the fact that the 3D digitisation of movable and immovable CH can be an exceptionally complex process, with numerous factors limiting the eventual quality of the digitised object. Parameters such as budget, time available, location and condition of the object, accuracy and precision are significant in setting the production effort and output standard. This issue is crucial for CH, as there are no internationally recognised standards or guidelines for planning, organising, setting up and implementing a 3D data acquisition project. As acquisition technologies and software systems become increasingly accessible, with photorealistic renderings now commonplace, it is even more crucial to understand the physics behind the hardware and the fundamentals of data capture and processing methodologies.

The complexity of a 2D/3D digitisation project can be determined after factors such as the stakeholder requirements, project specifications, personnel qualifications, the type and location of the object and the corresponding environmental conditions, the equipment to be used, the real object and the pre-processing software are defined. These issues will inform the level of production effort. The determination of quality can comprise the degree of detail, the precision and resolution of the geometric accuracy of the 3D shape, or the fidelity of capturing colour/texture. This EU study demonstrates, for the first time, that complexity and quality are fundamental considerations in determining the necessary and required effort for a digitisation project, including the required grade or value of the output.

Moreover, with highly skilled and competent personnel in place, future technological advancements, such as the integration of more efficient artificial intelligence algorithms for the automatic merging of point clouds from different sensors, the recognition and extraction of geometrical features, faster and more accurate sensors, and greater computing power directly linked to high-bandwidth internet connections such as 5G and cloud computing infrastructures, will surely result in improvements in the process of capturing and processing 2D/3D data. This will make it easier to increase the quality of data acquisition results in the shortest possible time, to work with larger data volumes and bigger 3D models of higher grade, and finally to contribute to the implementation of the EU Digital Day 2019 declaration.