1 Introduction

When an alleged criminal incident occurs in Scotland, the incident is reported initially to the police, who gather initial information and contact the Crown Office and Procurator Fiscal Service (COPFS). At this stage, COPFS will lead the investigation and, in collaboration with the police, will typically call for intelligence from the first responders attending the scene. A series of forensic strategy meetings will inform the investigative team, enabling decisions on the appropriate deployment of practitioners. In complex situations, first responders and investigators may also call for the assistance of forensic science specialists in the early stages of the investigation (ACPO 2006). The multi-skilled team needs to become familiar with the scene of interest at the earliest stages of the investigation, and confidently making scene-related strategic decisions is imperative against a backdrop of environmental and time constraints. Limiting access to the crime scene is necessary to minimise the risk of contamination, while the availability of experts for scene attendance may also be challenging (Streefkerk et al. 2009). When specialists are locally unavailable, the cost and feasibility of bringing in more remote experts may hamper their attendance.

From the inception of an investigation, stakeholders with different backgrounds, namely prosecutors, police investigators, police liaison officers, crime scene investigators, forensic scientists, and other specialist practitioners, need to work together. This collaborative effort relies on effective communication of scientific material, which may be hindered by unclear reporting (Howes 2017). All those involved may benefit from direct access to the scene. Nonetheless, examination protocols establish access rights and the sequencing of experts, intending to optimise evidence retrieval whilst minimising the risk of contamination. Decisions regarding the investigation strategy rely extensively on the scene documentation provided by first responders and investigators who may have undertaken an initial scene ‘walk through’. These early access and examination sequencing decisions are essential to successfully detect, record and recover the traces underpinning subsequent hypothesis formulation throughout the investigation.

The training and experience of the crime scene examiners, in conjunction with the tools at their disposal and the temporal availability of the scene, decisively affect the collection and transfer of information, which, in turn, impacts the strategy adopted for the scene examination (Baber and Butler 2012; Kelty et al. 2011). Prompt documentation of the scene is thus crucial and is commonly supported by sketching, photography and occasionally videography, which have proven to be effective scene-processing methods for over a century (Fish et al. 2013).

1.1 Related works

Two primary aspects characterise crime scene documenting and reporting: the recording of spatial data using sensors with established measurement uncertainty, and the imaging and visualisation of the acquired data using validated procedures. Previous studies evaluated 3D digitalisation technologies in indoor environments (Kang et al. 2020; Pintore et al. 2020), including but not limited to photogrammetry (Perfetti et al. 2017), laser scanning (Lehtola et al. 2017; Maboudi et al. 2017) and structured light (Wang et al. 2019), across a variety of fields. These recording methodologies have been featured in studies involving the recording of crime scenes for forensic purposes (Luchowski et al. 2021; Galanakis et al. 2021). A relatively small body of literature is concerned with indoor scene reconstruction (Galanakis et al. 2021; Sansoni et al. 2011; Larsen et al. 2021). For example, de Leeuwe (2017) reported that traditional crime scene documentation methods did not always provide enough contextual information relating to the positioning of evidentially valuable traces, and demonstrated an approach using a combination of digital acquisition methods to deliver comprehensive scene information.

Similarly, spatial understanding and contextualisation are paramount in fire scenes, where the location of fire patterns must be correctly understood (Almirall and Furton 2004). However, much of the research to date has not addressed the recording of these specific scenes. Fire scenes can be challenging to record because of the nature of the environment, which calls for the appropriate selection of recording methodology. Two suitable documentation methods are photogrammetry and lidar scanning (Perfetti et al. 2017; Remondino et al. 2005). Photogrammetry performs well with images captured in burnt environments, including poorly lit ones where a tripod may be necessary to capture long-exposure photographs (Yu et al. 2023). Lidar scanners provide an alternative that does not rely on ambient lighting conditions. Strategies aimed at minimising contamination while ensuring the expected standard of scene documentation include the employment of standard operating procedures (SOPs) and validated protocols (National Research Council 2009; Doak and Assimakopoulos 2007; Raneri 2018).

The effectiveness of data obtained from any scene imaging method relies on appropriate visualisation techniques. Ebert et al. (2014) argue that traditional 2D displays suffer from information loss caused by the 2D projection of captured 3D data. Tredinnick et al. (2019) pointed out that forensic practitioners preferred 2D software despite having the capability of operating in 3D. These cases corroborate the need for proper visualisation techniques. Recent developments in 3D rendering and hardware solutions may help address these issues. To operationally implement data of increasing size and complexity, dedicated rendering techniques (Dietrich et al. 2007; Karis et al. 2021; Luebke et al. 2003) and data transformations (Estellers et al. 2015) are required. Immersive Virtual Reality may represent a feasible solution to achieve adequate and intuitive visualisation of 3D environments. Modern head-mounted displays offer an elevated level of immersion and convey a great sense of presence (Ebert et al. 2014; Schuemie et al. 2001; Slater and Wilbur 1997; Bowman and McMahan 2007).
Immersion is an essential factor in the sensory fidelity of virtual environment visualisation and plays a significant role in creating a solid sense of ‘presence’ in the virtual space (Darken et al. 2014). Such attributes have been shown to enhance environmental perception and spatial memory (Reichherzer et al. 2018; Ruddle et al. 2011; Reichherzer et al. 2021), as well as the retention of memories (Mania et al. 2003). The scientific literature shows that Virtual Reality (VR) is not new to forensic science. Previous studies examined the implementation of this technology as an instrument to support a variety of applications, including but not limited to witness interrogations (Sieberth et al. 2019; Sieberth and Seckiner 2023), courtroom presentations (Ma et al. 2010; Sevcik et al. 2022; Sieberth et al. 2021; Reichherzer et al. 2022; Wang et al. 2019), educational purposes (Kader et al. 2020; Khalilia et al. 2022; Jani and Johnson 2022; Drakou and Lanitis 2016), forensic examination of injuries (Koller et al. 2019) and evidence (Guarnera et al. 2022), and the training of forensic practitioners (Mayne and Green 2020; Karabiyik et al. 2019). However, much of this research has not considered virtual reality as an operational tool for forensic investigations. The potential use of VR to augment, enhance or replace the use of a limited number of 2D photographs of a scene merits exploration. This technology is particularly valuable where the available images fail to provide a complete picture of the investigated environment, impeding the development of scientifically robust hypotheses.

This paper is an extension of previous work that focused on the feasibility of producing 3D digital reconstructions for visualisation in VR. We demonstrate how the exploitation of standard photographic equipment currently in use by crime scene investigators (CSIs) in the United Kingdom (Robinson and Richards 2010; PICS 2019), in conjunction with structure-from-motion (SfM) software, can produce VR representations of the scene of interest. The specific objective of this study was to explore whether streamlining the documentation process can enhance decision-making at the early stages of a forensic investigation. Using a fire scene as a case study, we present the design and implementation of a digitalisation pipeline and visualisation application. The VR case study was evaluated by fire investigators from two organisations as a briefing tool for a forensic strategy meeting, in comparison with traditional 2D photography of the same scene.

The use of an immersive VR solution capitalises on well-established consumer-grade technologies to document the sites of interest. This enables the provision of high-resolution reconstructions to local and remote experts whilst limiting the financial cost. In summary, our approach consists of:

  • The application of a standard operating procedure to image indoor fire scenes exploiting structure-from-motion photogrammetry.

  • The development of visualisation software for standalone VR headsets, allowing remote practitioners to visualise the digital scans promptly.

  • The assessment of the proposed workflow as an integrative tool supplied to forensic science practitioners at forensic strategy meetings (FSMs).

2 Methods

2.1 Data acquisition strategy

For traces to be considered reliable and pertinent for inclusion in an investigation, they must satisfy minimum standards of authenticity, integrity, validity, and quality. These are essential criteria to ensure the traces are trustworthy and can confidently support the investigation. Data backing an FSM, including images, must meet similar requirements, with considerable attention toward the timeliness of acquisition. A simulated fire scene was built in a domestic dwelling in the Guldborgsund municipality (Denmark) to test the proposed imaging-to-VR methodology in a realistic scenario. Experts from the Special Crime Unit of the Danish Police (Nationale Kriminaltekniske Center, NKC) designed, executed and curated the scene setup, portraying a complex and plausible scenario (Fig. 1). Operating in a controlled environment ensured a complete understanding of the events, enabling comparison of the hypotheses against ground truth data (Reichherzer et al. 2018).

Fig. 1

Frames extracted from the footage documenting the pre-fire events and the fire development at the scene

A range of image capture devices (Table 1) was used to record the room of interest (Fig. 2) on the same day as fire extinguishment, ensuring minimal scene variation.

Fig. 2

Top-down orthographic view of the reconstructed area

Table 1 Imaging techniques executed at the simulated scene

The acquisition methods produced data in the format of a dense point cloud (RTC360) and a 3D mesh (DSLR). Although technically capable of producing similar spatial data, the Matterport camera was operated by specialised staff solely to capture the 360° panoramic images used as part of the briefing material for the workflow assessment. Several digitalisation techniques can enable the creation of 3D meshes, including but not limited to photogrammetry from traditional stills and 360° images (Barazzetti et al. 2018), videogrammetry, lidar scanners and structured light scanners. To construct a systematic workflow for the acquisition and integration of the spatial data, we selected a standard operating procedure employing fisheye lenses for the rapid and systematic documentation of indoor scenes (Yu et al. 2023), aimed at exploiting structure-from-motion photogrammetry. Figure 3 presents the overall data flow.

Fig. 3

Data flow and corresponding domains

The photogrammetric image acquisition of the scene produced 402 RAW images using a Canon EOS 5D Mark III with a Canon EF 8–15 mm f/4L Fisheye USM lens (Rinaldi et al. 2021). The camera operated in Aperture Priority mode with an F-stop value of f/10, a constant ISO speed of 100 and a variable shutter speed determined by the camera. Upon completion of the acquisition, the photographs were downloaded from the camera memory card onto a laptop and then transferred through a Virtual Private Network (VPN) to a secure repository hosted in a trusted environment. Next, the photographs were converted into JPEG files using Adobe Bridge (version 12.0.3.270, Adobe Inc., San Jose, California, USA) to facilitate photogrammetric computation.

The 3D geometry and textures were generated on a workstation within the same secure network. The photogrammetric reconstruction was carried out in Agisoft Metashape (version 1.8.1.13915, Agisoft LLC, St. Petersburg, Russia) using the default “High” settings for alignment and mesh generation. Subsequently, the polygon count was decimated to the target face count (750k) to meet the requirements of the VR headset of choice (Meta 2023). The resulting meshed model was scaled using the “Scale Bars” tool in Agisoft Metashape against measurements performed on the RTC360 laser scan. The report generated by the photogrammetric software indicated a residual error of 2.6 mm, obtained as the RMS of the standard deviations of four control scale bars. The 3D mesh was exported as an FBX file and imported into the 3D engine Unity (version 2021.3.5f1 LTS, Unity Technologies, Copenhagen, Denmark). Subsequently, the virtual reconstruction was packaged into a Unity Addressables catalogue and published on the dedicated private cloud storage. The repository was housed within the same trusted environment and accessed by the dedicated VR software described below. Table 2 presents a breakdown of timings for the data acquisition and computation phases. Whilst data transfer was highly dependent on the internet bandwidth available at the incident location (b) and the connection speed within the trusted environment (e), the generation timeframe using photogrammetry varied significantly according to the software version and hardware used (Appendix A, Table 10).
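
The reconstruction steps described above can also be scripted via Metashape’s Python API (Professional edition). The sketch below is illustrative rather than the workflow actually used in this study: parameter names and downscale values vary between Metashape versions, and the marker placement and reference distance shown here are assumptions.

```python
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(sorted(glob.glob("scene_jpg/*.jpg")))  # JPEGs converted from the RAW captures

# Fisheye captures require the fisheye camera model
for sensor in chunk.sensors:
    sensor.type = Metashape.Sensor.Type.Fisheye

# Alignment and mesh generation (values approximating the GUI "High" presets)
chunk.matchPhotos(downscale=1, generic_preselection=True)
chunk.alignCameras()
chunk.buildDepthMaps(downscale=2)
chunk.buildModel(source_data=Metashape.DepthMapsData)

# Decimate to the 750k face budget of the standalone headset, then texture
chunk.decimateModel(face_count=750_000)
chunk.buildUV()
chunk.buildTexture(texture_size=8192)

# Scale against a distance measured on the RTC360 laser scan
# (assumes two markers were placed beforehand on the corresponding points)
scalebar = chunk.addScalebar(chunk.markers[0], chunk.markers[1])
scalebar.reference.distance = 1.000  # metres; placeholder value
chunk.updateTransform()

chunk.exportModel(path="fire_scene.fbx")
doc.save(path="fire_scene.psx")
```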

Table 2 Breakdown of photogrammetry workflow processes with 3D generation and preparation timeframes

2.2 VR framework development

To achieve spatial data visualisation, we developed a dedicated framework, Crime Scene VieweR (CSVR). Image integrity was preserved by performing all subsequent image processing on a copy of the original pictures acquired at the scene. We developed an array of calibration tools to ascertain the geometrical fidelity (i.e. scale, positioning, and orientation) of the virtual reconstructions. The framework was developed with the Unity physics engine and consisted of two elements: a collection of editing tools and the executable application used to navigate the reconstructions. The first element streamlined the data preparation stage, granting independence from the digitalisation method. This component consisted of a custom Unity script providing an interface between the acquisition technique of choice and the visualisation app. The script facilitated the importation of the 3D data, adjusting the spatial data through geometric transformations, namely rigid translation and scale correction. Where a photogrammetric method provided the source data, a dedicated script enabled the automatic extraction of the source cameras, indicating the original location of each photograph with a placemark (Fig. 4). This feature aimed to supply the expert examiners with contextual information, presenting the original ground truth data to confirm or refute the geometry of the model in the VR representation.
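
The geometric adjustment amounts to applying a similarity transform (uniform scale, optional rotation, rigid translation) to the imported vertices and to the extracted camera centres, so that the camera placemarks stay registered with the geometry. The sketch below illustrates the underlying operation in Python with NumPy; it is not the Unity editor script used in CSVR, and the file names and numeric values are placeholders.

```python
import numpy as np

def similarity_transform(points: np.ndarray, scale: float,
                         rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Apply x' = s * R @ x + t to an (N, 3) array of points."""
    return scale * points @ rotation.T + translation

# Placeholder inputs: mesh vertices and SfM camera centres exported from the reconstruction
vertices = np.load("mesh_vertices.npy")         # shape (N, 3)
camera_centres = np.load("camera_centres.npy")  # shape (M, 3)

s = 1.02                          # scale correction derived from the laser-scan check
R = np.eye(3)                     # identity rotation in this example
t = np.array([0.0, -1.10, 0.0])   # rigid translation onto the floor datum

# The same transform is applied to both, so the placemarks remain aligned with the mesh
vertices_fitted = similarity_transform(vertices, s, R, t)
cameras_fitted = similarity_transform(camera_centres, s, R, t)
```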

Fig. 4

First-person view of the site reconstructed using photogrammetry. Images produced in the photogrammetric acquisition are displayed as interactable floating images

Following the model placement, a catalogue collated the fitted data with relevant descriptive attributes, including the registration method and parameters used during the generation process. The structured data was moved to the custom storage system within the trusted environment to serve the authorised applications.
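
As an illustration of the kind of descriptive attributes such a catalogue might collate, the record below sketches one possible entry in Python. The field names, URI and registration values are hypothetical and do not reflect the actual schema used in the trusted environment; only the 2.6 mm residual and the 750k face budget are taken from Sect. 2.1.

```python
# Hypothetical catalogue entry; field names and values are illustrative only.
catalogue_entry = {
    "scene_id": "fire-scene-001",
    "acquisition_method": "SfM photogrammetry (fisheye SOP)",
    "registration": {
        "method": "scale bars against RTC360 laser scan",
        "rms_residual_mm": 2.6,
        "translation_m": [0.0, -1.10, 0.0],  # placeholder rigid translation
        "scale_factor": 1.02,                # placeholder scale correction
    },
    "generation": {
        "software": "Agisoft Metashape 1.8.1",
        "alignment_preset": "High",
        "target_face_count": 750_000,
    },
    "asset": {
        "packaging": "Unity Addressables catalogue",
        "uri": "https://storage.example.org/reconstructions/fire-scene-001",  # placeholder
    },
}
```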

The second element of the framework was the VR navigation tool. The proposed solution targeted the OpenXR standard for Unity and was deployed on the standalone head-mounted display (HMD) Meta Quest 2 (https://www.meta.com) to confirm the feasibility of the proposed system. The headset featured a fast-switch LCD (Kim et al. 2022) with a resolution of 3664 × 1920 pixels (1832 × 1920 per eye, 773 PPI) and a 90 Hz refresh rate. The devised software enabled intuitive inspection of the virtual environments stored in the repository. Upon initiation, the VR application fetched the latest reconstructions available. Based on the taxonomy outlined in Boletsis and Cedergren (2019), we implemented two locomotion techniques to achieve effective navigation in VR, namely roomscale-based (real walking) and teleportation-based (point-and-teleport). In the case of distributed teams, users could perform a collaborative examination by establishing a client-host connection via the provisioned menu, relying on a local or remote catalogue of digitalised environments. Figure 5 presents the distributed server-client architecture.
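
For the point-and-teleport technique, the essential computation is intersecting the controller’s pointing ray with a walkable surface and validating the hit before moving the user. The snippet below is a purely illustrative Python/NumPy sketch of that geometry, assuming a flat floor plane and a maximum teleport range; the actual CSVR implementation relies on Unity’s OpenXR interaction and raycasting facilities.

```python
import numpy as np

def teleport_target(origin, direction, floor_y=0.0, max_range=8.0):
    """Intersect the controller ray with the floor plane y = floor_y and
    return the teleport target, or None if the hit is invalid or out of range."""
    direction = direction / np.linalg.norm(direction)
    if abs(direction[1]) < 1e-6:      # ray parallel to the floor: no intersection
        return None
    t = (floor_y - origin[1]) / direction[1]
    if t <= 0:                        # floor plane lies behind the controller
        return None
    hit = origin + t * direction
    if np.linalg.norm(hit - origin) > max_range:
        return None                   # target beyond the permitted teleport range
    return hit

# Example: controller at head height, pointing forwards and slightly downwards
print(teleport_target(np.array([0.0, 1.6, 0.0]), np.array([0.0, -0.5, 1.0])))
```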

Fig. 5

Distributed Server-Client system architecture and corresponding domains

2.3 A case study: VR as an integrative briefing tool for forensic strategy meetings

The goal of this study was to investigate the feasibility of the proposed VR system in enabling confident decision-making and providing an operational advantage within the scenario of a forensic strategy meeting. We evaluated the effectiveness of implementing VR at the early stages of a fire scene investigation by analysing the hypotheses considered by practitioners when navigating the premises of interest. Additionally, system usability was evaluated via a questionnaire provided to participants at the end of each trial session. Concerning fire scenes, the European Firestat Project (European Commission 2022) identified several variables highly influential to decision-making (Table 3). Similarly, the Scottish Police Authority Forensic Services (SPA-FS) routinely perform exercises targeting fire investigation casework, hinging on the identification of factors akin to those outlined in the Firestat report (marked with an asterisk in Table 3), in addition to fire development and burn patterns.

Table 3 Tiered classification of the high-priority variables contributing to the decision-making process (European Commission 2022)

Training and competency testing of fire investigators are particularly challenging, owing to the costs and organisational effort required to create realistic test settings. In addition, contamination and degradation of the scenes over time critically hamper the repeatability of testing. Our study involved a collaborative exercise relating to the provision of briefing material to an FSM. The task was divided into two phases, the first using conventional 2D photography of the scene and the second using VR. Figure 6 summarises the data preparation for the exercise.

Fig. 6

Data workflow of the briefing intelligence for the two exercise phases, from acquisition to deployment

The first part of the collaborative exercise tested the practitioners’ ability to identify the origin, cause, and development of the fire based on a traditional briefing using notes and photographs of the scene. The list of questions formed the core of a competency test drawn up by the SPA-FS Forensic Operation Lead and Lead Forensic Scientist (Chemistry). Phases 1 and 2 of the exercise incorporated the same set of questions (Table 4) to gauge the performance of the practitioners. The second part of the exercise took place four weeks later, when the same participants were exposed to the scene using VR and again completed the questionnaire. Responses from the two phases were anonymised and compared by SPA-FS Quality Leads and Lead Forensic Scientists.

Table 4 List of anticipated hypotheses deemed reasonable at the early stage of the examination

The material supplied in phase 1 comprised a briefing paper containing textual notes (Appendix A, Table 8), together with six photographs of the scene and a house plan (Appendix A, Fig. 10). The exercise setters cropped the Matterport 360° panoramic images to obtain stills conforming to usual documentation standards, with a printed size of 15.92 × 8.87 cm and a resulting resolution of 1280 × 720 pixels. The fire investigators answered and submitted the questionnaire within one hour of the start of the exercise, echoing traditional FSMs. Four weeks later, the second part of the exercise was undertaken using the VR reconstruction as the briefing material. All participants stated they had no previous VR experience or had only used comparable headsets a few times for entertainment. At the beginning of each trial, each participant provided informed consent and was instructed in the operation of the VR headset and its controllers. This introduction, which accounted for the first ten minutes of the trial, enabled the users to gain confidence with the two locomotion systems. The participants practised with the tool by freely moving within the virtual scene to be analysed.

The VR setup used a dedicated build of the virtual reality application with the 3D reconstruction embedded locally on the target VR HMD (Meta Quest 2). The source photographs were disabled to minimise visual clutter and facilitate understanding of the 3D reconstruction. The headset operated within a walkable area of 7.00 × 4.50 m, streaming its video to an adjacent laptop monitored by an experienced fire investigator, who facilitated the tests and recorded the answers to the questionnaire provided by the practitioner as they explored the VR space (Fig. 7).

Fig. 7

System setup of the test. Participants discussed their hypotheses whilst sharing the view from the headset with the facilitator recording the answers to the questionnaire

The shared view enabled the recording of statements whilst the investigator performed the examination. After fifty minutes, the participant and their facilitator reviewed the transcript and provided a final report. The participants then answered a questionnaire (Table 5) evaluating their VR user experience (Finstad 2010) and the appearance of the reconstruction (texture and geometry quality), together with a subset of questions taken from the NASA-TLX (Hart and Staveland 1988) to probe subjective workload. The questions used a 5-point Likert scale, with 1 representing “Strongly Disagree” and 5 indicating “Strongly Agree”.

Table 5 User feedback questionnaire administered upon VR test conclusion

3 Results

A qualified checker from SPA-FS assessed the completed questionnaires submitted at the end of the two exercises. Questions Q1 to Q4 shown in Table 4 addressed the potential value provided by using VR (phase 2) over the conventional photographs (phase 1) of the scene. These questions specifically asked the participant to make expert judgements on the area(s) of interest from an investigative perspective (Q1) and on the potential origin, cause and spread of the fire (Q2–Q4), which would require an evaluation of the burn and fire patterns remaining within the room. The participants’ responses received a score of 1 if they positively addressed the question and included at least one anticipated hypothesis listed in Table 4; otherwise, a score of 0 was assigned. Table 6 reports the resulting score for each participant.
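
A minimal sketch of this scoring rule is given below. In the study the scores were assigned by the qualified SPA-FS checker, so the code merely formalises the criterion; the hypothesis labels are abstract placeholders rather than the actual entries of Table 4.

```python
def score_response(identified: set[str], anticipated: set[str]) -> int:
    """Return 1 if the response includes at least one anticipated hypothesis, else 0."""
    return int(bool(identified & anticipated))

# Example with placeholder labels standing in for the anticipated hypotheses of Table 4
anticipated_h2 = {"origin-hypothesis-A", "origin-hypothesis-B"}
print(score_response({"origin-hypothesis-A"}, anticipated_h2))   # 1
print(score_response({"unrelated-hypothesis"}, anticipated_h2))  # 0
```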

Using traditional documentation methods, all fifteen fire investigators correctly identified one or more of the relevant areas of interest. The responses were categorised according to the following questions:

  • H1: Can you identify any area of interest for further examination or excavation?

  • H2: Do you feel able to competently propose a hypothesis for the origin of the fire?

  • H3: Do you feel able to competently propose a hypothesis for the cause of the fire?

  • H4: Do you feel able to competently propose a hypothesis for fire development?

Table 6 Binary scores awarded to the participants at the end of the trials. Practitioners received a score of 1 if their responses included an anticipated hypothesis and a score of 0 if they did not incorporate the correct answer

During phase 1, all participants correctly identified the areas deemed worthy of further investigation (H1). Seven participants (47%) produced the correct hypothesis regarding the area of origin of the fire (H2). Two investigators (13%) correctly identified the cause (H3), whereas six participants (40%) adequately reconstructed the fire development (H4). During phase 2, the VR tool again enabled all participants to correctly identify the areas worthy of further examination or excavation (H1). Ten participants (67%) correctly identified the area of origin of the fire (H2), and three investigators (20%) inferred the cause of the fire (H3). Nine participants (60%) accurately reconstructed the fire development (H4). The scatter plot in Fig. 8 maps the proportions of successful hypothesis formulation for phase 1 (photographs) and phase 2 (VR). Both reporting methodologies facilitated the accurate identification of areas of investigative interest (H1) by all participants. The VR tool was markedly better in facilitating the fire investigators’ ability to suggest the area of origin (H2) and in explaining the fire development (H4).
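
The proportions plotted in Fig. 8 follow directly from the counts reported above; the short sketch below reproduces the calculation (the counts are those stated in the text, out of fifteen participants).

```python
# Counts of correct responses per hypothesis (out of n = 15), taken from the text above.
n = 15
correct = {
    "H1": {"photographs": 15, "VR": 15},
    "H2": {"photographs": 7,  "VR": 10},
    "H3": {"photographs": 2,  "VR": 3},
    "H4": {"photographs": 6,  "VR": 9},
}

for h, phases in correct.items():
    ratios = {phase: count / n for phase, count in phases.items()}
    print(h, {phase: f"{ratio:.0%}" for phase, ratio in ratios.items()})
# H1 {'photographs': '100%', 'VR': '100%'}
# H2 {'photographs': '47%', 'VR': '67%'}
# ...
```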

As shown in Table 6, four more participants were able to provide an estimation of the area of origin using VR, whereas one participant (P13) who had previously located the area of origin using the photographs did not do so using VR. Similar improvements were seen for fire spread, where four more participants were able to accomplish this using VR while one previously correct participant (P8) was not able to do so using VR. Probably the most difficult task, determining the fire cause (H3), showed an improvement from two participants in phase 1 to three participants in phase 2, with one participant (P6) changing their view.

Fig. 8

Left diagram: scatter graph plotting the ratios of acceptable hypotheses. Right table: ratios of correctly formulated hypotheses

Figure 9 presents a box plot of the scores assigned by the participants in the user feedback questionnaire (Table 5). The usability metrics indicated that the VR application met the needs of the participants (F1) and that the experience was judged positively (F2). The application scored highly on ease of use (F3), and participants found its operation straightforward (F4). Participants suggested that improvements could be made to the texture quality and image clarity at higher resolution (F5), whereas the geometry of the 3D reconstruction was rated considerably better (F6). Question F7 revealed that participants perceived the task to have a moderately low level of mental effort, whilst question F8 indicated it was not physically demanding. Future tests addressing this issue should enable users to meet performance expectations independently, for instance by using an integrated system for taking notes within VR or employing a Dictaphone to record reflections when processing the scene. The VR application was judged extremely positively for not inducing stress or annoyance (F9).

Fig. 9

Left diagram: box plot representing the participants’ responses to the post-trial test of Table 5. Right table: mean and standard deviation of the participants’ evaluations

4 Discussion

The results show the potential of integrating new ways of documenting and visualising crime scenes into forensic investigations. We noted an improvement in hypothesis formulation (H2–H4) when the 3D reconstruction of the spaces was presented using virtual reality, suggesting that the decision-making process was bolstered relative to traditional reporting methods. This result is consistent with previous VR visualisation of crime scenes for jury viewings (Reichherzer et al. 2018), which reported increased environmental perception. The utilisation of VR navigation influenced the formulation of the hypotheses, allowing the users to integrate additional details into their findings or challenge previous conclusions.

The visual quality of the system scored well in the user feedback questionnaire (Table 5, F5–F6), although participants called for enhanced texture resolution. There are two likely causes for this result. Firstly, the SOP promoted the acquisition of images using a fisheye lens, thus maximising coverage and overlap within the set of photographs; the disadvantage of using pictures with such a large field of view (FOV) for photogrammetric processing is a loss in texture resolution. Secondly, fire scenes are particularly suitable for photogrammetry, owing to the multitude of distinct features provided by burn patterns and sooty surfaces, which favours an accurate geometric reconstruction. Additional research is needed to better understand the effects of various types of indoor scenes and their suitability for different trace analyses (e.g. blood pattern analysis).

During the trials, several notable effects were observed. Two forensic practitioners produced additional sketches at the end of the VR exercise (Appendix A, Fig. 11). They could recall the layout of the examined room by drawing a top-down perspective of the scene, outlining the burn patterns, the objects, and the areas of interest for further investigation. These episodes indicated an overall improvement in spatial understanding between the two test cases. This result contrasted with the difficulties mentioned at the end of the first phase when attempting to infer the position of some objects using photographs and floor diagrams alone. Specifically, one participant commented in their answers, echoing the findings of a previous study (Reichherzer et al. 2021): "I am also having difficulty in orientating myself based on the diagram drawn".

The participants’ answers to the two exercises contained fewer caveats when they navigated the virtual spaces. Further work is needed to assess the confidence level of their propositions, as higher confidence does not assure the correctness of hypotheses. The main limitation of the experimental design is the answer-recording methodology. As described in Sect. 2.3, the users could not address the questions firsthand when examining the scene in VR, owing to their separation from the surroundings. This visual constraint required a facilitator to transcribe the practitioner’s oral examination of the scene while sharing a video stream from the user’s point of view. Although the auxiliary staff were deemed necessary, their involvement represents the primary limitation of the presented approach. It is therefore imperative to consider the potential emotional arousal of the users: the stress level of the assessed investigators, heightened by the presence of external examiners, may be one of the driving factors of the achieved performance. The occlusion introduced by the headset could be offset by using a pass-through view to approximate a view of the real world, enabling interactions with note-taking devices.

Similarly, this trial marked many participants’ first exposure to such a highly immersive technology. This novelty may have generated increased enthusiasm and curiosity, thus mitigating other potential drawbacks and maintaining an elevated level of engagement. The excitement was particularly evident in four trials, where the users requested to revisit the VR reconstruction for further assessment and note-taking after the evaluation. Despite being unaccustomed to virtual reality, the users evaluated the system as easy to operate (F1–F4). The ability to present complex data intuitively is essential to foster clear communication amongst stakeholders and fully exploit the available data. Compared to the systems listed in the literature, which predominantly use PC-based virtual reality (PCVR), the presented approach envisages the utilisation of a commercial-grade standalone HMD. This hardware choice aims to minimise technology disruption, which would otherwise risk rejection by users (Sheppard et al. 2020; Tipping et al. 2014).

Ultimately, questions F7–F9 aimed at assessing the subjective workload. The users rated the task as moderately mentally demanding with low physical exertion. The former result is not unexpected, considering that scene examination is a taxing activity requiring a considerable cognitive workload. As the test constituted a component of a proficiency assessment, the exercise may have placed additional pressure on the participants owing to the subsequent evaluation. Notably, one participant declared that this methodology was highly taxing. The analysis of the complete responses revealed that the degradation in performance occurred when the VR representation caused participants to change their minds away from an originally correct hypothesis. These results, therefore, need to be interpreted cautiously, being mindful that graphical and immersive documentation tools may have been perceived as more persuasive, potentially bestowing undue weight on this medium (Bennett et al. 1999).

5 Conclusion

This paper presented a novel end-to-end procedure for digitising indoor scenes, focusing on a fire scene as a test case. The intent was to explore whether the exploitation of imaging methodologies, jointly with novel visualisation technology, would provide added value in challenging scenarios and facilitate decision-making in forensic strategy scene briefings. The proposed framework is source-agnostic, allowing the integration of spatial reconstructions from distinct acquisition methods. To demonstrate its viability, we developed a suite of tools to calibrate 3D reconstructions, enabling agile integration of SfM photogrammetry and effective exploitation of the source data. The creation of the digital replica required limited overhead time, making the presented approach suitable for employment at the early stages of an investigation. Ultimately, the efficacy of the approach was confirmed through its implementation during an inter-agency collaborative exercise. The trials focused on the performance evaluation of a team of forensic science practitioners, whose task was to examine an example scene using traditional documentation (2D photographs) and then to integrate the examination with the proposed VR method. The addition of immersive virtual navigation provided augmented spatial information compared to traditional 2D photographs.