Abstract
When attending a crime scene, first responders are responsible for identifying areas of potential interest for subsequent forensic examination. This information is shared with the police, forensic practitioners, and legal authorities during an initial meeting of all interested parties, which in Scotland is known as a forensic strategy meeting. Swift documentation is fundamental in allowing practitioners to learn about the scene(s) and to plan investigative strategies, traditionally relying on word-of-mouth briefings using digital photographs, videos, diagrams, and verbal reports. We suggest that these early and critical briefings can be positively augmented by implementing an end-to-end methodology for indoor 3D reconstruction and subsequent visualisation through immersive Virtual Reality (VR). The main objective of this paper is to provide an integrative documentation tool to enhance the decision-making processes in the early stages of the investigation. Taking a fire scene as an example, we illustrate a framework for rapid spatial data acquisition of the scene that leverages structure-from-motion photogrammetry. We developed a VR framework that enables the exploration of virtual environments on a standalone, low-cost immersive head-mounted display. The system was tested in a two-phased inter-agency fire investigation exercise, where practitioners were asked to produce hypotheses suitable for forensic strategy meetings by (1) examining traditional documentation and then (2) using a VR walkthrough of the same premises. The integration of VR increased the practitioners' scene comprehension, improved hypothesis formulation with fewer caveats, and enabled participants to sketch the scene, in contrast to the orientation challenges encountered when using conventional documentation.
1 Introduction
When an alleged criminal incident occurs in Scotland, the incident is initially reported to the police, who gather preliminary information and contact the Crown Office and Procurator Fiscal Service (COPFS). At this stage, COPFS will lead the investigation and, in collaboration with the police, will typically call for intelligence from the first responders attending the scene. A series of forensic strategy meetings will inform the investigative team, enabling decisions on the appropriate deployment of practitioners. In complex situations, first responders and investigators may also call for the assistance of forensic science specialists in the early stages of the investigation (ACPO 2006). The multi-skilled team needs to become familiar with the scene of interest at the earliest stages of the investigation. Confidently making scene-related strategic decisions is imperative against a backdrop of environmental and time constraints. Limiting access to the crime scene is necessary to minimise the risk of contamination, while the availability of experts for scene attendance may also be challenging (Streefkerk et al. 2009). When specialists are locally unavailable, the cost and feasibility of bringing in more remote experts may hamper their attendance. From the inception of an investigation, stakeholders with different backgrounds, namely prosecutors, police investigators, police liaison officers, crime scene investigators, forensic scientists, and other specialist practitioners, need to work together. This collaborative effort relies on effective communication of scientific material, which may be hindered by unclear reporting (Howes 2017). All those involved may benefit from direct access to the scene. Nonetheless, examination protocols establish access rights and the sequencing of experts, intending to optimise evidence retrieval whilst minimising the risk of contamination.
Decisions regarding the investigation strategy rely extensively on the scene documentation provided by first responders and investigators who may have undertaken an initial scene ‘walk through’. These early access and examination sequencing decisions are essential to successfully detect, record and recover traces underpinning subsequent hypothesis formulation throughout the investigation. The training and experience of the crime scene examiners, in conjunction with the tools at their disposal and the temporal availability of the scene, decisively affect the collection and transfer of information, which, in turn, impacts the strategy adopted for the scene examination (Baber and Butler 2012; Kelty et al. 2011). Prompt documentation of the scene is thus crucial and commonly supported by sketching, photography and occasionally videography, which have proven to be effective scene-processing methods for over a century (Fish et al. 2013).
1.1 Related works
Two primary aspects characterise crime scene documentation and reporting: recording spatial data using sensors with established measurement uncertainty, and imaging and visualising the acquired data using validated procedures. Previous studies evaluated 3D digitalisation technologies in indoor environments (Kang et al. 2020; Pintore et al. 2020), including but not limited to photogrammetry (Perfetti et al. 2017), laser scanning (Lehtola et al. 2017; Maboudi et al. 2017) and structured light (Wang et al. 2019), in a variety of fields. These recording methodologies have been featured in studies involving the recording of crime scenes for forensic purposes (Luchowski et al. 2021; Galanakis et al. 2021). A relatively small body of literature is concerned with indoor scene reconstruction (Galanakis et al. 2021; Sansoni et al. 2011; Larsen et al. 2021). For example, de Leeuwe (2017) reported that traditional crime scene documentation methods did not always provide enough contextual information relating to the positioning of evidentially valuable traces, and demonstrated an approach using a combination of digital acquisition methods to deliver comprehensive scene information. Similarly, spatial understanding and contextualisation are paramount in fire scenes, where the location of fire patterns must be correctly understood (Almirall and Furton 2004). However, much of the research to date has not addressed the recording of these specific scenes. Fire scenes can be challenging to record because of the nature of the environment, which calls for the appropriate selection of recording methodology. Two suitable documentation methods are photogrammetry and lidar scanning (Perfetti et al. 2017; Remondino et al. 2005). Photogrammetry performs well with images captured in burnt environments, including poorly lit ones where a tripod may be necessary to capture long-exposure photographs (Yu et al. 2023).
Lidar scanners provide an alternative not reliant on ambient lighting conditions. Strategies aimed at minimising contamination while ensuring the expected standard of scene documentation include employment of standard operating procedures (SOPs) and validated protocols (National Research Council 2009; Doak and Assimakopoulos 2007; Raneri 2018). The effectiveness of data obtained from any scene imaging method relies on appropriate visualisation techniques. Ebert et al. (2014) argue that traditional 2D displays suffer from some information loss caused by the 2D projection of captured 3D data. Tredinnick et al. (2019) pointed out that forensic practitioners preferred 2D software despite having the capability of operating in 3D. These cases corroborate the need for proper visualisation techniques. Recent developments in 3D rendering and hardware solutions may help address these issues. To operationally implement data of increasing size and complexity, dedicated rendering techniques (Dietrich et al. 2007; Karis et al. 2021; Luebke et al. 2003) and data transformation (Estellers et al. 2015) are required. Immersive Virtual Reality may represent a feasible solution to achieve adequate and intuitive visualisation of 3D environments. Modern head-mounted displays offer an elevated level of immersion and convey a great sense of presence (Ebert et al. 2014; Schuemie et al. 2001; Slater and Wilbur 1997; Bowman and McMahan 2007). Immersion is an essential factor in the sensory fidelity of virtual environment visualisation and plays a significant role in creating a solid sense of ‘presence’ in the virtual space (Darken et al. 2014). Such attributes have demonstrated an enhancement in environmental perception and spatial memory (Reichherzer et al. 2018; Ruddle et al. 2011; Reichherzer et al. 2021), as well as retention of memories (Mania et al. 2003). The scientific literature shows that Virtual Reality (VR) is not new to forensic science. 
Previous studies examined the implementation of this technology as an instrument to support a variety of applications, including but not limited to witness interrogations (Sieberth et al. 2019; Sieberth and Seckiner 2023), courtroom presentations (Ma et al. 2010; Sevcik et al. 2022; Sieberth et al. 2021; Reichherzer et al. 2022; Wang et al. 2019), educational purposes (Kader et al. 2020; Khalilia et al. 2022; Jani and Johnson 2022; Drakou and Lanitis 2016), forensic examination of injuries (Koller et al. 2019) and evidence (Guarnera et al. 2022), and training of forensic practitioners (Mayne and Green 2020; Karabiyik et al. 2019). However, much of this research has not considered virtual reality as an operational tool for forensic investigations. The potential use of VR to augment, enhance or replace a limited set of 2D photographs of a scene merits exploration. This technology is particularly valuable where the available images fail to provide a complete picture of the investigated environment, impeding the development of scientifically robust hypotheses.
This paper extends previous work that focused on the feasibility of 3D digital reconstruction for visualisation in VR. We demonstrate how standard photographic equipment currently in use by crime scene investigators (CSIs) in the United Kingdom (Robinson and Richards 2010; PICS 2019), in conjunction with structure-from-motion (SfM) software, can produce VR representations of the scene of interest. The specific objective of this study was to explore streamlining the documentation process to enhance decision-making at the early stages of a forensic investigation. Using a fire scene as a case study, we present the design and implementation of a digitalisation pipeline and visualisation application. The VR case study was evaluated by fire investigators from two organisations as a briefing tool for a forensic strategy meeting, against traditional 2D photography of the same scene.
The use of an immersive VR solution capitalises on well-established consumer-grade technologies to document the sites of interest. This enables the provision of high-resolution reconstructions to local and remote experts whilst limiting the financial cost. In summary, our approach consists of:
- The application of a standard operating procedure to image indoor fire scenes exploiting structure-from-motion photogrammetry.
- The development of the visualisation software for standalone VR headsets, to allow remote practitioners to visualise the digital scans promptly.
- The assessment of the proposed workflow as an integrative tool supplied to forensic science practitioners at forensic strategy meetings (FSMs).
2 Methods
2.1 Data acquisition strategy
For traces to be considered reliable and pertinent for inclusion in an investigation, they must satisfy minimum standards of authenticity, integrity, validity, and quality. These criteria are essential to ensure the traces are trustworthy and can confidently support the investigation. Data backing an FSM, including images, must meet similar requirements, with considerable attention to the timeliness of acquisition. A simulated fire scene was built in a domestic dwelling in the Guldborgsund municipality (Denmark) to test the proposed imaging-to-VR methodology in a realistic scenario. Experts from the Special Crime Unit of the Danish Police (Nationale Kriminaltekniske Center, NKC) designed, executed and curated the scene setup, portraying a complex and plausible scenario (Fig. 1). Operating in a controlled environment ensured a complete understanding of the events, enabling comparison of the hypotheses against ground truth data (Reichherzer et al. 2018).
A range of image capture devices (Table 1) was used to record the room of interest (Fig. 2) on the same day as the fire was extinguished, ensuring minimal scene variation.
The acquisition methods produced data in the format of a dense point cloud (RTC360) and a 3D mesh (DSLR). Although the Matterport camera is technically capable of producing similar spatial data, specialised staff operated it to capture the 360° panoramic images used as briefing material for the workflow assessment. Several digitalisation techniques can enable the creation of 3D meshes, including but not limited to photogrammetry from traditional stills and 360° images (Barazzetti et al. 2018), videogrammetry, lidar scanners and structured light scanners. To construct a systematic workflow for the acquisition and integration of the spatial data, we selected a standard operating procedure employing fisheye lenses for rapid and systematic documentation of indoor scenes (Yu et al. 2023), aimed at exploiting structure-from-motion photogrammetry. Figure 3 presents the overall data flow.
The photogrammetric image acquisition of the scene produced 402 RAW images using a Canon EOS 5D Mark III with a Canon EF8-15 mm f/4 L fisheye USM lens (Rinaldi et al. 2021). The camera operated in Aperture Priority mode with an F-stop value of f/10, a constant ISO speed of 100 and a variable shutter speed established by the camera. Upon acquisition completion, the photographs were downloaded from the camera memory card onto a laptop and then transferred through a Virtual Private Network (VPN) onto a secure repository hosted in a trusted environment. Next, the photographs were converted into jpg files using Adobe Bridge (version 12.0.3.270, Adobe Inc., San Jose, California, USA) to facilitate photogrammetric computation. The 3D geometry and textures were generated on a workstation within the same secure network. The photogrammetric reconstruction was carried out in Agisoft Metashape (version 1.8.1.13915, Agisoft LLC, St. Petersburg, Russia) using the default “High” settings for alignment and mesh generation. Subsequently, the mesh was decimated to the target face count (750k) to meet the requirements of the chosen VR headset (Meta 2023). The resulting meshed model was scaled using the “Scale Bars” tool in Agisoft Metashape against measurements taken from the RTC360 laser scan. The report generated by the photogrammetric software indicated a residual error of 2.6 mm, obtained as the RMS of the standard deviations of four control scale bars. The 3D mesh was exported as an FBX file and imported into the 3D engine Unity (version 2021.3.5f1 LTS, Unity Technologies, Copenhagen, Denmark). Subsequently, the virtual reconstruction was packaged into a Unity Addressables catalogue and published to the dedicated private cloud storage. The repository was housed within the same trusted environment and accessed by the dedicated VR software.
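The aggregate scaling residual reported above is the root mean square of the per-bar standard deviations. A minimal sketch of that aggregation is shown below; the four values are hypothetical, chosen only to illustrate the computation, and are not the figures produced by the software report.

```python
import math

def rms(values):
    """Root mean square of a list of residuals (same units as the input)."""
    return math.sqrt(sum(v * v for v in values) / len(values))

# Hypothetical per-bar standard deviations (mm) for four control scale bars;
# in practice these come from the photogrammetric software's processing report.
scale_bar_sd_mm = [2.1, 2.9, 2.4, 2.8]
print(round(rms(scale_bar_sd_mm), 1))  # → 2.6
```

Because the RMS weights larger deviations quadratically, a single poorly constrained scale bar dominates the reported residual, which is why several well-distributed control bars are used.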
Table 2 presents a breakdown of timings for the data acquisition and computation phases. Whilst data transfer was highly dependent on the internet bandwidth available at the incident location (b) along with the connection speed within the trusted environment (e), the generation timeframe using photogrammetry varied significantly according to the software version and hardware used (Appendix A, Table 10).
2.2 VR framework development
To achieve spatial data visualisation, we developed a dedicated framework, Crime Scene VieweR (CSVR). Image integrity was preserved by performing all subsequent image processing on copies of the original pictures acquired at the scene. We developed an array of calibration tools to ascertain the geometrical fidelity (i.e. scale, positioning, and orientation) of the virtual reconstructions. The framework was developed with the Unity engine and consisted of two elements: a collection of editing tools and the executable application for navigating the reconstructions. The first element streamlined the data preparation stage, granting independence from the digitalisation method. This component consisted of a custom Unity script providing an interface between the acquisition technique of choice and the visualisation app. The script facilitated the importation of the 3D data, adjusting the spatial data through geometric transformations, namely rigid translation and scale correction. Should a photogrammetric method provide the source data, a dedicated script enables the automatic extraction of the source cameras, indicating the original location of each photograph with a placemark (Fig. 4). This feature aimed to supply the expert examiners with contextual information, presenting the original ground truth data to confirm or refute the model geometry in the VR representation.
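The translation and scale correction applied by the editing tools amount to a simple per-vertex transformation. The sketch below illustrates the idea with NumPy; the function name, vertex values and correction factors are hypothetical, and the actual CSVR tools operate on Unity transforms rather than raw arrays.

```python
import numpy as np

def fit_to_reference(points, translation, scale):
    """Apply a uniform scale followed by a rigid translation to N x 3 vertices.

    `points` is an (N, 3) array of mesh vertices; `scale` is the uniform
    correction factor (e.g. derived from laser-scan reference measurements)
    and `translation` shifts the model into the shared scene frame.
    """
    points = np.asarray(points, dtype=float)
    return points * scale + np.asarray(translation, dtype=float)

# Hypothetical example: a 1%-undersized model nudged 0.5 m along x.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 0.0]])
fitted = fit_to_reference(verts, translation=[0.5, 0.0, 0.0], scale=1.01)
print(fitted)
```

Scaling before translating matters: applying the operations in the opposite order would also scale the offset, displacing the model from its intended position in the scene frame.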
Following the model placement, a catalogue collated the fitted data with relevant descriptive attributes, including the registration method and parameters used during the generation process. The structured data was moved to the custom storage system within the trusted environment to serve the authorised applications.
The second element of this framework was the VR navigation tool. The proposed solution targeted the OpenXR standard for Unity and was deployed on the standalone head-mounted display (HMD) Meta Quest 2 (https://www.meta.com) to confirm the feasibility of the proposed system. The headset featured a fast-switch LCD (Kim et al. 2022) with a resolution of 3664 × 1920 (773 PPI) and a 90 Hz refresh rate. The devised software enabled intuitive inspection of the virtual environments stored on the repository. Upon initiation, the VR application fetched the latest reconstructions available. Based on the taxonomy outlined in Boletsis and Cedergren (2019), we implemented two locomotion techniques to achieve effective navigation in VR, namely roomscale-based (real-walking) and teleportation-based (point-and-teleport). In the case of distributed teams, users could perform a collaborative examination by establishing a client-host connection via the provisioned menu, relying on a local or remote catalogue of digitalised environments. Figure 5 reveals the distributed server-client architecture.
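Point-and-teleport locomotion typically casts a ray from the controller and moves the user only if the ray lands on walkable floor within range. The sketch below illustrates that validity check as plain ray-plane geometry in Python; the parameter values are illustrative assumptions, not taken from the CSVR implementation, which performs the equivalent raycast against the scene mesh inside Unity.

```python
def teleport_target(origin, direction, floor_y=0.0, max_range=8.0):
    """Return the teleport destination on a horizontal floor plane, or None.

    `origin` and `direction` describe the controller ray as (x, y, z) tuples.
    The destination is accepted only if the ray points downward and the hit
    point lies within `max_range` (horizontal distance) of the user.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dy >= 0:                      # ray must point down to reach the floor
        return None
    t = (floor_y - oy) / dy          # parametric distance along the ray
    hit = (ox + t * dx, floor_y, oz + t * dz)
    dist = ((hit[0] - ox) ** 2 + (hit[2] - oz) ** 2) ** 0.5
    return hit if dist <= max_range else None

# Controller at head height (1.6 m), pointing forward and down at 45 degrees.
print(teleport_target((0.0, 1.6, 0.0), (0.5, -0.5, 0.0)))
```

Capping the teleport distance keeps jumps short, which is one of the reasons point-and-teleport is commonly preferred over continuous artificial locomotion for users prone to simulator sickness.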
2.3 A case study: VR as an integrative briefing tool for forensic strategy meetings
The goal of this study was to investigate the feasibility of the proposed VR system in enabling confident decision-making by providing an operational advantage within the scenario of a forensic strategy meeting. We evaluated the effectiveness of the implementation of VR at the early stages of a fire scene investigation by analysing the hypotheses considered by practitioners when navigating the premises of interest. Additionally, the system usability was evaluated via a questionnaire provided to participants at the end of each trial session. Concerning fire scenes, the European Firestat Project (European Commission 2022) identified several variables highly influential to decision-making (Table 3). Similarly, the Scottish Police Authority Forensic Services (SPA-FS) routinely perform exercises that target fire investigation casework, hinging on the identification of factors akin to those outlined in the Firestat report (marked with an asterisk in Table 3), in addition to fire development and burn patterns.
Training and competency testing of fire investigators are particularly challenging, owing to the costs and organisational effort required to create realistic test settings. In addition, contamination and degradation of the scenes over time critically hamper the repeatability of testing. Our study involved a collaborative exercise relating to the provision of briefing material to an FSM. The task was in two phases, the first using conventional 2D photography of the scene and the second using VR. Figure 6 summarises the data preparation for the exercise.
The first part of the collaborative exercise tested the practitioners' ability to identify the origin, cause, and development of the fire based on a traditional briefing using notes and photographs of the scene. The list of questions formed the core of a competency test drawn up by the SPA-FS Forensic Operation Lead and Lead Forensic Scientist (Chemistry). Phases 1 and 2 of the exercise incorporated the same set of questions (Table 4) to gauge the performance of the practitioners. The second part of the exercise took place four weeks later, when the same participants were exposed to the scene using VR and again completed the questionnaire. Responses from the two phases were anonymised and compared by SPA-FS Quality Leads and Lead Forensic Scientists.
The material supplied in phase 1 comprised a briefing paper containing textual notes (Appendix A, Table 8) together with six photographs of the scene and a house plan (Appendix A, Fig. 10). The exercise setters cropped the Matterport 360° panoramic images to obtain stills conforming to usual documentation standards, with a printed size of 15.92 × 8.87 cm and a resulting resolution of 1280 × 720. The fire investigators answered and submitted the questionnaire within one hour of the start of the exercise, echoing traditional FSMs. Four weeks later, the second part of the exercise was undertaken using the VR reconstruction as the briefing material. All the participants stated that they had no previous VR experience or had only used comparable headsets a few times for entertainment. At the beginning of each trial, each participant provided informed consent and was instructed in the operation of the VR headset and its controls. The introduction enabled the users to gain confidence with the two locomotion systems, accounting for the first ten minutes of the trial. The participants practised with the tool by freely moving within the virtual scene to be analysed.
The VR setup used a build of the virtual reality application with the 3D reconstruction embedded locally on the target VR HMD (Meta Quest 2). The source photographs were disabled to minimise visual clutter and facilitate understanding of the 3D reconstruction. The headset operated within a walkable area of 7.00 × 4.50 m², streaming video to an adjacent laptop monitored by an experienced fire investigator, who facilitated the tests and recorded the answers to the questionnaire provided by the practitioner as they explored the VR space (Fig. 7).
The shared view enabled the recording of statements whilst the investigator examined the scene. After fifty minutes, the participant and their facilitator reviewed the transcript and provided a final report. The participants then answered a questionnaire (Table 5) evaluating their VR user experience (Finstad 2010) and the appearance of the reconstruction (texture and geometry quality), together with a subset of questions taken from NASA-TLX (Hart and Staveland 1988) to probe subjective workload. The questions used a 5-point Likert scale, with 1 representing “Strongly Disagree” and 5 “Strongly Agree”.
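Summarising such Likert items typically reduces to the mean and standard deviation per question. The sketch below shows that computation with Python's standard library; the fifteen response values are hypothetical and do not reproduce the study's actual data, which are presented in Fig. 9.

```python
from statistics import mean, stdev

# Hypothetical 5-point Likert responses from 15 participants to one
# questionnaire item (1 = Strongly Disagree, 5 = Strongly Agree).
responses = [5, 4, 5, 4, 4, 5, 3, 5, 4, 5, 4, 4, 5, 4, 5]

print(round(mean(responses), 2))   # central tendency of the item
print(round(stdev(responses), 2))  # sample standard deviation (spread)
```

Because Likert data are ordinal, the median and interquartile range (as shown by a box plot) are often reported alongside the mean and standard deviation, which is consistent with presenting both a box plot and a summary table.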
3 Results
A qualified checker from SPA-FS assessed the completed questionnaires submitted at the end of the two exercises. Questions Q1 and Q4 in Table 4 addressed the potential value provided by using VR (phase 2) over the conventional photographs (phase 1) of the scene. These questions specifically asked the participant to make expert judgements on the area(s) of interest from an investigative perspective (Q1) and on the potential origin, cause and spread of the fire (Q4), which required an evaluation of the burn and fire patterns remaining within the room. A participant's response received a score of 1 if it positively addressed the question and included at least one anticipated hypothesis listed in Table 4; otherwise, a score of 0 was assigned. Table 6 reports the resulting score for each participant.
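The binary scoring rule can be sketched as follows. Note that the actual assessment was an expert judgement by a qualified checker; the keyword match below is only a toy stand-in for that judgement, and the anticipated hypotheses shown are hypothetical examples rather than the entries of Table 4.

```python
def score_response(response, anticipated):
    """Binary score: 1 if any anticipated hypothesis appears in the response.

    A crude keyword check standing in for the expert's judgement of whether
    the answer positively addresses the question.
    """
    text = response.lower()
    return 1 if any(h.lower() in text for h in anticipated) else 0

# Hypothetical anticipated hypotheses for the area of origin.
anticipated_origin = ["sofa", "armchair"]
print(score_response("The fire likely originated at the sofa.", anticipated_origin))  # 1
print(score_response("The origin could not be determined.", anticipated_origin))      # 0
```

Summing these per-question scores across participants yields the counts compared between the two phases in Table 6.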
Using traditional documentation methods, all fifteen fire investigators correctly identified one or more areas of interest. The responses could be categorised into the following groupings.
- H1: Can you identify any area of interest for further examination or excavation?
- H2: Do you feel able to competently propose a hypothesis for the origin of the fire?
- H3: Do you feel able to competently propose a hypothesis for the cause of the fire?
- H4: Do you feel able to competently propose a hypothesis for fire development?
During phase 1, all participants correctly identified the areas deemed worthy of further investigation (H1). Seven participants (47%) produced the correct hypothesis regarding the area of origin of the fire (H2). Two investigators (13%) correctly identified the cause (H3), whereas six participants (40%) adequately reconstructed the fire development (H4). During phase 2, the VR tool again enabled all participants to correctly identify the areas worthy of further examination or excavation (H1). Ten participants (67%) correctly identified the area of origin of the fire (H2), and three investigators (20%) inferred the cause of the fire (H3). Nine participants (60%) accurately reconstructed the fire development (H4). The scatter plot in Fig. 8 maps the proportions of successful hypothesis formulation for phase 1 (photographs) and phase 2 (VR). Both reporting methodologies facilitated the accurate identification of areas of investigative interest (H1) by all participants. The VR tool was markedly better at facilitating the fire investigators' ability to suggest the area of origin (H2) and to explain the fire development (H4).
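The percentages above follow directly from the counts of correct responses out of the fifteen participants, as the short computation below verifies:

```python
# Counts of correct responses out of 15 participants: (phase 1, phase 2).
counts = {"H1": (15, 15), "H2": (7, 10), "H3": (2, 3), "H4": (6, 9)}
n = 15

for hypothesis, (phase1, phase2) in counts.items():
    # Format each proportion as a whole-number percentage.
    print(f"{hypothesis}: {phase1 / n:.0%} -> {phase2 / n:.0%}")
```

This prints the phase-1 versus phase-2 success rates per hypothesis (e.g. H2: 47% -> 67%), matching the figures reported in the text.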
As shown in Table 6, four more participants were able to provide an estimation of the area of origin using VR, whereas one participant (P13) who had previously located the area of origin using the photographs did not do so using VR. Similar improvements were seen when comparing the data relating to fire spread where four more participants were able to accomplish this using VR and one previously correct result (P8) was not able to do this using VR. Probably the most difficult task, determining the fire cause (H3), demonstrated an improvement from two participants in phase 1 to three participants in phase 2, with one participant changing their view (P6).
Figure 9 presents a box plot of the scores assigned by the participants in the user feedback questionnaire (Table 5). The usability metric indicated that the VR system met the needs of participants (F1), and the experience was judged positively (F2). The application scored highly for ease of use (F3), and participants found its operation straightforward (F4). Participants suggested that texture quality (F5) and image clarity could be improved at higher resolutions, whereas the geometry of the 3D reconstruction performed significantly better (F6). Question F7 revealed that participants perceived the task to have a moderately low level of mental effort, whilst question F8 indicated it was not physically demanding. Tests addressing this issue should enable the user to meet performance expectations independently, for instance, using an integrated system for taking notes within VR or employing a Dictaphone to record reflections when processing the scene. The VR application was judged extremely positively for not inducing stress or annoyance (F9).
Left diagram: box plot representing participants' responses to the post-trial test from Table 5. Right table: mean and standard deviation of participants' evaluations
4 Discussion
The results show the potential of integrating new methods of documenting and visualising crime scenes into forensic investigations. We noted an improvement in hypothesis formulation (H2-H4) when the 3D reconstruction of the spaces was presented in virtual reality, suggesting that decision-making was bolstered relative to traditional reporting methods. This result is consistent with previous VR visualisation of crime scenes for jury viewings (Reichherzer et al. 2018), which reported increased environmental perception. VR navigation influenced the formulation of the hypotheses, allowing users to integrate additional details into their findings or challenge previous conclusions. The visual quality of the system scored well in the user feedback questionnaire (Table 5, F5-F6), although participants called for enhanced texture resolution. There are two likely causes for this result. Firstly, the SOP promoted the acquisition of images using a fisheye lens, maximising coverage and overlap within the set of photographs; the disadvantage of such a large field of view (FOV) for photogrammetric processing is a reduction in texture resolution. Secondly, fire scenes are particularly suitable for photogrammetry, owing to the multitude of distinct features provided by burn patterns and sooty surfaces. Additional research is needed to better understand the effects of various types of indoor scenes and their suitability for different trace analyses (e.g. blood pattern analysis). During the trials, several notable effects were observed. Two forensic practitioners produced additional sketches at the end of the VR exercise (Appendix A, Fig. 11). They could recall the layout of the examined room by drawing a top-down perspective of the scene, outlining the burn patterns, the objects, and the areas of interest for further investigation. These episodes emphasised an overall improvement in spatial understanding between the two test cases.
This result contrasted with the difficulties mentioned at the end of the first phase, when participants attempted to infer the position of some objects using photographs and floor diagrams alone. Specifically, one participant commented in their answers, echoing the findings of a previous study (Reichherzer et al. 2021): "I am also having difficulty in orientating myself based on the diagram drawn".
Participants' answers across the two exercises contained fewer caveats when they had navigated the virtual spaces. Further work is needed to assess the confidence level of their propositions, as higher confidence does not assure the correctness of hypotheses. The main limitation of the experimental design is the answer-recording methodology. As described in Sect. 2.3, users could not answer the questions firsthand when examining the scene in VR, owing to their separation from the surroundings. This visual constraint required a facilitator to transcribe the practitioner's oral examination of the scene while sharing a video stream from the user's point of view. Although the auxiliary staff member was deemed necessary, this represents the primary limitation of the presented approach. It is also important to consider the potential emotional arousal of the users: the stress induced by the presence of external examiners may be one of the driving factors of the achieved performance. The occlusion introduced by the headset could be offset by using a pass-through view to approximate a view of the real world, enabling interactions with note-taking devices. Similarly, this trial marked many participants' first exposure to such a highly immersive technology. This novelty may have generated increased enthusiasm and curiosity, mitigating other potential drawbacks and maintaining an elevated level of engagement. The excitement was particularly manifest in four trials, where the users requested to revisit the VR reconstruction for further assessment and note-taking after the evaluation. Despite being unaccustomed to virtual reality, the users evaluated the system as easy to operate (F1-F4). The ability to intuitively present complex data is essential to foster clear communication amongst stakeholders and fully exploit the available data.
Compared with the systems reported in the literature, which predominantly use PC-based virtual reality (PCVR), the presented approach employs a commercial-grade standalone HMD. This hardware choice aims to minimise technological disruption, which can otherwise lead to rejection by users (Sheppard et al. 2020; Tipping et al. 2014). Finally, questions F7-F9 assessed the subjective workload. The users rated the task as moderately mentally demanding with low physical exertion. The former result is not unexpected, considering that scene examination is a cognitively demanding activity. As the test formed part of a proficiency assessment, the subsequent evaluation may have placed the participants under additional pressure. Notably, one participant declared that the methodology was highly taxing. The analysis of complete responses revealed that performance degraded when the VR representation caused participants to move away from an originally correct hypothesis. These results therefore need to be interpreted cautiously, bearing in mind that graphical and immersive documentation tools may be perceived as more persuasive, potentially bestowing undue weight on this medium (Bennett et al. 1999).
5 Conclusion
This paper presented a novel end-to-end procedure for digitising indoor scenes, using a fire scene as a test case. The intent was to explore whether imaging methodologies, combined with novel visualisation technology, would add value in challenging scenarios by facilitating decision-making in forensic strategy scene briefings. The proposed framework is source-agnostic, allowing the integration of spatial reconstructions from distinct acquisition methods. To demonstrate its viability, we developed a suite of tools to calibrate 3D reconstructions, enabling agile integration of SfM photogrammetry and effective exploitation of the source data. Creating the digital replica required little overhead time, making the presented approach suitable for employment at the early stages of an investigation. Ultimately, the efficacy of the approach was confirmed through its implementation during an inter-agency collaborative exercise. The trials focused on the performance evaluation of a team of forensic science practitioners, whose task was to examine an example scene using traditional documentation (2D photographs) and then to supplement that examination with the proposed VR method. The addition of immersive virtual navigation provided richer spatial information than traditional 2D photographs alone.
Availability of data and materials
The data that support the findings of this study are available at the University of Dundee. Restrictions apply to the availability of these data, which were used under license for this study. Data are available from the authors with the permission of the University of Dundee and the organisations taking part in the Udgård project (Rinaldi et al. 2021).
References
ACPO (2006) Murder investigation manual
Almirall JR, Furton KG (2004) Analysis and interpretation of fire scene evidence. CRC Press. https://doi.org/10.1201/9780203492727
Baber C, Butler M (2012) Expertise in crime scene examination: comparing search strategies of expert and novice crime scene examiners in simulated crime scenes. Hum Fact 54(3):413–424. https://doi.org/10.1177/0018720812440577
Barazzetti L, Previtali M, Roncoroni F (2018) Can we use low-cost 360 degree cameras to create accurate 3d models? Int Arch Photogramm Remote Sens Spatial Inform Sci XLII-2:69–75. https://doi.org/10.5194/isprs-archives-XLII-2-69-2018
Bennett RB, Leibman JH, Fetter R (1999) Seeing is believing; or is it? An empirical study of computer simulations as evidence. Scholarship and Professional Work - Business 63. https://digitalcommons.butler.edu/cob_papers/63
Boletsis C, Cedergren JE (2019) VR Locomotion in the new era of virtual reality: an empirical comparison of prevalent techniques. Adv Human-Comput Interaction. https://doi.org/10.1155/2019/7420781
Bowman DA, McMahan RP (2007) Virtual reality: how much immersion is enough? Computer 40(7):36–43. https://doi.org/10.1109/MC.2007.257
Darken RP, Peterson B (2014) Spatial orientation, wayfinding, and representation. In: Handbook of virtual environments. September, CRC Press, p 493–518
Dietrich A, Gobbetti E, Yoon Se (2007) Massive-model rendering techniques: a tutorial. IEEE Comput Gr Appl 27(6):20–34. https://doi.org/10.1109/MCG.2007.154
Doak S, Assimakopoulos D (2007) How do forensic scientists learn to become competent in casework reporting in practice: a theoretical and empirical approach. Forensic Sci Int 167(2–3):201–206. https://doi.org/10.1016/J.FORSCIINT.2006.06.063
Drakou M, Lanitis A (2016) On the development and evaluation of a serious game for forensic examination training. In: 2016 18th mediterranean electrotechnical conference (MELECON). IEEE, pp 1–6, https://doi.org/10.1109/MELCON.2016.7495415, http://ieeexplore.ieee.org/document/7495415/
Ebert LC, Nguyen TT, Breitbeck R et al (2014) The forensic holodeck: an immersive display for forensic crime scene reconstructions. Forensic Sci Med Pathol 10(4):623–626. https://doi.org/10.1007/s12024-014-9605-0
Estellers V, Scott M, Tew K, et al (2015) Robust poisson surface reconstruction. In: Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol 9087. Springer, Cham, p 525–537, https://doi.org/10.1007/978-3-319-18461-6_42, http://link.springer.com/10.1007/978-3-319-18461-6_42
European Commission (2022) EU Firestat project: closing data gaps and paving the way for pan-European fire safety efforts: final report. Tech. rep., Publications Office of the European Union, https://doi.org/10.2873/778991, https://eufirestat-efectis.com/files/eufirestatproject-ET01223473AN1.pdf
Finstad K (2010) The usability metric for user experience. Interact Comput 22(5):323–327. https://doi.org/10.1016/j.intcom.2010.04.004
Fish JT, Miller LS, Braswell MC, et al (2013) Crime scene investigation, pp 1–433. https://doi.org/10.4324/9781315721910
Galanakis G, Zabulis X, Evdaimon T et al (2021) A study of 3D digitisation modalities for crime scene investigation. Forensic Sci 1(2):56–85. https://doi.org/10.3390/forensicsci1020008
Guarnera L, Giudice O, Livatino S et al (2022) Assessing forensic ballistics three-dimensionally through graphical reconstruction and immersive VR observation. Multimed Tools Appl. https://doi.org/10.1007/s11042-022-14037-x
Hart SG, Staveland LE (1988) Development of NASA-TLX (task load index): results of empirical and theoretical research. Adv Psychol 52(C):139–183. https://doi.org/10.1016/S0166-4115(08)62386-9
Howes LM (2017) Sometimes I give up on the report and ring the scientist: bridging the gap between what forensic scientists write and what police investigators read. Polic Soc 27(5):541–559. https://doi.org/10.1080/10439463.2015.1089870
Jani G, Johnson A (2022) Virtual reality and its transformation in forensic education and research practices. J Visual Commun Med 45(1):18–25. https://doi.org/10.1080/17453054.2021.1971516
Kader SN, Ng WB, Tan SWL et al (2020) Building an interactive immersive virtual reality crime scene for future chemists to learn forensic science chemistry. J Chem Educ 97(9):2651–2656. https://doi.org/10.1021/acs.jchemed.0c00817
Kang Z, Yang J, Yang Z et al (2020) A review of techniques for 3D reconstruction of indoor environments. ISPRS Int J Geo-Inform 9(5):330. https://doi.org/10.3390/ijgi9050330
Karabiyik U, Mousas C, Sirota D, et al (2019) A virtual reality framework for training incident first responders and digital forensic investigators. In: Lecture notes in computer science, pp 469–480, https://doi.org/10.1007/978-3-030-33723-0_38
Karis B, Stubbe R, Wihlidal G (2021) A deep dive into nanite virtualized geometry. Advances in Real-Time Rendering in Games course http://advances.realtimerendering.com/s2021/Karis_Nanite_SIGGRAPH_Advances_2021_final.pdf
Kelty SF, Julian R, Robertson J (2011) Professionalism in crime scene examination: the seven key attributes of top crime scene examiners. Forensic Sci Pol Manag: An Int J 2(4):175–186. https://doi.org/10.1080/19409044.2012.693572
Khalilia WM, Gombar M, Palkova Z et al (2022) Using virtual reality as support to the learning process of forensic scenarios. IEEE Access 10:83297–83310. https://doi.org/10.1109/ACCESS.2022.3196471
Kim C, Klement A, Park E et al (2022) High-ppi fast-switch display development for oculus quest 2 VR headsets. Digest Tech Papers–SID Int Symp 53(1):40–43. https://doi.org/10.1002/SDTP.15410
Koller S, Ebert LC, Martinez RM et al (2019) Using virtual reality for forensic examinations of injuries. Forensic Sci Int 295:30–35. https://doi.org/10.1016/j.forsciint.2018.11.006
Larsen H, Budka M, Bennett MR (2021) Technological innovation in the recovery and analysis of 3D forensic footwear evidence: structure from motion (SfM) photogrammetry. Sci Justice 61(4):356–368. https://doi.org/10.1016/J.SCIJUS.2021.04.003
de Leeuwe R (2017) The hiatus in crime scene documentation: visualisation of the location of evidence. J Forensic Radiol Imaging 8:13–16. https://doi.org/10.1016/j.jofri.2017.03.002
Lehtola VV, Kaartinen H, Nüchter A et al (2017) Comparison of the selected state-of-the-art 3D indoor scanning and point cloud generation methods. Remote Sens 9(8):1–26. https://doi.org/10.3390/rs9080796
Luchowski L, Pojda D, Tomaka AA et al (2021) Multimodal imagery in forensic incident scene documentation. Sensors 21(4):1407. https://doi.org/10.3390/s21041407
Luebke D, Reddy M, Cohen JD, et al (2003) Level of detail for 3D graphics. Elsevier, https://www.sciencedirect.com/book/9781558608382/level-of-detail-for-3d-graphics
Ma M, Zheng H, Lallie H (2010) Virtual reality and 3D animation in forensic visualization. J Forensic Sci 55(5):1227–1231. https://doi.org/10.1111/j.1556-4029.2010.01453.x
Maboudi M, Bánhidi D, Gerke M (2017) Evaluation of indoor mobile mapping systems. GFaI Workshop 3D North East 2017 (20th Application-oriented workshop on measuring, modeling, processing and analysis of 3D-data) pp 125–134
Mania K, Troscianko T, Hawkes R et al (2003) Fidelity metrics for virtual environment simulations based on spatial memory awareness states. Presence Teleoperators Virtual Environ 12(3):296–310. https://doi.org/10.1162/105474603765879549
Mayne R, Green H (2020) Virtual reality for teaching and learning in crime scene investigation. Sci Justice 60(5):466–472. https://doi.org/10.1016/j.scijus.2020.07.006
Meta (2023) Testing and performance analysis, oculus developers. https://developer.oculus.com/documentation/unity/unity-perf/
National Research Council (2009) Strengthening forensic science in the United States: a path forward. The National Academies Press, Washington, DC. https://doi.org/10.17226/12589
Perfetti L, Polari C, Fassi F (2017) Fisheye photogrammetry: tests and methodologies for the survey of narrow spaces. Int Arch Photogramm, Remote Sens Spat Inform Sci-ISPRS Arch 42(23):573–580. https://doi.org/10.5194/ISPRS-ARCHIVES-XLII-2-W3-573-2017
PICS (2019) PICS Working group-guidelines on photography. Tech. rep., PICS, https://fflm.ac.uk/wp-content/uploads/2020/03/PICS-Working-Group-Guidelines-on-Photography-Dr-Will-Anderson-Dec-2019.pdf
Pintore G, Mura C, Ganovelli F et al (2020) State-of-the-art in automatic 3D reconstruction of structured indoor environments. Comput Gr Forum 39(2):667–699. https://doi.org/10.1111/cgf.14021
Raneri D (2018) Enhancing forensic investigation through the use of modern three-dimensional (3D) imaging technologies for crime scene reconstruction. Aust J Forensic Sci 50(6):1–11. https://doi.org/10.1080/00450618.2018.1424245
Reichherzer C, Cunningham A, Walsh J et al (2018) Narrative and spatial memory for jury viewings in a reconstructed virtual environment. IEEE Trans Visual Comput Gr 24(11):2917–2926. https://doi.org/10.1109/TVCG.2018.2868569
Reichherzer C, Cunningham A, Coleman T, et al (2021) Bringing the jury to the scene of the crime: memory and decision-making in a simulated crime scene. In: Proceedings of the 2021 CHI conference on human factors in computing systems. ACM, pp 1–12, https://doi.org/10.1145/3411764.3445464, https://dl.acm.org/doi/10.1145/3411764.3445464
Reichherzer C, Cunningham A, Barr J, et al (2022) Supporting jury understanding of expert evidence in a virtual environment. Proceedings–2022 IEEE conference on virtual reality and 3D user interfaces, VR 2022 pp 615–624. https://doi.org/10.1109/VR51125.2022.00082
Remondino F, Guarnieri A, Vettore A (2005) 3d modeling of close-range objects: photogrammetry or laser scanning? In: Proceedings Volume 5665, Videometrics VIII, pp 216–225, https://doi.org/10.1117/12.586294
Rinaldi V, Nic Daeid N, Yu S, et al (2021) Udgård 2021 Raw Dataset. Tech. rep., University of Dundee, https://doi.org/10.15132/10000174
Robinson EM, Richards GB (2010) Crime scene photography. https://doi.org/10.1016/C2009-0-61082-5
Ruddle RA, Volkova E, Bülthoff HH (2011) Walking improves your cognitive map in environments that are large-scale and large in extent. ACM Transact Comput-Human Interact 18(2):1–20. https://doi.org/10.1145/1970378.1970384
Sansoni G, Cattaneo C, Trebeschi M et al (2011) Scene-of-crime analysis by a 3-dimensional optical digitizer a useful perspective for forensic science. Am J Forensic Med Pathol 32(3):280–286. https://doi.org/10.1097/PAF.0b013e318221b880
Schuemie MJ, van der Straaten P, Krijn M et al (2001) Research on presence in virtual reality: a survey. CyberPsychol Behav 4(2):183–201. https://doi.org/10.1089/109493101300117884
Sevcik J, Adamek M, Mach V (2022) Crime scene testimony in virtual reality applicability assessment. In: 2022 26th international conference on circuits, systems, communications and computers (CSCC). IEEE, pp 6–10, https://doi.org/10.1109/CSCC55931.2022.00010, https://ieeexplore.ieee.org/document/10017721/
Sheppard K, Fieldhouse SJ, Cassella JP (2020) Experiences of evidence presentation in court: an insight into the practice of crime scene examiners in England, Wales and Australia. Egypt J Forensic Sci. https://doi.org/10.1186/S41935-020-00184-5
Sieberth T, Seckiner D (2023) Identification parade in immersive virtual reality-a technical setup. Forensic Sci Int. https://doi.org/10.1016/J.FORSCIINT.2023.111602
Sieberth T, Dobay A, Affolter R et al (2019) Applying virtual reality in forensics–a virtual scene walkthrough. Forensic Sci Med Pathol 15(1):41–47. https://doi.org/10.1007/s12024-018-0058-8
Sieberth T, Seckiner D, Dobay A et al (2021) The forensic holodeck–recommendations after 8 years of experience for additional equipment to document VR applications. Forensic Sci Int 329:111092. https://doi.org/10.1016/j.forsciint.2021.111092
Slater M, Wilbur S (1997) A framework for immersive virtual environments (FIVE): speculations on the role of presence in virtual environments. Presence Teleoper Virtual Environ 6(6):603–616. https://doi.org/10.1162/pres.1997.6.6.603
Streefkerk JW, van Esch-Bussemakers M, Neerincx M (2009) Context-aware team task allocation to support mobile police surveillance. In: Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), pp 88–97, https://doi.org/10.1007/978-3-642-02812-0_11
Tipping R, Farrell G, Farrell V, et al (2014) From collection to courtroom: perceptions and realities of how the data flows. In: Proceedings of the 26th Australian computer-human interaction conference on designing futures: the future of design. ACM, New York, NY, USA, pp 107–110, https://doi.org/10.1145/2686612.2686626
Tredinnick R, Smith S, Ponto K (2019) A cost-benefit analysis of 3D scanning technology for crime scene investigation. Forensic Sci Int Rep 1(September):100025. https://doi.org/10.1016/j.fsir.2019.100025
Wang J, Li Z, Hu W et al (2019) Virtual reality and integrated crime scene scanning for immersive and heterogeneous crime scene reconstruction. Forensic Sci Int 303:109943. https://doi.org/10.1016/j.forsciint.2019.109943
Yu Sh, Thomson G, Rinaldi V et al (2023) Development of a Dundee Ground Truth imaging protocol for recording indoor crime scenes to facilitate virtual reality reconstruction. Sci Justice 63(2):238–250. https://doi.org/10.1016/j.scijus.2023.01.001
Funding
This research was funded by the Leverhulme Trust RC-2015-011.
Author information
Contributions
VR contributed to conceptualisation, data curation, formal analysis, investigation, software, visualisation, writing—original draft, and writing—review and editing. KAR contributed to investigation, resources, and supervision. GGS contributed to investigation. NND contributed to funding acquisition, investigation, resources, supervision, and writing—review and editing.
Ethics declarations
Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Ethical Approval
The study was ethically approved by the University of Dundee (UOD-DJCAD-2020-0234). All research participants gave their permission to be part of a study and were given pertinent information to make informed consent to participate.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Rinaldi, V., Robertson, K.A., Strong, G.G. et al. Examination of fire scene reconstructions using virtual reality to enhance forensic decision-making. A case study in Scotland. Virtual Reality 28, 57 (2024). https://doi.org/10.1007/s10055-024-00961-w