Community Survey Results Show that Standardisation of Preclinical Imaging Techniques Remains a Challenge

Purpose: To support the acquisition of accurate, reproducible and high-quality preclinical imaging data, various standardisation resources have been developed over the years. However, the impact of those efforts on current preclinical imaging practice is unclear. To better understand the status quo in the field of preclinical imaging standardisation, the STANDARD group of the European Society of Molecular Imaging (ESMI) put together a community survey and a forum for discussion at the European Molecular Imaging Meeting (EMIM) 2022. This paper reports on the results from the STANDARD survey and the forum discussions that took place at EMIM 2022.
Procedures: The survey was delivered to the community by the ESMI office and was promoted through the Society channels, email lists and webpages. The survey contained seven sections organised into generic questions and imaging modality-specific questions. The generic questions focused on issues regarding data acquisition, data processing, data storage, publishing and community awareness of international guidelines for animal research. Specific questions on practices in optical imaging, PET, CT, SPECT, MRI and ultrasound were also included.
Results: Data from the STANDARD survey showed that 47% of survey participants do not have, or do not know if they have, QC/QA guidelines at their institutes. Additionally, large variability exists in the ways data are acquired, processed and reported, regarding both general and modality-specific aspects. Moreover, there is limited awareness of the existence of international guidelines on preclinical (imaging) research practices.
Conclusions: Standardisation of preclinical imaging techniques remains a challenge and hinders the transformative potential of preclinical imaging to augment biomedical research pipelines by serving as a ready vehicle for translation of research findings to the clinic. Data collected in this project show that there is a need to promote and disseminate already available tools for standardising preclinical imaging practices.
Supplementary Information: The online version contains supplementary material available at 10.1007/s11307-022-01790-6.


Introduction
Major advances in instrumentation and technology over the past decades have enabled the development and rapid expansion of a whole new field in medical imaging: preclinical imaging. Research activities in this new field, using all major modalities, namely positron emission tomography (PET), computed tomography (CT), magnetic resonance imaging (MRI), single photon emission computed tomography (SPECT), ultrasound and optical imaging, have flourished and continue to expand [1][2][3]. Preclinical imaging techniques have been used to observe and understand pathophysiological processes in animal models of human disease [4,5] as well as to test new procedures, methods and probes [6,7]. They have been instrumental to pharmaceutical companies looking to discover and develop new therapeutic interventions [8]. Unfortunately, the generalised acceptance that preclinical imaging, unlike clinical imaging, does not aim to provide diagnostic information but rather aims at methodological developments has hindered efforts to deliver high-quality, reproducible, reliable and translatable information from preclinical imaging studies.
Albeit not focused on directly producing diagnostic information, preclinical imaging uses animal models to understand a disease's underlying biology and pathology. According to the latest European Union (EU) statistics, a large number of animals (10.4 million in 2019 alone in the EU) are used for medical research. With the rapid advent of preclinical imaging [4], many of these animals are part of research projects making use of preclinical imaging techniques, which in turn has a proven reducing effect on animal numbers, since repeated imaging of the same animals lessens the need for multiple subgroups to cover a disease development timeline [9]. Importantly, the ethical rule of the 3Rs (reduction, refinement and replacement) has existed since 1959 [10], and all reputable funding bodies and scientific journals require adherence to these ethical principles. Yet, it has been recognised that, more often than not, preclinical imaging standards fall short of those expected and used in the clinical setting [11,12]. This is an important problem that needs to be understood and addressed by the preclinical imaging community. To that end, the STANDARD group of the European Society for Molecular Imaging (ESMI) has put together a survey directed at the preclinical imaging community with three main aims: (1) to gather knowledge on the current state of the art of preclinical imaging quality control (QC), quality assurance (QA) and standardisation procedures routinely used at different sites; (2) to evoke discussion on the current status of preclinical imaging standardisation and on what is needed to increase the impact of preclinical imaging findings in the translational pipeline towards the clinic; and (3) to initiate a consensus community-led paper on best practice when collecting, analysing and publishing preclinical imaging data. This paper presents the results from the STANDARD survey and the key outcomes from the community discussions that took place during the STANDARD session of EMIM 2022.
Furthermore, here we provide structured guidance on already available resources to support best-practice collection, analysis and publication of preclinical imaging data.

Survey Structure and Questions
The STANDARD community survey contained seven sections organised as generic questions and imaging modality-specific questions.
The generic questions focused on understanding current practices regarding data acquisition, processing and management in the field of preclinical imaging. They also aimed to assess community awareness of key available international guidelines for animal research (i.e. the ARRIVE guidelines [13,14]), radionuclide imaging (i.e. the AQARA guidelines [15]) and data sharing practices (i.e. the FAIR guidelines [16]). Finally, the generic questions aimed to understand the status quo in publishing and sharing preclinical imaging data, as well as to capture the community's views on how important accreditation practices are for the preclinical imaging field.
Six different preclinical imaging modalities were included in the STANDARD survey. These were optical imaging, PET, CT, SPECT, MRI and ultrasound. Specific questions per modality were organised into six sections of the survey.

Survey Participants
A total of 151 colleagues working in the field of preclinical imaging participated in the STANDARD survey. The majority of those participating in the survey were principal investigators or group leaders (50%) followed by post-doctoral scientists (28%), PhD students (10%), technicians (8%) and service engineers (4%). Geographical distribution of survey participants varied across three continents (Europe, America and Asia) and included participants from Germany (25%); the UK and Belgium (11% per country); the Netherlands (10%); Spain and Italy (7% per country); France and the USA (6% per country); Switzerland and Austria (3% per country); Denmark, Israel and Norway (2% per country); and China, Croatia, Czech Republic, Greece, Japan, Poland and Ukraine (< 1% per country).
Survey participants were required to answer the generic questions, but could skip the modality-specific questions if that was not applicable to them.

Survey Data Collection and Analysis
The survey was delivered to the community by the ESMI office and was promoted through the Society channels, email lists and webpages. The STANDARD group also promoted this survey via social media and professional pages or mailing lists. SurveyMonkey (Momentive, USA) was used as the platform to post questions and collect results. The survey opened on the 10th December 2021 and closed on the 22nd January 2022. Data were extracted from SurveyMonkey as Excel and PDF exports. Graphs summarising all collected data were plotted using Prism 9.3 (GraphPad Software, USA).

EMIM 2022 STANDARD Session
On the 15th March 2022, during EMIM 2022 in Thessaloniki, Greece, the STANDARD study group session was structured as follows: (1) overview of survey aims and presentation of results from the generic questions; (2) presentation of results from each set of modality-specific questions; and (3) roundtable discussion to gather community feedback on the results presented and on future directions. Outcomes of this roundtable discussion were collated and are discussed in this paper.

Results
All responses to all questions in the STANDARD survey are presented in the Supplementary File 1. Personal data and contact information from survey participants were removed from Supplementary File 1 to comply with data protection requirements. A summary of the key findings is provided in the results sections below.

Outcome from Generic Survey Questions
Data from the STANDARD survey showed that 47% of survey participants do not have, or do not know if they have, QC/QA guidelines at their institutes (Fig. 1a). However, among those who have QC/QA guidelines at their institutes, the vast majority (69%) keep records of QC/QA performance of scanners, maintenance and system failures. When asked about the importance of preclinical imaging standardisation/accreditation and of reporting QC/QA results when publishing preclinical imaging data, survey participants rated these as neither essential nor necessary, with average scores of 2 and 2.7 out of 5, respectively.
With regard to the use of standard operating procedures (SOPs), 68%, 52% and 40% of participants said they used SOPs for acquisition, reconstruction and analysis, respectively. A total of 30% of survey participants stated they did not use SOPs for any part of their preclinical imaging studies. When publishing their preclinical imaging data, the majority of survey participants report acquisition parameters (96%), image analysis methods (92%) and reconstruction parameters (76%). Only a very small percentage (0.8%) report none of these parameters/methods.
Various approaches are used for handling and storing preclinical imaging data. The most common data formats, based on survey results, were DICOM (64%), scanner vendor-specific formats (60%), NIfTI (36%), BIDS (4%) and others (15%), including jpeg and tiff files. Most survey participants store their preclinical imaging data in central network storage facilities (73%), followed by NAS (15%), PACS (10%) or XNAT (5%). The remaining survey participants do not use central storage and instead rely on local hard drive storage. Most survey participants stated they archive their preclinical imaging data (79%), albeit 18% do so only on local hard drives, with only a small minority (3%) not archiving imaging data at all.
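BIDS, one of the formats mentioned above, is less a file format than a standardised directory layout for organising imaging data. As an illustrative sketch only (not the official BIDS specification, and using hypothetical subject/session names), a minimal BIDS-style layout for a preclinical study could be created like this:

```python
from pathlib import Path
import json
import tempfile

def make_bids_like_layout(root: Path, subject: str, session: str, modality: str) -> Path:
    """Create a minimal BIDS-style folder for one imaging session.

    Layout: <root>/sub-<subject>/ses-<session>/<modality>/
    A dataset_description.json at the root identifies the dataset.
    """
    root.mkdir(parents=True, exist_ok=True)
    desc = root / "dataset_description.json"
    if not desc.exists():
        desc.write_text(json.dumps({"Name": "Example preclinical dataset",
                                    "BIDSVersion": "1.8.0"}, indent=2))
    session_dir = root / f"sub-{subject}" / f"ses-{session}" / modality
    session_dir.mkdir(parents=True, exist_ok=True)
    return session_dir

# Example: an anatomical MRI folder for a hypothetical mouse "m01", baseline session.
root = Path(tempfile.mkdtemp()) / "study"
anat_dir = make_bids_like_layout(root, subject="m01", session="baseline", modality="anat")
print(anat_dir.relative_to(root))  # e.g. sub-m01/ses-baseline/anat
```

The point of such a layout is that image files and their metadata become discoverable by convention rather than by local lab knowledge, which directly supports the FAIR principles discussed later in this paper.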
The top three most used software packages for preclinical image analysis were ImageJ, Matlab and PMOD, followed by vendor-specific software packages and vivoQuant. AMIDE, Python, FSL and SPM are also popular among preclinical imaging users.
A total of 46% of survey participants were not aware of the ARRIVE guidelines and 2% did not think these guidelines are useful (Fig. 1b). Among those working with nuclear medicine techniques (PET and SPECT), half were not aware of the AQARA guidelines and a small minority did not think that the AQARA guidelines are useful (Fig. 1c). The vast majority of survey participants were not aware of the FAIR guidelines (Fig. 1d).

Optical Imaging
Nearly 35% of the survey participants declared use of optical imaging instrumentation for both bioluminescence and fluorescence acquisition, with 53% of those indicating that the instruments used at their centres undergo maintenance and calibration once a year on average. When reporting quantification of signals from acquired images, 74% of participants use photon fluxes, although 84% of participants also consider reporting relative differences between photon fluxes or average radiances an acceptable method for semiquantitative analysis of optical imaging datasets. Furthermore, 81% of survey participants consider it acceptable to use a luminescent standard as a reference for quantitative analysis of optical imaging datasets. Importantly, it should be noted here that photon fluxes and/or counts depend strictly on the light detector and the geometry of the instrumentation; therefore, comparison of datasets collected on different imaging instruments remains unachievable without the use of standards. Around 90% of participants declared that they are in favour of adopting luminescent standards and/or phantoms (preferably in a price range of €500-1000) as a means to standardise optical imaging protocols, compare instrument performance and compare datasets.
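The role of a luminescent standard described above can be made concrete with a small numerical sketch. This is an illustrative simplification (the function name and values are hypothetical, not from the survey): dividing the sample's measured flux by the same instrument's reading of a certified standard cancels detector- and geometry-dependent factors, so two instruments with different sensitivities yield comparable values.

```python
def normalise_to_standard(sample_flux: float, standard_flux_measured: float,
                          standard_flux_nominal: float) -> float:
    """Express a sample's photon flux relative to a luminescent standard.

    sample_flux            -- flux from the region of interest (photons/s)
    standard_flux_measured -- this instrument's reading of the standard
    standard_flux_nominal  -- certified output of the standard (photons/s)

    The ratio cancels instrument-specific sensitivity and geometry,
    making values comparable across instruments.
    """
    if standard_flux_measured <= 0:
        raise ValueError("standard flux must be positive")
    return sample_flux * standard_flux_nominal / standard_flux_measured

# Two hypothetical instruments imaging the same sample and the same standard:
# instrument A is twice as sensitive as instrument B, yet after
# normalisation both report the same sample value.
a = normalise_to_standard(sample_flux=2.0e6, standard_flux_measured=1.0e5,
                          standard_flux_nominal=5.0e4)
b = normalise_to_standard(sample_flux=1.0e6, standard_flux_measured=5.0e4,
                          standard_flux_nominal=5.0e4)
assert a == b  # both 1.0e6 photons/s in standard-referenced units
```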
Some survey questions were designed to inform the development of guidelines for reporting optical imaging experiments in scientific journals. Results from the survey showed that the community ranked highly the need to report the instrument used, the time of acquisition, filters, fields of view, camera aperture, dose of substrates, as well as description of the reconstruction parameters used for 3D analysis. This suggests that the imaging community, represented by the survey participants, felt that without proper description of imaging procedures in scientific articles, research methods and outputs are not reproducible.

PET Imaging
Nearly 50% of the survey participants indicated they use PET. Of those, 77% carry out scanner calibrations (quarterly, 26%; biannually, 23%; or annually, 28%). However, 3% did not perform regular scanner calibration and 19% responded that they did not know whether regular scanner calibration was performed (Fig. 2a). Moreover, the majority of users regularly perform scanner QC (34% daily, 20% weekly and 21% monthly). Yet, 5% reported performing no QC and 20% reported that they did not know (Fig. 2b).
Additionally, the majority of participants (57%) perform cross-calibration between the dose calibrator, PET scanner and/or gamma counter, but 16% responded that cross-calibration did not occur and 27% responded that they did not know if it occurred. On the days images are acquired, 62% of participants indicated performing a visual inspection (for artefacts), 47% a detector check and 27% a co-registration check with CT (where applicable). A total of 7% of participants reported performing none of these checks and 24% did not know (Fig. 2c).
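The cross-calibration mentioned above boils down to comparing the scanner-derived activity concentration of a phantom with the true concentration known from the dose calibrator, after correcting both to a common time point for radioactive decay. As a minimal sketch (hypothetical numbers; fluorine-18 half-life assumed), the calculation could look like this:

```python
import math

F18_HALF_LIFE_MIN = 109.77  # physical half-life of fluorine-18 in minutes

def decay_correct(activity: float, elapsed_min: float,
                  half_life_min: float = F18_HALF_LIFE_MIN) -> float:
    """Correct a measured activity back to a common reference time
    using A(0) = A(t) * 2^(t / T_half)."""
    return activity * 2.0 ** (elapsed_min / half_life_min)

def cross_calibration_factor(scanner_concentration: float,
                             true_concentration: float) -> float:
    """Ratio of scanner-derived to dose-calibrator-derived activity
    concentration (both decay-corrected to the same time).
    Ideally close to 1.0; a persistent deviation indicates the
    scanner calibration needs updating."""
    return scanner_concentration / true_concentration

# Hypothetical phantom experiment: dose calibrator reads 10.0 MBq in a
# 100 mL phantom at t = 0  ->  true concentration 0.1 MBq/mL at t = 0.
# The scanner reports 0.095 MBq/mL measured 30 min later; correct it
# back to t = 0 before comparing.
scanner_at_t0 = decay_correct(0.095, elapsed_min=30.0)
factor = cross_calibration_factor(scanner_at_t0, true_concentration=0.1)
print(f"cross-calibration factor: {factor:.3f}")
```

The same arithmetic applies to SPECT and gamma-counter cross-calibration; only the radionuclide half-life and measurement geometry change.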

SPECT Imaging
Survey results showed that 20% of survey participants work with preclinical SPECT devices. Most users (88%) use preclinical SPECT for in vivo imaging experiments, whereby the majority (85%) of the experiments involve quantitative imaging. Ex vivo quantitative SPECT imaging experiments are performed by 31% of the survey participants. To check quantitative accuracy, cross-calibration between the dose calibrator and SPECT scanner is performed by 65% of participants, most frequently every quarter (31%). Cross-calibration also includes the SPECT scanner versus a gamma counter for 46% of SPECT users.
Only 36%, 40%, 44% and 48% of survey participants use QC procedures such as photopeak drift checks, uniformity testing, collimator checks and multimodal registration, respectively. Nonetheless, yearly (manufacturer) maintenance of the SPECT scanner is performed by 73% of participants, and even more than once a year by 31% of participants. Only 4% of SPECT users do not perform regular maintenance on their SPECT system.
CT Imaging

When asked whether they follow any QA procedures, 63% of respondents either do not follow any procedure (32%) or do not know of/have never heard of one (32%) (Fig. 4a). Scanner maintenance by the manufacturer is performed once a year in most laboratories (35%), but a sizeable percentage of users (19%) declare that no regular maintenance is performed, while 11% have it done more than once a year. Furthermore, 18% of survey participants responded that they do not know the frequency of regular scanner maintenance visits by the scanner manufacturer (Fig. 4b).
Participants found most of the suggested parameters important to report in a publication, i.e. 11 of the 16 parameters were selected by 50% or more of participants (Fig. 5a). Feedback on available reconstruction software features indicated substantial room for improvement. While the majority of systems offered DICOM output, HU calibration and gated reconstruction, most systems do not provide geometry, ring, metal or stitching artefact correction (Fig. 5b).

Ultrasound Imaging
Survey results showed that only 13% of users work with preclinical ultrasound devices, with only 63% of these using ultrasound scanners specifically designed for small animal imaging applications. Half of the ultrasound users stated that their scanners underwent annual maintenance, while 44% of the remaining users either did not know whether the scanner underwent regular maintenance or reported that it had none. With respect to routine maintenance checks, over 75% of users routinely checked transducer cables and housing, 69% checked for transducer cracks and discolouration, and 50% routinely checked images for dead transducer elements within the probes. Twenty-five percent of users did not know whether any routine checks were performed on the scanner.

Discussion
This survey clearly showed a lack of awareness regarding standardisation and guidelines in preclinical imaging research. Even though the ARRIVE guidelines have existed for more than 10 years [14], 47% of survey participants were unaware of them. In addition, guidelines on reporting preclinical imaging experiments [17] and small animal imaging quality control [18] have been available for many years (Table 1) but have not reached the right audience. For some modalities, such as optical imaging, other hurdles such as limited access to or availability of phantoms, which are only now emerging [19], further contribute to the lack of standardisation [20]. Additionally, variability in preclinical imaging outcomes due to variations in acquisition protocols may still be observed even when a particular vendor dominates the market (e.g. preclinical MRI [21]).
The ESMI STANDARD group was founded to raise awareness of standardisation in preclinical imaging, and this has resulted in the publication of reviews [20] and scanner-specific standardisation studies involving both phantoms and animals [11,22]. Nevertheless, we have not reached the goal, which is the implementation of standardisation in preclinical imaging to obtain valid and reproducible images and data. Following from the data collected via the STANDARD survey and the discussion at EMIM 2022, it appears that this gap between available guidelines and their execution is related to the following aspects:

• Lack of Communication
The preclinical imaging field is a multidisciplinary field where biologists, chemists, physicists, pharmacists, physicians, veterinarians, bioengineers, computer scientists and many others work together. Depending on the training background of the people in charge of the imaging scanners, some guidelines will not be brought to their attention. For example, a physicist might be aware of the AQARA and QA/QC guidelines but not of the ARRIVE guidelines, especially if coming from clinical imaging. Therefore, the preclinical imaging community would benefit from knowledge exchange between the different disciplines to promote the existing guidelines.

Fig. 5. CT survey participants' responses identify several important experimental descriptors that should be used when reporting CT imaging data in scientific publications (a). Furthermore, results showed that many CT imaging systems are supplied with basic reconstruction software without ring, metal, stitching and geometry artefact correction and without iterative reconstruction software (b).

• Institutional Cost-Benefit
The cost-benefit analysis of standardisation includes the number of person-months (PM) spent on scanner QA/QC, study protocol preparation (including the ARRIVE or PREPARE guidelines [13,23]), image analysis and archiving. Standardisation will not be performed if it is not clear that the benefits outweigh the costs. On the other hand, the benefits of embedding standardisation into preclinical imaging routines are reproducible and valid measurements and results. This would save an enormous amount of time (and money) by allowing researchers to build on valid existing and published findings without duplicating studies.

• Community Standards and Regulatory Requirements
In clinical imaging, the external demand for standardisation comes from regulatory aspects (e.g. radiation protection), funding agencies and publishers, and community-led initiatives (e.g. EARL [24][25][26]).
In preclinical imaging, external demand for standardisation has only recently emerged, driven by some key journal publishers who explicitly require the ARRIVE guidelines to be followed before submission of a scientific paper. Recently, following on from these survey results, expert panels composed of members of the ESMI STANDARD group and the Physics Committee of the European Association of Nuclear Medicine (EANM) have initiated a collaboration to produce joint EANM-ESMI procedure guidelines for implementing an efficient preclinical PET and SPECT QC programme. We expect these guidelines to be published soon and anticipate that they will pave the way for standardisation of preclinical imaging modalities by fostering similar initiatives by expert panels in other preclinical imaging modalities.

• Translational Hiatus
Although preclinical imaging techniques like those covered in this paper (PET, SPECT, CT, US, MRI and optical imaging) are broadly translatable to the clinic, the requirements and recommendations to enhance reproducibility in a given clinical imaging protocol (e.g. MR neuroimaging [27]) might not be directly transferable to the preclinical environment due to additional confounding factors such as animal handling and anaesthesia [28].

Conclusions
Various resources are available to support efforts towards standardisation of preclinical imaging, many of which were developed by members of the STANDARD team almost 10 years ago. Despite the availability of these resources, the recently conducted STANDARD survey shows that standardisation of preclinical imaging techniques, wide implementation and use of QC/QA programmes, and overall understanding of key guidelines in preclinical research (e.g. ARRIVE, FAIR) remain a challenge for the community. Important barriers to delivering standardisation efforts have been identified, and wider dissemination of available tools alongside continued education of the community is needed to fully deliver on the preclinical standardisation promise.

Acknowledgements
The STANDARD group thanks all survey participants for their contribution to this study as well as the preclinical imaging community for their active participation and exciting discussions during EMIM2022.
Author Contribution AAST, CK and MH contributed to the conception of the work. AAST, LM, WM, MB, CV, MA, GDI, FG, CM, GW and MH contributed to the data acquisition, analysis and interpretation of the results. All authors contributed with drafting the work or revising it critically for important intellectual content. All authors have approved the final version to be published and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Data Availability All data generated or analysed during this study are included in this published article (and its supplementary information files).

Declarations
Conflict of Interest FG is owner of Gremse-IT GmbH. The other authors have no conflicts of interest to disclose.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.