1 Clinical Perspective on Radiation Exposure and Radiological Scenarios

Radiological and nuclear incidents in which people are exposed to a health-threatening radiation dose are rare. However, due to the manifold uses of radionuclides and radiation sources in the military, industry and medicine [1], there is a wide variety of realistic scenarios that can lead to a radiation accident [2]. Radiological accidents mostly involve external irradiation, which leads to very heterogeneous and/or localized radiation exposure (a collection of reports on radiological and nuclear accidents is provided by the IAEA) [3]. Victims exposed to higher radiation doses in these accidents (leading to deterministic effects) will sooner or later meet health-care providers, who have to determine to what extent medical treatment is necessary. Additionally, if a large number of people are involved in a radiological incident, fast exclusion (e.g., by triage) of the so-called "worried well" will greatly relieve the clinical personnel and save valuable resources needed for exposed patients. In most cases, only a small fraction of patients will need medical treatment (e.g., 20 of 112,000 monitored people in Goiânia [4]). But this fraction will tie up a substantial amount of medical resources, e.g., antibiotics to compensate for immune suppression, cytokines to support the stunted production of new granulocytes, blood transfusions or even bone marrow transplantation. For the treatment of local radiation burns, skin grafts, injection of stem cells or even amputation have to be considered [5,6,7]. Diagnosing patients fast (< 3 days) and thus providing them with optimal treatment can nearly double the lethal dose (LD50/60) from about 4 Gy to about 8 Gy [8].

For fast screening, clinical signs or symptoms of the acute radiation syndrome (ARS) can be a first means to categorize people into groups with zero, low or high exposure. Here, e.g., the METREPOL manual provides guidance for physicians to correlate symptoms or blood counts and their time delay after the radiation exposure with the respective potential course of hematopoietic ARS [9]. Although this can nowadays be assisted by a number of software tools, such as WinFRAT [10] and the H-Module [11], there are several pitfalls that can cause uncertainties. (1) Scoring strongly depends on the time between exposure and the onset of symptoms. However, there are several scenarios in which the exact time of exposure is not known, e.g., because patients cannot remember or did not notice the exposure (probable in most insidious malevolent acts). (2) Symptoms such as nausea, vomiting or erythema are vague and not specific to radiation. They can easily be induced psychologically, which is likely in a mass casualty event, when many people are involved and believe they have been exposed. (3) The symptoms are connected to the reactions of certain organs to an absorbed dose. Thus, certain aspects of the exposure that nevertheless require treatment might be missed, e.g., if the exposure is locally restricted and does not induce a general symptomatic response.

As a consequence, additional and preferably neutral information about the exposure is often required, at least for verification. In fact, for chronic radiation effects, the relation with whole body dose has been examined in extensive cohort studies, and the International Commission on Radiological Protection (ICRP) has established models for the risk estimation of chronic health effects (e.g., leukemia, thyroid cancer) [12,13,14]. Unfortunately, such models are lacking for acute health effects in response to radiation exposure, including the ARS with its hematological, gastrointestinal, dermatological or neurological syndromes. In fact, recent examinations of the association of dose with ARS suggest that dose alone is of limited value for ARS severity prediction [15, 16]. For instance, a single whole body exposure < 1 Gy roughly corresponds to a mild or absent hematological ARS (H-ARS degree 0–1) not requiring hospitalization, and > 5 Gy corresponds to a severe H-ARS (degree 3–4) urgently requiring hospitalization and intensive treatment. The dose range of 1–5 Gy, however, leads to a variety of H-ARS severity degrees (H-ARS 1–3). This makes defined individual treatment recommendations challenging in this dose range. Here, differences in the characteristics of exposure (e.g., heterogeneity) as well as inter-individual biological differences (e.g., intrinsic radiosensitivity) represent additional variables with strong impact on the clinical outcome [15] (Fig. 1).
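The dose bands described above can be condensed into a simple decision rule. The following Python sketch is purely illustrative (the band limits are taken from the text; the function name and the handling of the ambiguous 1–5 Gy range are our assumptions) and is in no way a substitute for clinical judgment:

```python
def hars_expectation(dose_gy: float) -> str:
    """Map an estimated single whole body dose (Gy) to the H-ARS
    severity bands described in the text. Illustrative only."""
    if dose_gy < 1.0:
        # Mild or absent H-ARS (degree 0-1): no hospitalization expected
        return "H-ARS 0-1: hospitalization not required"
    if dose_gy <= 5.0:
        # Ambiguous band: outcome depends strongly on exposure
        # heterogeneity and individual radiosensitivity
        return "H-ARS 1-3: severity uncertain, further diagnostics needed"
    # Severe H-ARS (degree 3-4): urgent hospitalization
    return "H-ARS 3-4: urgent hospitalization and intensive treatment"

for dose in (0.5, 2.5, 7.0):
    print(f"{dose:.1f} Gy -> {hars_expectation(dose)}")
```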

Fig. 1

Both the physical characteristics of the radiation exposure and the biological peculiarities of the individual determine radiation-induced acute health effects. Many factors associated with radiation exposure and biology contribute to the clinical outcome. Bioindicators of effect might integrate these different factors to provide a final assessment. However, such indicators are still under research and not a validated tool ready for use in the medical management of radiation victims. ARS: acute radiation syndrome

Assessment of a suitable bioindicator, which might also be more intelligible for physicians, would allow an improved clinical prediction of the ARS and its course [17,18,19]. Swartz and colleagues discussed in detail how biomarkers of organ-specific injury could be integrated into an early triage system, taking into account the current capabilities of physical dosimetry, i.e., EPR dosimetry [20]. They also addressed the argument that the patient's biological response, and not the radiation dose received, should be considered for the initial triage [21, 22].

However, such biological indicators are still under research and not fully established. Thus, a single whole body dose of 2 Gy currently serves as a promising criterion to identify those individuals expected to exhibit an ARS and needing medical intervention [23,24,25]. As personal dosimeters recording the accidental exposure will mostly be the exception in a radiological emergency, several methods have been established to estimate the dose received by the individual patient; these are often described as retrospective dosimetry.

Regarding the central questions and challenges for medical management, the optimal retrospective dosimetry method, similar to a personal dosimeter, should be fast, be usable for triage, cover the relevant dose ranges and rely on a precise, reliable, persistent and radiation-specific indicator.

In this manuscript, we present methods for biological and specifically EPR retrospective dosimetry with regard to radiation accident response. Additionally, we discuss how these methods resemble or complement each other in meeting the requirements for supporting medical management from a clinical perspective.

2 Retrospective Dosimetry

The retrospective dosimetry techniques comprise biological (cytogenetic and molecular) techniques as well as physical dosimetry, such as electron paramagnetic resonance (EPR) spectroscopy, luminescence dosimetry (thermo- and optically stimulated luminescence) and neutron activation [26]. The ICRU report 94 [26] provides a comprehensive overview of the biological and physical dosimetry methods, considering their suitability for the early-phase (hours up to 3 days) assessment of individual radiation doses after acute ionizing radiation exposure and their use in the past. The report additionally covers basic aspects of retrospective dosimetry, such as dose quantities and calibration procedures, including a discussion of detection limits. Depending on their characteristics, the various dosimetry methods are suited to different applications, such as initial emergency response (small or large scale), dosimetry for epidemiological studies (population monitoring) or retrospective dosimetry to reconstruct a suspected dose weeks to years after exposure for one or a few individuals.

For completeness, neutron activation is briefly mentioned as the only method specific for neutron exposure, e.g., in criticality accidents such as in Tokaimura 1999 [27] and Sarov 1999 [28]. Here, the production of detectable radionuclides by nuclear reactions of neutrons in human blood, nails and hair can be used to estimate the neutron dose [26].

We will focus especially on biological and EPR dosimetry methods for the more common low-LET dose component of an external radiation exposure to support physicians and health-care providers in the potential treatment of an ARS.

2.1 Biological Dosimetry

The most frequently used cytogenetic biodosimetry techniques in case of a radiation accident are the dicentric chromosome analysis (DCA), the cytokinesis-blocked micronucleus (CBMN) assay and the analysis of reciprocal translocations by FISH (fluorescence in situ hybridization). The PCC method (premature chromosome condensation), especially suitable in the high dose range up to 20 Gy, is established only in a few laboratories and is not yet validated to the same extent as the other cytogenetic methods [29]. The DCA and the CBMN assay have been established as the main biodosimetry methods for an acute ionizing radiation exposure, as they combine high (DCA) or reasonable (CBMN) specificity, a lower detection limit of 0.1 Gy (DCA full mode)/0.5 Gy (DCA triage mode) or 0.3 Gy (CBMN) and persistence of the signal for several months [30,31,32]. PCC (fusion method and ring assay) was established to overcome the 48–70 h culture time essential for DCA and CBMN [33]. The chromosomes are induced to condense prematurely before the first mitosis, which eliminates the culture time and the opportunity for mitotic delay or death to occur. The PCC method can provide dose estimates within hours up to about 20 Gy, identifies inhomogeneous exposures (which otherwise only DCA can do) and has a lower detection limit of 0.2 Gy [29]. The FISH translocation analysis represents the method of choice after external protracted as well as chronic exposures. Cells containing reciprocal translocations (exchange of chromosomal segments between two chromosomes) are viable, and because the aberrations originate from proliferating stem cells, they remain detectable in peripheral blood lymphocytes even decades after exposure. However, FISH translocation analysis is not suitable as an emergency dosimetry method, as a triage mode is not established and the aberration scoring in full mode is too time intensive.

A disadvantage of the metaphase-based cytogenetic approaches is the necessity of proliferating cells. Mitogenic stimulation means that lymphocytes have to be cultured for at least 48 h, which delays the reporting of dose estimates. Especially the medical management of large-scale radiation scenarios is challenging with regard to sample processing and the provision of dose estimates. In such situations, speed of delivering results and sample throughput are more important than the ultimate accuracy of dose estimates. With the aim of increasing the capacity of biological dosimetry, different strategies such as a triage-scoring mode, Web-based scoring especially for DCA [34,35,36], or automation are pursued. One approach is the networking of national or even international laboratories; in recent years, several international networks for biological dosimetry have been established in Europe, Asia, Latin America, Canada and the USA [37]. Among the most established biological dosimetry tools, there are techniques with the capability of a high sample throughput due to a high level of automation (DCA, CBMN, γ-H2AX) [38,39,40]; however, a point-of-care device has not been developed so far. ISO standards for the biological dosimetry methods DCA, CBMN and FISH-based translocation analysis contribute to the high reliability and reproducibility of these methods [30,31,32, 41]. A worked example of how a dose estimate is derived from a dicentric yield is sketched below.
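For illustration, a DCA dose estimate for an acute low-LET exposure is typically obtained by inverting a linear-quadratic calibration curve for the dicentric yield, Y = c + αD + βD² [29]. The following minimal Python sketch uses placeholder calibration coefficients; real coefficients are laboratory- and radiation-quality-specific and are derived from calibrated irradiations, and a full analysis would also propagate the counting uncertainties:

```python
import math

# Placeholder coefficients of the calibration curve Y = C + ALPHA*D + BETA*D^2
# (dicentrics per cell); real values are laboratory-specific.
C, ALPHA, BETA = 0.001, 0.02, 0.06

def dose_from_yield(dicentrics: int, cells_scored: int) -> float:
    """Estimate the acute whole body dose (Gy) by inverting the
    linear-quadratic dicentric calibration curve."""
    y = dicentrics / cells_scored
    # Positive root of BETA*D^2 + ALPHA*D + (C - y) = 0
    disc = ALPHA**2 - 4.0 * BETA * (C - y)
    return (-ALPHA + math.sqrt(disc)) / (2.0 * BETA)

# Example: 120 dicentrics scored in 500 cells -> about 1.8 Gy
print(f"Estimated dose: {dose_from_yield(120, 500):.2f} Gy")
```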

Compared to the cytogenetic methods, the molecular techniques, i.e., the γ-H2AX DNA foci assay and gene expression analysis, are promising triage tools, as they are independent of mitosis and are able to assess absorbed radiation doses about ten times faster than the cytogenetic methods [42]. Thus, in particular, the molecular approaches have the potential to become useful triage tools in mass casualty events due to their speed combined with their high-throughput capacity, potential for automation and capability for point-of-care diagnosis. However, they seem to lack specificity, and confounders have been less examined so far [16, 43, 44], suggesting a combination of different techniques for improved prediction of ARS. Additionally, these methods have to be applied shortly after a radiation exposure (within 24 h for γ-H2AX and within days for gene expression). Nevertheless, their characteristics (early applicability, high throughput and point-of-care diagnosis) are urgently required for a valuable emergency dosimetry method.

Recent progress was achieved by adapting biological dosimetry methods, i.e., DCA, CBMN and γ-H2AX foci analysis, to imaging flow cytometry, combining the high throughput of flow cytometry with the sensitivity of aberration/foci scoring by microscopy [45,46,47]. Biological dosimetry methods have proved very valuable and helpful in several past radiological accidents. The applied methods have to be chosen depending on their applicability in a certain radiation scenario (number of people at risk of exposure, time elapsed since the exposure, assumed temporal dose distribution, level and heterogeneity of dose) and further methodological properties (Table S1). The IAEA manual [29] gives a comprehensive overview of the different cytogenetic techniques and presents several examples of their application in real scenarios. Additionally, a guideline on the use of biodosimetric tools in radiation emergencies has been issued as part of the MULTIBIODOSE project funded by the European Commission [48, 49].

However, despite the ongoing research and progress in the field of biological dosimetry, the gold standard biodosimetry method is still the DCA in peripheral blood lymphocytes.

2.2 EPR Dosimetry

A detailed overview of the application of EPR for retrospective dosimetry is given in the ICRU reports 94 [26] and 68 [50], which are used as the main references for the following introduction to EPR. Electron paramagnetic resonance spectroscopy enables quantification of the concentration of stable radicals in matter. It is based on the resonant behavior of their unpaired electrons in a microwave field combined with a static magnetic field. As radicals are also produced by ionizing radiation, this technique offers the opportunity to estimate the absorbed dose in various materials. The main requirement is the long-term stability of the radicals in the material. Typically, EPR is performed in three ranges of the microwave spectrum: L-band (≈ 1 GHz), X-band (≈ 10 GHz) and Q-band (≈ 34 GHz). The necessary static magnetic field increases correspondingly, from a few tens of mT (L-band) over roughly 300 mT (X-band) to above 1 T (Q-band), for the typical EPR signals used for dosimetry at a g-factor (or dimensionless magnetic moment) of about 2, which raises the requirements on the magnets used. In parallel, the cavity size decreases with increasing microwave frequency from several centimeters down to a few millimeters, which limits the sample size.
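The scaling between microwave frequency and static magnetic field follows directly from the standard continuous-wave EPR resonance condition; as a worked example for the X-band (≈ 9.5 GHz) at g ≈ 2:

```latex
h\nu = g\,\mu_B B_0
\quad\Rightarrow\quad
B_0 = \frac{h\nu}{g\,\mu_B}
    \approx \frac{(6.63\times 10^{-34}\,\mathrm{J\,s})\,(9.5\times 10^{9}\,\mathrm{Hz})}
                 {2.00\,(9.27\times 10^{-24}\,\mathrm{J\,T^{-1}})}
    \approx 0.34\ \mathrm{T}
```

Inserting 1 GHz (L-band) or 34 GHz (Q-band) instead yields roughly 36 mT and 1.2 T, respectively.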

A collection of properties for L-, X- and Q-band EPR is presented in Table 1.

Table 1 Basic information on resonator types for EPR dosimetry and typical properties. Main data from [26, 50,51,52]

The lower frequencies in the L-band cause less heating of water and can be applied to "larger" volumes in the several-cm range, which is potentially good for in vivo measurements. However, the weak spectral resolution of the L-band often cannot discriminate between radiation-induced signals (RIS), mechanically induced signals (MIS) and the background signal (BGS). Here, laborious mathematical procedures often have to be used to extract the pure RIS component (see the sketch below). The X-band, whose use was established first, has become the "gold standard", because its higher frequency results in better signal resolution and, thus, higher radiosensitivity normalized to the sample mass. However, it already causes discomfort when applied in vivo to patients due to the increased heating of water-containing tissue (the absorption resonance of water molecules lies at ≈ 22 GHz). The heat induction could be reduced by using pulsed EPR, which, however, has not yet been applied to living humans and still has the disadvantages of short penetration depth and a strong dependence on the water content of the tissue at higher frequencies [53]. Ex vivo, a minimum sample mass of about 50–200 mg is needed for a sufficient radiation-induced EPR signal, and the sample size is limited to < 1 cm. Use of the Q-band with the highest frequency increases the signal resolution further and thus allows separation of almost all signals in the spectra. In parallel, it allows a further reduction of the required sample mass to a few milligrams while assuring the same signal-to-noise ratio, which makes sample extraction potentially minimally invasive. However, signal reproducibility can be impaired by uncertainties in sample positioning within the EPR cavity, a problem which can be addressed by adding an internal marker. Depending on the sample type, grinding or drilling is often used for sample collection or preparation (minimization of orientation effects for crystalline structures). These procedures induce MIS, which overlap with the RIS in the X- and L-band. Thus, especially for the L-band, in vivo measurements are of great interest, because sample collection and preparation are not necessary.
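Conceptually, the isolation of the RIS can be treated as a linear unmixing problem: if reference shapes of the RIS, MIS and BGS components are known, the measured spectrum can be decomposed by least squares. The following Python sketch uses synthetic Gaussian-derivative line shapes as stand-ins for real reference spectra; it illustrates the principle only and is not a validated deconvolution protocol:

```python
import numpy as np

B = np.linspace(-5.0, 5.0, 400)   # magnetic field axis (arbitrary units)

def deriv_gauss(center: float, width: float) -> np.ndarray:
    """First-derivative Gaussian as a stand-in for an EPR line shape."""
    x = (B - center) / width
    return -x * np.exp(-0.5 * x**2)

# Assumed unit-amplitude reference spectra for the three components
ris = deriv_gauss(0.0, 0.6)       # radiation-induced signal
mis = deriv_gauss(0.4, 1.2)       # mechanically induced signal
bgs = deriv_gauss(-0.3, 2.5)      # background signal

# Synthetic "measured" spectrum: a known mixture plus noise
rng = np.random.default_rng(seed=1)
measured = 3.0 * ris + 1.5 * mis + 0.8 * bgs + rng.normal(0.0, 0.05, B.size)

# Least-squares unmixing: solve measured ~ A @ amplitudes
A = np.column_stack([ris, mis, bgs])
amplitudes, *_ = np.linalg.lstsq(A, measured, rcond=None)
print("Recovered RIS/MIS/BGS amplitudes:", np.round(amplitudes, 2))
```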

EPR is a non-destructive analysis method. Thus, it can be applied several times to the same specimen. This allows the establishment of so-called additive dose calibration curves on the same specimen by exposing it to known doses and measuring the EPR signal after each dose step. The EPR signal change per dose determined in this way allows estimating the unknown dose behind the initial EPR signal. In this way, inter-specimen and inter-individual influences can be fully excluded. Of course, this approach is more time consuming, as the EPR signal often has to stabilize for hours or even days after each irradiation. Due to the additional irradiations, additive calibration is only applicable to ex vivo measurements.
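Numerically, the additive dose method reduces to a straight-line fit S = a·(D_added + D₀) followed by extrapolation to zero signal, so that the unknown initial dose D₀ equals intercept/slope. A minimal Python sketch with invented measurement values, purely for illustration:

```python
import numpy as np

# Added calibration doses (Gy) and measured EPR amplitudes (arbitrary
# units); the 0 Gy point already carries the unknown accident dose.
added_dose = np.array([0.0, 2.0, 4.0, 8.0])
signal = np.array([1.9, 3.2, 4.4, 6.9])

# Linear fit: S = slope * D_added + intercept
slope, intercept = np.polyfit(added_dose, signal, 1)

# Extrapolation to S = 0 gives the initial dose D0 = intercept / slope
d0 = intercept / slope
print(f"Estimated initial dose: {d0:.1f} Gy")   # ~3.1 Gy for these values
```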

In the 1960s, the potential of EPR for individual dose assessment was recognized [54]. Since then, many materials have been found to be suitable for dose estimation, e.g., mineral glass or sugars. In this work, we focus on the most promising human-derived specimens, which are teeth (especially enamel), bone and nails. In a radiological incident scenario, this enables estimation of the local dose to different body regions of a patient. Thus, dose heterogeneity can be judged, which can affect the medical treatment: e.g., how strongly was the gastrointestinal tract irradiated, how far does a body extremity have to be amputated, or can a skin graft be successful?

However, these specimen materials have several complementary advantages and drawbacks.

2.2.1 Ex vivo EPR

2.2.1.1 Tooth

Teeth have the strong advantage that they consist of about 97% hydroxyapatite and contain nearly no soft tissue, which would otherwise strongly alter the EPR signal due to its water content. Radicals (especially CO2−) produced in this nearly ideal solid-state detector are stable for about 10⁷ years. Additionally, enamel is not rebuilt over time, because tooth formation is completed before adolescence. Thus, radicals measured in teeth are a good estimate of lifetime exposure. With a minimal detectable dose (MDD) of about 30–100 mGy for ex vivo X-band application, tooth enamel is the most radiosensitive human-derived sample type; it has already been used in several inter-laboratory comparisons [55, 56] and has protocols standardized in ISO 13304-1:2020 [57] and ISO 13304-2:2020 [58]. Using the Q-band should result in a similar radiosensitivity with a smaller enamel mass (a few mg, corresponding to a size of about 3 mm and a smaller intervention than a typical cavity preparation for fillings) [52, 59, 60]. Although Q-band EPR offers improved sensitivity (20 times higher) and signal resolution compared to the X-band, the method shows some practical limitations regarding equipment costs and high sensitivity to sample positioning (requiring longer training than the X-band), as shown, e.g., in fossil teeth [61]. The high content of hydroxyapatite in teeth also reduces inter-individual signal variances and additionally allows the application of general calibration curves for dose estimation, which is significantly faster than the additive calibration method. This applies for high-energy photons, but for low-energy photons additive dose methods are recommended. The use of teeth from different locations enables an indication of the direction from which the radiation came. The time for the actual EPR measurement is below 15 min [52].

One drawback of tooth enamel is the need for dental intervention to recover it, and its isolation requires further sample preparation steps, which can induce MIS (drilling/sawing without water cooling). However, these MIS can be counteracted, e.g., by an additional acid etching step or by low-speed drilling under water cooling [51]. For epidemiological studies on A-bomb survivors [62, 63] or Chernobyl victims [64], patients accepted the collection of teeth for dose determination that had to be extracted in the course of dental treatment anyway. This collection took many years. However, extraction of teeth will not be appropriate for screening or triage in a large- or even small-scale scenario, in which most people will not have been exposed to a health-relevant radiation dose. If the intervention is smaller, e.g., a biopsy of just a few milligrams of enamel using the Q-band, acceptance could be higher for dose screenings. Additionally, for heterogeneous exposure, the dose to tooth enamel will only allow a limited estimate of the whole body dose or the dose to critical organs due to the isolated location of teeth. A possible confounder for tooth dosimetry is that enamel suffers from a background signal produced by UV exposure; thus, molars are the preferred teeth for EPR. Nevertheless, dental treatment also has to be considered as a source of UV light leading to additional background signal, as does X-ray exposure from medical imaging. Typically, 48 h are needed until the RIS has stabilized. For emergency cases, this time can be shortened if RIS formation at earlier time points is sufficiently well characterized.
In real radiological accidents, ex vivo X-band EPR has only rarely been applied, e.g., after the Nueva Aldea/Chile accident in 2005 [65], due to the invasiveness of tooth extraction from living patients. In the mentioned case, the teeth were extracted for medical reasons. Ex vivo Q-band EPR on small samples of tooth enamel, originally proposed by Romanyukha et al. [66], has already provided important information about the circumstances of exposure (direction of radiation, local dose and heterogeneity of body dose) in two radiological accidents, i.e., the Stamboliyski/Bulgaria accident in 2011 [67] and the Chilca/Peru accident in 2012 [68].

Apart from its use for retrospective dosimetry in acute radiation scenarios, EPR dose estimates are collected and included in a wide range of radiological studies. These studies include the nuclear bomb detonations in Hiroshima and Nagasaki [62,63,64], nuclear power plant accidents, e.g., in Chernobyl [69] or Fukushima [70, 71], radioactive pollution caused by the Mayak plutonium facility [72] as well as the exposure of Mayak workers [73, 74]. Additionally, EPR data have been collected after the incorporation of radionuclides leading to an internal dose component, which can also be detected, e.g., in tooth enamel. These studies examined the intake of 239,240Pu, 137Cs and 89,90Sr by the populations living around the Semipalatinsk test site [75,76,77] as well as the intake of 89,90Sr and 137Cs by the population living near the Techa River [78, 79]. However, EPR was shown to be most applicable when radionuclides are distributed homogeneously in the body [80].

2.2.1.2 Bone

Of course, teeth are only located in the head region; thus, statements about dose exposures of more distant body parts are limited. Here, the analysis of bone biopsies can provide further information, because biopsies can be taken at the region of interest, especially if just a few milligrams are needed. On the other hand, biopsies are a rather major intervention and appropriate only if there is strong evidence for radiation exposure and a critical need for dose information regarding further medical treatment. Due to the lower content (50% by volume) and more variable crystalline structure of hydroxyapatite in bone, the RIS for EPR in bone is less sensitive (estimated minimal detectable dose of 5 Gy for the X-band). Additionally, bone has a larger content of organic tissue, which implies a laborious sample preparation step for its removal before application of EPR. For Q-band analysis, the preparation can be simplified, as signals from organic tissue can be separated from the RIS. The variance of bone composition and density leads to additional uncertainties in the recalculation from dose absorbed in bone to dose in surrounding tissue, which also changes with radiation quality; this has to be considered carefully. In contrast to teeth, bone is constantly rebuilt. Thus, the concentration of radicals decreases with time after an exposure to ionizing radiation. Two effects contribute as a consequence of living bone: radical recombination, which happens within hours, and bone remodeling, on a time scale of months. For immediate emergency management, in which a fast dose assessment is the goal, the effect of bone remodeling should be negligible, but it can become a problem if more time has elapsed since the exposure. However, it has to be considered that bone remodeling will be altered at the high local doses for which bone EPR analysis has typically been applied [7, 67, 68, 81].

As with tooth enamel, ex vivo EPR on bone specimens has been applied mainly in small-scale accidents characterized by heterogeneous whole body and/or localized exposures. In these cases, the dose estimates have been very helpful for medical treatment planning; in one case they confirmed that a further amputation of a finger was not indicated [7, 67, 68, 81]. Although ex vivo bone EPR has been used widely and for a long time, many questions remain open concerning the behavior of bone material upon irradiation, and the standardization of protocols and dose assessment for EPR bone dosimetry has not been addressed so far.

2.2.1.3 Nail

Regarding accessibility and availability, finger- and toenails are promising specimens for early retrospective dosimetry and triage by EPR spectroscopy, but methodological aspects (e.g., inter-individual variance or sensitivity) challenge this dosimetric approach. Due to their location, nails should provide the best information about dose heterogeneity. Additionally, they are easy to access for clipping and for potential in vivo measurement. Their composition is similar to soft tissue, and thus the absorbed dose in nails is similar to the absorbed dose in the surrounding tissue. However, in contrast to the great opportunity nails provide especially for fast triage [82], the RIS generated in the keratin component is much less stable and less sensitive. The water content of nails plays a particularly critical role: humidity during storage of nail clippings, as well as handwashing, leads to a significant decrease of the signal [83]. In addition to the normal regrowth of nails, signal fading leads to a further loss of signal; thus, measurements on nails should be performed shortly after irradiation (see the sketch below). As expected, UV exposure also generates interfering signals [84]. Although dose estimates down to 1 Gy have been presented [85, 86], so far ex vivo nail clippings seem to deliver suitable dose assessments only above 10 Gy if the BGS is unknown and the time after irradiation is not limited.
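To illustrate why speed matters here, signal fading is sometimes approximated by a single-exponential decay, which allows a measured signal to be back-corrected to the time of exposure. The following Python sketch is purely illustrative: the single-exponential model and the 24 h half-life are assumptions on our part, not validated parameters, and real fading kinetics in nails are more complex and depend on humidity and handling:

```python
import math

def correct_for_fading(signal_measured: float, hours_elapsed: float,
                       fading_half_life_h: float = 24.0) -> float:
    """Back-correct a measured nail EPR signal to t = 0, assuming
    single-exponential fading with an illustrative half-life."""
    decay = math.exp(-math.log(2.0) * hours_elapsed / fading_half_life_h)
    return signal_measured / decay

# Example: a signal of 10 a.u. measured 12 h after exposure
print(f"Corrected signal: {correct_for_fading(10.0, 12.0):.1f} a.u.")
```

The back-correction factor grows exponentially with elapsed time, so any uncertainty in the fading parameters quickly dominates the estimate; early measurement is therefore preferable to correction.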

Ex vivo nail EPR dosimetry has also been used in some small-scale scenarios with relatively high doses (> 10 Gy) [68, 87, 88]. In the Chilca/Peru accident [68], EPR on fingernails provided important information on the exposure scenario, such as the dose distribution over the body and the localized high exposure of the hands, and directed the medical treatment planning [67]. Romanyukha and colleagues compared X- and Q-band EPR on fingernail samples collected 2 months after irradiation. They obtained similar results, which were about 50% lower than the clinically based estimate, probably for methodological reasons [88]. Moreover, ex vivo nail EPR dosimetry performed at the Naval Dosimetry Center has demonstrated the ability to assess the radiation dose using a small, portable EPR spectrometer, which significantly reduces the time needed for dose estimation to the range of minutes [88]. In the future, this will be an advantage for the establishment of a suitable point-of-care application of ex vivo nail dosimetry to support the early-phase diagnosis of radiation victims on-site. However, the unsolved methodological challenges mentioned above have to be overcome before a meaningful nail dosimetry can be established.

2.2.2 In vivo EPR

The possibility of a direct in vivo approach is a unique selling point of EPR dosimetry, but it is not yet established and entails certain performance losses regarding the minimal detectable dose. In return, an in vivo approach can avoid the uncertainties associated with sample collection and preparation, e.g., MIS or humidity effects in nail clippings. However, the establishment of calibration curves is challenging, because significant doses are needed at the point of interest, and thus only total body irradiated cancer patients, who are rare, can serve as reference subjects. Phantoms, which have been used, e.g., to investigate influences on measurements in in vivo setups [89, 90], are no alternative yet, because they are not standardized and validated. Nevertheless, some work on in vivo EPR measurement already exists.

2.2.2.1 Tooth

In vivo tooth EPR in the X- or L-band has been tested by only a few laboratories and shows several drawbacks, including a relatively high minimal detectable dose of about 2 Gy [91, 92]. Zdravkova and colleagues [93] compared L-band in vivo and in vitro EPR on rat teeth irradiated with 50 Gy of X-rays. They found no significant difference between the in vivo and in vitro L-band spectra, but observed a tendency toward a higher signal in the in vitro measurements [93]. The in vivo tooth EPR approach has not been used in real accidents so far. However, in vivo L-band measurements on volunteers in the Fukushima prefecture [70] showed that, even though no case with a detectable dose was found, a portable EPR spectrometer can be used on-site for possible triage after a radiological event.

2.2.2.2 Bone

Zdravkova and colleagues compared the in vivo approach of bone EPR dosimetry on human and baboon fingers to human dry phalanges and determined a lower detection limit of 60 Gy, with the assessment that a detection limit of 40 Gy could be achievable with further development [87]. The threshold dose for osteoradionecrosis after fractionated exposure over weeks in radiotherapy patients has been found to be about 60 Gy [94], but is lower (about 40 Gy) after an acute high radiation dose [26]. However, this approach requires further knowledge regarding the response of bone to irradiation and the influences on the radiation-induced EPR signal, as well as technical development, before it can serve as a reliable dosimetry tool.

2.2.2.3 Nail

The in vivo approach of nail EPR dosimetry has also evolved into a promising candidate as an initial-phase triage tool in a large-scale event. With the development of a first-generation resonator for in vivo nail dosimetry, important progress has been made toward applying the technique under realistic conditions on-site, and a multimethod concept for implementing in vivo nail EPR into future medical management is being discussed [20, 95]. Swartz and colleagues thereby assume an improved dose resolution of 1 Gy that could be achieved in the future with in vivo nail dosimetry [20]. Their concept hypothetically describes how in vivo nail EPR could be used to assess the homogeneity or heterogeneity of the dose after exposure of an individual and, in the case of dose heterogeneity, how the dose distribution could be assessed by this approach. They conclude that in vivo EPR has to be considered complementary to the use of biological and clinical dosimetry in the most effective response to a large-scale radiation scenario [20]. However, in vivo EPR has not been applied in real accidents so far, and continued research, validation and standardization of this approach are recommended.

2.3 Comparison of Biological and EPR Dosimetry

For better comparison, Tables 2 and 3 summarize the basic information on the described retrospective dosimetry methods. A more detailed table (especially for the biological dosimetry methods) can be found in the Supplements (Table S1).

Table 2 Basic information for biological dosimetry
Table 3 Basic information for EPR dosimetry

In principle, both retrospective dosimetry approaches can provide dose information in the critical range of 0.1–10 Gy and can thus support the categorization of patients [29, 55, 56, 91]. The accuracy of most biological methods is better in the lower dose range (< 5 Gy), because most biological indicators saturate at higher doses. In contrast, the accuracy of EPR dosimetry is better at higher doses (> 5 Gy), because the RIS becomes more dominant and is linear up to 100 Gy or even 1000 Gy [26].

In the early phase of a radiological accident, time is a crucial variable for medical management decision making, and it varies strongly between the different techniques, because sample collection, preparation and measurement times differ. Sample collection in particular represents a great advantage of biological dosimetry, because taking blood is a common, standard procedure in medical checkups. Additionally, the preparation of blood is highly standardized for most of the analyzed markers (isolation of lymphocytes or RNA) as well as for many assays in biological dosimetry, e.g., DCA and CBMN. These standardized processes often also offer the possibility of automation or parallelization/multiplexing, meaning that many samples can be processed and analyzed at once, thus rendering them suitable for high-throughput diagnosis in the early phase of an accident [39, 96]. In particular, gene expression analysis is a promising screening tool for initial triage dosimetry due to its speed, high sample throughput and capability for point-of-care diagnostics [43, 97, 98].

In the future, the establishment of standard protocols has to be a main goal for EPR dosimetry, because differences in sample collection or preparation can already lead to differences in dose estimates. For tooth enamel, this has been achieved for X-band EPR owing to its broad application in epidemiological studies [63, 64, 69, 72,73,74]. Here, the extraordinary persistence of the RIS (10⁷ a) in enamel makes teeth nearly ideal passive dose detectors.

In vivo EPR omits sample preparation and allows a direct assessment of the absorbed dose within a measurement time of about 10 min. In combination with a portable spectrometer, which has already been applied for in vivo tooth EPR in Fukushima some years after the accident, this technique would represent a powerful triage tool for point-of-care diagnostics. However, a minimal detectable dose of 2 Gy [70] would not be sufficient for the identification of potential ARS patients requiring early treatment and hospitalization.

Ex vivo nail EPR could also add triage capacity, because once nail clipping is standardized, it should be easy and fast to perform. The low stability and persistence of the RIS may be less problematic if measurements are performed directly after the accident. Of course, in vivo EPR on nails would also be an extraordinary screening tool. However, solid data on its performance (minimal detectable dose) in humans and in radiological accidents still have to be shown.

Finally, EPR, independent of the specimen and of in vivo/ex vivo application, would need to reach a minimal detectable dose of less than 0.5 Gy to be suitable for triage at 2 Gy (assuming a necessary limit of quantitation of about 2 Gy). So far, this has only been achieved by ex vivo EPR on enamel.
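The link between the 0.5 Gy minimal detectable dose and the 2 Gy limit of quantitation can be rationalized with one common analytical convention (an assumption on our part, not necessarily the definition used in the cited standards), in which the detection and quantitation limits are defined via the blank standard deviation σ and the calibration slope m:

```latex
D_{\mathrm{LOD}} = \frac{3\sigma}{m}, \qquad
D_{\mathrm{LOQ}} = \frac{10\sigma}{m} \approx 3.3\,D_{\mathrm{LOD}},
\qquad\text{so}\qquad
D_{\mathrm{LOD}} \le 0.5\ \mathrm{Gy}
\;\Rightarrow\;
D_{\mathrm{LOQ}} \lesssim 1.7\ \mathrm{Gy} < 2\ \mathrm{Gy}.
```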

2.4 Possible Contribution of Dosimetric Methods to Medical Management of Radiation Accidents

In the past, both biological and physical dosimetry have been applied in parallel or consecutively within the medical management support of the same radiation-exposed victims [99, 100]. It is thereby important to consider the site of the absorbed dose measured by the different methods. Whereas biological dosimetry measures the dose to circulating blood lymphocytes, and thus provides the absorbed equivalent whole body dose, EPR determines the local dose absorbed by the collected biologically derived specimen or by physical sample material located near the victim at the time of exposure. This implies that differing dose estimates have to be expected when heterogeneously exposed individuals are assessed by these methods. As described in the literature, however, such complementary dose information has been an advantage and very supportive for medical treatment planning in several past accidents.

In addition, the different challenges and requirements of large-scale and small-scale events must be taken into account. For a small number of subjects, the available dosimetry capacity suffices to assess the individual doses as accurately as possible by applying complementary methods, and the health-care system will be able to treat all victims. In large-scale radiological events, a critical component of the public health and medical response will be the rapid and effective screening of large populations, under probably difficult circumstances, to separate the exposed from the non-exposed. Here, the different dosimetric methodologies should be used in parallel to enhance the screening capacity. These dosimetry techniques need to provide a minimal detectable dose of ≤ 0.5 Gy to support this triage decision process (assuming a necessary limit of quantitation of about 2 Gy). So far, the biological dosimetry methods and ex vivo EPR on enamel fulfill this criterion.

3 Complementary Application of Biological Dosimetry and EPR in Radiation Accidents

The medical management of the Nesvizh/Belarus accident in 1991 [101] is one example showing the great importance of dose assessment for the physician in the case of whole body exposure. In a sterilization facility, a worker was accidentally exposed for 90 s to a 28.1 PBq 60Co source. Blood cell counts and EPR dosimetry (on clothing material and, post-mortem, on tooth and nail samples) were carried out. DCA was planned, but cell growth failed, which was consistent with the high dose first estimated from the reading of a thermoluminescent dosimeter (TLD) at the victim's assumed position (12–15 Gy) and from blood cell counts (9–11 Gy). EPR on material from the victim's clothing at waist level revealed doses ranging from 11 to 18 Gy (± 20%), with the left side being more affected. Post-mortem, ex vivo tooth EPR confirmed such a high dose of about 15 Gy in the head region. Unfortunately, in this high dose range, no treatment would have been able to save the life of the victim. However, in a lower dose range (< 5 to 8 Gy), where intense medical treatment enables survival and DCA can be performed successfully to estimate the dose heterogeneity, medical treatment could have been started in the first 3 days after exposure.

In a second example, the Chilca/Peru accident in 2012 [68], three workers suffered a whole body exposure combined with a higher localized exposure. They were accidentally exposed to a 192Ir source; however, the exact timeline and duration of the overexposure could not be reconstructed precisely. The most exposed worker is an impressive example of how the combination of biological and physical dosimetry together with the assessment of clinical signs had an impact on the medical treatment of a radiation victim. His OSL dosimeter revealed a high radiation dose of about 7 Gy, and an erythema evolved after 3 days on the index finger of the left hand. About 11 days later, biological dosimetry confirmed a heterogeneous exposure of 75% of the whole body to about 2.5–3.5 Gy. Ex vivo nail and bone EPR estimated a very heterogeneous high local dose to the left hand of about 25 Gy (nails) and up to 73 Gy (bone), depending on the site of the collected bone biopsy. Findings by biological dosimetry and EPR on mini-biopsies of enamel, as well as experimentally on fingernails, indicated a worse prognosis and led to an urgent transfer of the worker to a hospital for specialized medical treatment [67]. Moreover, the complementary dosimetry approaches helped to elucidate the circumstances of the scenario, e.g., the volume of the body exposed to a certain dose level (DCA) and the orientation of the worker within the radiation field (bone EPR on mini-biopsies). Such characteristics are of great importance if a patient is at risk of developing H-ARS. Additionally, by estimating the dose level and distribution on the phalanx of the most exposed worker, EPR could confirm that no further amputation was necessary [67].

Besides the question of amputation, the complementary use of EPR can also support the decision on bone marrow transplantation. Here, it is extremely important to estimate whether part of the bone marrow has been spared or exposed only to lower doses and exhibits residual hematopoiesis allowing hematopoietic recovery. In such cases, a bone marrow transplantation is not indicated due to the risk of graft-versus-host disease, stem cell failure and organ damage, possibly leading to the death of the patient. This lesson was learned from the Chernobyl accident and represents a central aspect of the European consensus on the medical management of ARS [102].

It is worth mentioning that the DCA represents the only highly validated biological dosimetry tool allowing assessment of whether a presumed whole body exposure was homogeneous or not [29]. Nevertheless, the assessment of dose heterogeneity by EPR would probably still be helpful for further treatment planning.

All in all, in small-scale scenarios a multi-parametric approach is urgently recommended to exhaust the available options for radiation injury assessment, as this strategy has been applied successfully in past scenarios [68, 103]. These options are biological and physical dosimetry methods, physical/mathematical dose reconstruction and clinical evaluation (dose reconstruction and effect prediction based on clinical signs and symptoms), which together provide a comprehensive overview of the circumstances of the overexposure. In the case of only a few affected individuals, a close collaboration between the retrospective dosimetry laboratories and the treating physicians is more feasible than can be assumed for mass scenarios.

The complex management of a radiation mass casualty scenario requires emergency planning and preparedness to appropriately triage and treat the identified exposed population. A critical component of the medical response to such an event will be the on-site mass screening of large populations to separate the exposed (requiring intensive and early clinical support) from the non-exposed (avoiding the absorption of limited clinical resources) and the communication of the dosimetric results to the physicians [97, 104]. The European dosimetry network RENEB (Running the European Network of biological dosimetry and physical retrospective dosimetry) [105], for example, will be connected to the global emergency and preparedness system. The foundations for this have already been laid by integrating members of the WHO's biodosimetry network (BioDoseNet) [106] and the IAEA's RANET [107]. However, every single step of the procedure, from sample collection through dose assessment to the transmission of results, still has to be set up for a large-scale event in which hundreds or thousands of people need dose assessment.

4 Conclusion

Until a reliable bioindicator of effect regarding the risk of developing ARS is available for routine use, dose information, including dose heterogeneity and dose distribution, is crucial for medical treatment planning. Furthermore, local dose assessment might be indispensable to guide medical treatment in scenarios with high local irradiation. The available retrospective dosimetry methods are differently suited to different exposure scenarios, and the most informative method or combination of methods with regard to a specific case and its circumstances should be selected for diagnosis. Finally, the application of several complementary methods is strongly recommended for medical treatment and decision-making support, because most scenarios are of a rather complex nature and each case has its own characteristics. Ideally, this is supported by a strong network of laboratories, which share the workload, enable fast dose reporting and ensure a high quality standard through standardization and inter-laboratory comparisons.