Importance of medical exposures

Modern medicine offers a variety of diagnostic methods and tools, including imaging techniques such as X-ray radiography, computed tomography (CT), positron emission tomography (PET) and others, in which patients are exposed to ionizing radiation. In many countries, for example, the use of CT scans has increased continuously, and CT today represents an indispensable tool in X-ray diagnostics (UNSCEAR 2010). As a result, medical exposures are by far the largest contributor to the population dose from man-made sources of ionizing radiation, in particular in developed countries with health-care level I, even when averaged over the whole population of a country (UNSCEAR 2010).

Recent studies suggest cumulative effective doses of more than 100 mSv from CT scans

Recently, a series of publications on patients undergoing recurrent computed tomography (CT) scans has highlighted that a large number of patients fall into a relatively high dose group, with cumulative effective doses exceeding 100 mSv (Rehani et al. 2019a, b; Brambilla et al. 2019). The authors used data from 342 hospitals located in the USA and Central Europe, and focused on patients whose cumulative effective dose from CT scans alone exceeded 100 mSv. They argued that, although effective dose is not a perfect means to quantify partial-body exposures, effective doses of more than 100 mSv make it rather likely that various organs received absorbed doses of more than 100 mGy. Altogether, the study includes more than 2.5 million patients and almost 5 million CT scans. Notably, more than 1% of patients undergoing CT examinations received cumulative effective doses of more than 100 mSv, and in some patients the minimum time period needed to accumulate 100 mSv from CT scans was a single day. In terms of median organ doses, data from one institution (8952 patients) suggest values of 174, 119, 34 and 42 mGy for lungs, red bone marrow, eye lens and breast, respectively. The corresponding maximum organ doses were 2.5, 0.7, 5.9 and 2.8 Gy, respectively.
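To make the notion of a cumulative dose concrete, the following minimal sketch illustrates how effective doses from repeated CT scans can be summed per patient and compared against the 100 mSv level discussed above. The patient identifiers and per-scan dose values are entirely hypothetical, chosen only for illustration; they are not data from the cited studies.

```python
# Minimal sketch (hypothetical data): flag patients whose cumulative
# effective dose from repeated CT scans exceeds 100 mSv, in the spirit
# of the analyses by Rehani et al. (2019a, b).
from collections import defaultdict

# Hypothetical per-scan effective doses in mSv, keyed by patient ID;
# typical single CT examinations are on the order of 1-20 mSv.
scans = [
    ("patient_A", 8.0), ("patient_A", 12.5), ("patient_A", 15.0),
    ("patient_B", 4.0),
    ("patient_C", 60.0), ("patient_C", 55.0),  # e.g. repeated perfusion studies
]

cumulative = defaultdict(float)
for patient_id, dose_msv in scans:
    cumulative[patient_id] += dose_msv

high_dose_group = {p: d for p, d in cumulative.items() if d > 100.0}
print(high_dose_group)  # {'patient_C': 115.0}
```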

Organ doses of more than 100 mGy do matter—evidence from the atomic bomb survivor studies

A recent series of publications on solid cancer incidence among the atomic bomb survivors, published by the Radiation Effects Research Foundation, demonstrates statistically significant excess relative risks (ERR) for a number of cancer end points at organ-absorbed doses of about 100 mGy. Grant et al. investigated the incidence of all solid cancers combined among a cohort of more than 105,000 survivors, with a follow-up period from 1958 to 2009, and found a sex-averaged ERR per Gy of 0.47 [95% confidence interval (CI): 0.39–0.55] when a linear dose response model was applied (Grant et al. 2017). A closer look revealed that the lowest dose range with a statistically significant dose response was that below 100 mGy, when absorbed doses to the colon were used. At higher doses, the dose response showed a linear increase in ERR with dose for females, while for males the dose response was linear-quadratic. These overall results were confirmed recently by Cologne et al. (2019), although these authors found that combining major solid cancer sites is not optimal for analysis, due to heterogeneity in the corresponding individual baseline risks; they recommended analysing single cancer sites where statistically feasible.

Along these lines, Cahoon et al. (2017) analysed the same cohort as Grant et al., but focused on lung cancer incidence as an end point. These authors found a significant and linear dose response when they plotted the ERR for lung cancer incidence against weighted lung dose, down to a dose interval from 100 to 200 mGy. Breast cancer incidence risk in the cohort was followed up by Brenner et al. (2018): no significant departure from linearity was observed over the full range of weighted breast doses, and the estimates of ERR per unit dose remained stable down to about 250 mGy. When uterine cancer incidence was investigated in this cohort, no radiation-induced effect on cervical cancer incidence was found, while there was a moderate indication of a linear dose response for the incidence of uterine corpus cancer (Utada et al. 2019). Sugiyama et al. (2019) investigated colorectal cancer incidence among the atomic bomb survivors: while these authors could not find any radiation-induced effect on rectal cancer incidence, they did find a significant dose response for colon cancer incidence down to a few hundred mGy of absorbed dose to the colon. Finally, a follow-up on cancer incidence in the liver, biliary tract and pancreas was carried out by Sadakane et al. (2019). A significantly elevated ERR per unit dose was found only for liver cancer, again with a largely linear dose response down to a few hundred mGy of absorbed dose to the liver.
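For orientation, the two dose response forms referred to above can be written out schematically as follows. This is a simplified presentation: the cited analyses include additional effect modifiers (e.g. for age at exposure, attained age and sex) that are omitted here.

```latex
% Linear and linear-quadratic ERR dose-response models (schematic form),
% as used in the atomic bomb survivor analyses cited above.
\begin{align}
  \mathrm{ERR}(D) &= \beta D                  && \text{(linear)} \\
  \mathrm{ERR}(D) &= \beta D + \gamma D^{2}   && \text{(linear-quadratic)}
\end{align}
% With the sex-averaged estimate \beta = 0.47\,\mathrm{Gy}^{-1} of
% Grant et al. (2017), an organ dose of D = 0.1\,\mathrm{Gy} (100 mGy)
% corresponds to ERR = 0.47 \times 0.1 \approx 0.047, i.e. a roughly
% 5\% increase in solid cancer incidence relative to baseline.
```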

All these studies suggest that organ doses of low-LET radiation greater than 100 mGy, such as those reported by Rehani et al. (2019a) among patients undergoing recurrent CT scans, do indeed matter in terms of radiation-induced cancer risk.

Are studies on atomic bomb survivors relevant in the present context?

One might argue that the exposure of the atomic bomb survivors is not really relevant here, because the cumulative exposure from CT scans is due to low-energy X-rays and might be distributed over several single exposures, while the atomic bomb survivors were exposed to high-energy gamma radiation (plus some contribution from neutrons) within a single, very short period. Indeed, the mean energy of the gamma radiation present at the bombings of Hiroshima and Nagasaki was of the order of several MeV (Rühm et al. 2018a), which is much higher than the typical energies used in diagnostic X-ray examinations. A recent report of the National Council on Radiation Protection and Measurements (NCRP) reviewed the biological effectiveness of low-energy photons for evaluating human cancer risk and found that the radiation used in X-ray diagnostics might be a factor of about 1.5 more effective than the 1.25 MeV gamma radiation emitted in the decay of 60Co (NCRP 2017). Furthermore, recent meta-analyses of epidemiological studies, in which cancer risks among cohorts exposed at low dose rates (including nuclear workers, populations living in contaminated regions such as the Techa River cohort, etc.) were compared to the cancer risks observed among the atomic bomb survivors, did not find much difference in the risk estimates (Shore et al. 2017; Hoel 2018). These findings suggest that the data described above on cancer risk among the atomic bomb survivors are indeed relevant and can be used to estimate the cancer risk from organ doses of several hundred mGy, as observed by Rehani et al. among patients who received several CT examinations. It should be emphasized, however, that the risks deduced from the atomic bomb survivors might somewhat underestimate the risk from CT examinations, due to the higher relative biological effectiveness of diagnostic X-rays compared to the higher-energy gamma radiation typical of the exposure of the atomic bomb survivors.
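As a purely illustrative scaling, and not an estimate made in any of the cited studies, the NCRP factor can be combined with the risk coefficient of Grant et al. (2017) quoted above:

```latex
% Illustrative scaling only: if diagnostic X-rays are a factor of about
% 1.5 more biologically effective than 60Co gamma radiation (NCRP 2017),
% a risk coefficient derived from the atomic bomb survivors would scale
% roughly as
\begin{equation}
  \beta_{X\text{-ray}} \;\approx\; 1.5 \times \beta_{\gamma}
  \;\approx\; 1.5 \times 0.47\,\mathrm{Gy}^{-1}
  \;\approx\; 0.7\,\mathrm{Gy}^{-1}.
\end{equation}
```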

Combined doses from radiotherapy and associated imaging

In many countries, a considerable fraction of the population will face a cancer diagnosis at some time in life, and radiotherapy represents one of the major methods of treatment; approximately half of all cancer patients will receive radiotherapy at some point in their illness. There is therefore a large global population of patients who are exposed to high target doses (mainly using photon beams, but increasingly with protons) in a controlled and well-documented way. The basic physics of radiation interactions makes it unavoidable that lower doses are delivered to healthy organs and tissues in other parts of the body. These out-of-field doses are influenced by numerous factors, including the type of treatment machine, the energy of the radiation, the planned spatial distribution of radiation within the body, and the size and shape of the patient. In all cases, out-of-field doses can range from tens of gray near the treatment field down to milligray levels far from it. All parts of the dose–risk curve for subsequent cancer induction are therefore involved: the low-dose region, where non-linear mechanisms have been postulated (e.g. bystander effects); the region defined largely by the Japanese lifespan study (Ozasa et al. 2012; Grant et al. 2017); and the further non-linear region at high doses, where cell kill and re-population effects are known to occur (Hall and Henry 2004; Schneider and Walsh 2008).

Although the target doses in radiotherapy are accurately known and delivered, modern radiotherapy techniques (such as intensity-modulated radiotherapy (IMRT), image-guided radiotherapy (IGRT), volumetric-modulated arc therapy (VMAT) and Tomotherapy™) often involve substantial associated imaging for tumour localization, treatment planning and verification of the treatment fields during the course of radiotherapy. In fact, the data from Rehani et al. (2019b) indicate that nearly 90% of the patients in their single-institution study had been diagnosed with a malignancy. A substantial fraction of this group is likely to receive radiation therapy, in which the associated imaging dose will augment the non-target dose from the radiotherapy itself.

Doses to organs and tissues outside the planning target volume arise from scatter of the radiotherapy beams within the patient (for X-rays generated in the range 4–10 MV), leakage radiation from the X-ray target, and scatter from the collimators. In general, the doses and risks to healthy organs and tissues have long been considered acceptable in view of the benefits of radiotherapy, provided that doses to critical organs are minimized at the planning stage. Magnitudes vary, but the risk of fatal secondary malignancy from radical prostate treatment using X-rays has been estimated at approximately 2–5% (Kry et al. 2005). However, with the introduction of imaging equipment as an integral part of the linear accelerator (onboard imaging using kV X-rays or the treatment beam itself) (Murphy et al. 2007), and with concomitant imaging using diagnostic CT, the out-of-field doses from the treatment beams are augmented by the doses from these imaging investigations. As an approximate guide, Shah et al. (2012) report doses of 5–30 mGy per image for onboard kV imaging (depending on the anatomical site) and < 10 mGy per image for MVCT imaging (Tomotherapy™).

As an example, taking out-of-field data for a simulated treatment of a paediatric brain tumour (Majer et al. 2017) and assuming that 20 images are taken during the total treatment (giving a total imaging dose of ~ 200 mGy), the percentage of the total dose due to imaging is < 1% at the field edge, ~ 50% at 12 cm from the field edge, and ~ 80% at 27 cm from the field edge (assuming, of course, that the imaging field extends to these distances). The imaging dose assumed here is of the same order of magnitude as the doses from CT scans observed by Rehani et al. (2019a, b).

Furthermore, the studies of Diallo et al. (2009) and Dörr and Herman (2002) are of considerable relevance here. Diallo et al. showed that the incidence of second cancers in paediatric radiotherapy patients is concentrated around the treatment field edge, in a region of high dose gradient that makes accurate retrospective dosimetry difficult; most second cancers occur within approximately ± 10 cm of the field edge. In the example above, using the data of Majer et al. (2017), the total imaging dose exceeds the radiotherapy component at distances > 12 cm from the field edge. At distances < 10 cm from the field edge, the imaging dose is progressively smaller than the radiotherapy component but can still be significant. To understand the radiobiological processes occurring in this region, it is therefore necessary to determine the total imaging dose. This is of particular relevance given that there are already reports that cancer risks might be increased among those who received CT scans in childhood (Pearce et al. 2012; Mathews et al. 2013). Consequently, a pooled European study on CT scans among children has been initiated recently (Bosch et al. 2015; Bernier et al. 2019).
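The arithmetic behind the quoted percentages can be sketched as follows. The out-of-field radiotherapy dose values used below are hypothetical round numbers chosen only so that the resulting fractions match the worked example above; they are not measured data from Majer et al. (2017).

```python
# Illustrative sketch: fraction of the total out-of-field dose that is
# due to concomitant imaging during a course of radiotherapy.

N_IMAGES = 20          # number of imaging sessions over the treatment course
DOSE_PER_IMAGE = 10.0  # mGy per image (kV onboard imaging, assumed uniform)
imaging_dose = N_IMAGES * DOSE_PER_IMAGE  # ~200 mGy in total

# Hypothetical out-of-field radiotherapy doses (mGy) versus distance from
# the field edge; the out-of-field dose falls steeply with distance.
radiotherapy_dose = {0: 40000.0, 12: 200.0, 27: 50.0}  # cm -> mGy

for distance_cm, rt_dose in radiotherapy_dose.items():
    fraction = imaging_dose / (imaging_dose + rt_dose)
    print(f"{distance_cm:>2} cm from field edge: imaging = {fraction:.1%} of total dose")
# Output: 0.5% at the field edge, 50.0% at 12 cm, 80.0% at 27 cm,
# consistent with the <1%, ~50% and ~80% figures quoted above.
```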

Future challenges …

There are several challenges in arriving at a combined dose to the patient from therapy and imaging. First, it should be stressed that all out-of-field doses are subject to many variables associated with the treatment machine design, the radiation properties, the treatment planning technique (including imaging) and, importantly, the very wide variations associated with patient size and anatomy. This makes generalization difficult in practice. Furthermore, the examples quoted above are for radiotherapy using megavoltage X-ray beams, and the increasing use of proton and particle beam therapy raises many further issues, e.g. the contribution of secondary neutrons and the generally lower out-of-field doses. However, such discussions are beyond the scope of this editorial.

The second challenge is that assessments of imaging doses and risks frequently use the concept of effective dose as a surrogate for "risk", even though this quantity is problematic when applied to medical exposures. Effective dose is based on mean organ doses to a set of organs with defined weighting factors, which contrasts with the steep dose gradients near the field edge, the major region of interest in radiotherapy. Effective dose is also averaged over age at exposure, sex and population, making its application to specific patient groups, such as children, dubious. Moreover, Diallo et al. (2009) reported a significant incidence of sarcomas in their field-edge study, but in the definition of effective dose, the soft tissues in which sarcomas arise belong to the remainder organs, with an ill-defined risk factor. In any case, effective dose assumes the linear no-threshold (LNT) hypothesis, which may not be valid at doses > 2 Gy close to the field edge (Schneider and Walsh 2008).
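For reference, the definition of effective dose (as given in ICRP Publication 103) makes the averaging explicit:

```latex
% Effective dose (ICRP Publication 103): a weighted sum of mean organ
% equivalent doses, not a point quantity.
\begin{equation}
  E \;=\; \sum_{T} w_{T}\, H_{T}
    \;=\; \sum_{T} w_{T} \sum_{R} w_{R}\, D_{T,R},
\end{equation}
% where D_{T,R} is the mean absorbed dose to tissue T from radiation R,
% w_R is the radiation weighting factor (w_R = 1 for photons), and the
% w_T are age- and sex-averaged tissue weighting factors normalized so
% that \sum_T w_T = 1. The averaging over whole organs, ages and sexes
% is precisely what makes E problematic near a radiotherapy field edge,
% where the dose gradient across a single organ is steep.
```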

These problems have prompted efforts to model the combined dose from radiotherapy and imaging using Monte Carlo techniques, and to test the results experimentally by simulating the treatment with anthropomorphic phantoms loaded with thermoluminescence (TL) and other passive dosemeters. Such work is, for example, pursued by the European Radiation Dosimetry Group (EURADOS), building upon previous experimental studies of out-of-field doses in radiotherapy (Majer et al. 2017; Stolarczyk et al. 2018; Knežević et al. 2018). A summary of recent findings of these efforts can be found in Rühm et al. (2019).

… and the way to go

The development of dosimetry techniques and the measurement of doses from both diagnostic imaging involving ionizing radiation and radiotherapy are therefore important prerequisites for advancing this field of study. Epidemiological studies of second cancers (and also of long-term non-cancer effects) following radiotherapy, in particular among those exposed at a young age, require a specification of the dose to the patient at the site of the subsequent malignancy, making out-of-field dosimetry an important area of dosimetric development (Harrison 2017; Harrison et al. 2017). In combination with the doses a patient might receive from various applications of X-ray diagnostics, total dose estimates will be indispensable in any epidemiological study on second cancer risk among cancer survivors. Such studies will gain attention in the future, given (a) that the most important radio-epidemiological study to date, the lifespan study of the atomic bomb survivors, will come to an end in the next 20 years or so, when most of the survivors have died; (b) that the number of cancer patients treated with ionizing radiation is expected to increase worldwide; and (c) that the number of cancer survivors will increase with increasing treatment success. Studies such as those of Rehani and co-workers (Brambilla et al. 2019; Rehani 2019; Rehani et al. 2019a, b) and Harrison (Harrison 2017; Harrison et al. 2017) pave the way. This is consistent with one of the major visions formulated by EURADOS in its Strategic Research Agenda, which summarizes the major challenges in dosimetry for the next 20 years (Rühm et al. 2016).