Naturwissenschaften, Volume 94, Issue 7, pp 517–526

Getting ready for the manned mission to Mars: the astronauts’ risk from space radiation

Authors

  • Christine E. Hellweg
    • DLR, Institut für Luft- und Raumfahrtmedizin
  • C. Baumstark-Khan
    • DLR, Institut für Luft- und Raumfahrtmedizin
Review

DOI: 10.1007/s00114-006-0204-0

Cite this article as:
Hellweg, C.E. & Baumstark-Khan, C. Naturwissenschaften (2007) 94: 517. doi:10.1007/s00114-006-0204-0

Abstract

Space programmes are shifting towards planetary exploration and, in particular, towards missions by human beings to the Moon and to Mars. Radiation is considered to be one of the major hazards for personnel in space and has emerged as the most critical issue to be resolved for long-term missions, both orbital and interplanetary. The two cosmic sources of radiation that could impact a mission outside the Earth’s magnetic field are solar particle events (SPEs) and galactic cosmic rays (GCR). Exposure to the types of ionizing radiation encountered during space travel may cause a number of health-related problems, but the primary concern is the increased risk of cancer induction in astronauts. Predictions of cancer risk and acceptable radiation exposure in space are extrapolated from minimal data and are subject to many uncertainties. The paper describes present-day estimates of equivalent doses from GCR and solar cosmic radiation behind various shields and radiation risks for astronauts on a mission to Mars.

Keywords

Space · Galactic cosmic rays · Solar cosmic radiation · Astronauts’ radiation risk

Introduction

In 1961, the age of manned space flight was initiated by the launch of Yuri Gagarin on board a Russian Vostok rocket. Since that time, numerous manned missions to Earth’s orbit have been carried out routinely by the Americans and Russians and, in 2003, by China, lasting from a few days in capsules to several hundred days in space stations. A highlight in manned space flight was achieved in July 1969, when Neil Armstrong, commander of Apollo 11, set foot upon the Moon, declaring famously that it was “one small step for a man, one giant leap for mankind”. With the beginning of not only a new century but also a new millennium, mankind extends its view to other planets to search for life out there. Life on other planets is not seriously expected to resemble Hollywood’s famous ‘E.T.’. Rather, it is a question of whether cryptic life forms similar to those known from early Earth exist in fluids, soil or rocks.

Over the next few decades, humans will most likely be exposed to different extraterrestrial scenarios (Horneck et al. 2003). The Moon base scenario will consist of a lunar human outpost at the South Pole with constant sunlight illumination and potential resources of water-ice deposits. It could be followed by a short-term Mars scenario of about 500 days with a 30-day stay on Mars, or by a long-term Mars scenario of about 1,000 days with a 525-day stay on the Martian surface. Indeed, a manned mission to Mars offers advantages that automated missions cannot provide: humans can make their own decisions when it comes to acting on data. This is crucial, as data transmission from Earth to Mars and vice versa could take up to 40 min. For a safe mission to Mars, several environmental factors need to be considered, either individually or in combination. The most harmful environmental factors, which will continue to influence future manned space missions, are (1) cosmic ionizing radiation and the secondary radiation produced by interaction of the cosmic primaries with atoms and molecules of the atmosphere or of the shielding material, as well as of the human body itself; (2) solar particle events (SPEs), which occur sporadically, may last several days, and cause temporary but substantial increases in the radiation dose; and (3) the reduced gravity of 0.377×g on the Martian surface, which the crew experiences after a trip in microgravity of nearly 1 year and a heavy g-load of up to 6×g during landing.

The exceptional radiation field in space

Regions of outer space characterized by extreme conditions involving nuclear and atomic reactions (very high temperatures, low densities, and high-speed subatomic particles) emit electromagnetic radiation consisting of soft X-rays with wavelengths from about 1 to 10 nm and of more penetrating hard X-rays from approximately 0.01 to 1 nm. The space ionizing radiation environment of our galaxy is dominated by highly energetic and penetrating ions and nuclei. These particles constitute the primary radiation hazard for humans in space. In interplanetary space, the primary components of the radiation field (Fig. 1) are galactic cosmic rays (GCR) and solar cosmic radiation (SCR).

Fig. 1

Space radiation sources of our solar system. Of special concern for long-duration space missions are GCR and extra-GCR and the electrons, protons and heavy ions of SCR

GCR (Shea and Smart 1998; Badhwar and O’Neill 1994, 1996; Baranov et al. 2002; Pissarenko 1994; Wilson et al. 1999; Bazilevskaya et al. 1994) originate from outside our Solar System and consist of 98% baryons and 2% electrons. The baryonic component is composed of 87% protons (hydrogen nuclei), 12% alpha particles (helium nuclei), and about 1% heavier nuclei with atomic numbers Z up to 92 (uranium). The energy range of GCR extends over more than 15 orders of magnitude, from less than 1 MeV (= 10⁶ eV) to more than 10²¹ eV. When GCR enter our Solar System, they must overcome the outward-flowing solar wind, the intensity of which varies with an approximately 11-year cycle of solar activity. Hence, the GCR fluxes also vary with the solar cycle, an effect known as solar modulation (Badhwar 1997; Wilson et al. 1989). The difference between solar minimum and solar maximum is a factor of approximately five: GCR flux peaks during minimum solar activity and is lowest during maximal solar activity. At peak energies of about 200–700 MeV/u during solar minimum, particle fluxes (flow rates) reach 2 × 10³ protons μm⁻² year⁻¹ and 0.6 Fe ions μm⁻² year⁻¹. As the dose to an individual cell is proportional to the square of the particle’s energy-dependent effective charge (Katz et al. 1971), iron ions nevertheless contribute significantly to the total radiation dose.
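
The Z² scaling can be illustrated with a small back-of-the-envelope calculation. The Python sketch below (our own illustration, not from the paper) compares the relative dose contributions of protons and iron ions using the fluences quoted above; it assumes fully stripped ions, so the effective charge is approximated by the atomic number Z, and the velocity dependence of the effective charge is ignored.

```python
# Back-of-the-envelope sketch (illustration only): relative GCR dose
# contributions of protons vs. iron ions, using the annual fluences quoted
# in the text and assuming dose per particle scales with Z^2 (fully
# stripped ions; velocity dependence of the effective charge neglected).

FLUX_PROTON = 2e3   # protons per um^2 per year (solar minimum, from the text)
FLUX_IRON   = 0.6   # Fe ions per um^2 per year (solar minimum, from the text)
Z_PROTON, Z_IRON = 1, 26

dose_p  = FLUX_PROTON * Z_PROTON**2   # relative dose units
dose_fe = FLUX_IRON   * Z_IRON**2

total = dose_p + dose_fe
print(f"Fe/proton fluence ratio  : {FLUX_IRON / FLUX_PROTON:.1e}")
print(f"Fe share of combined dose: {dose_fe / total:.1%}")
# Despite being >3000x rarer, iron contributes roughly 17% of the combined
# proton + iron dose under this crude Z^2 scaling.
```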

SCR consists of low-energy solar wind particles that flow constantly from the Sun and of the highly energetic particles of so-called SPEs, which originate from magnetically disturbed regions of the Sun that sporadically emit bursts of energetic charged particles (Wilson et al. 1999; Smart and Shea 2003). These are composed predominantly of protons, with a minor contribution from helium ions (∼10%) and an even smaller one from heavy ions and electrons (∼1%). The average 11-year cycle of solar activity can be divided into four inactive years with a small number of SPEs around solar minimum and seven active years with higher numbers of SPEs around solar maximum. During the solar minimum phase, few significant SPEs occur, whereas during each solar maximum phase, several large events may occur. For example, in cycle 22 (1986–1996), there were at least eight events with proton energies greater than 30 MeV (Smart and Shea 2002; Shea and Smart 1994). SPEs are unpredictable, develop rapidly, and generally last no more than a few hours; however, some proton events may continue for several days. In a worst-case scenario, the emitted particles can reach energies of up to several giga-electron-volts per atomic mass unit, and the doses received could be immediately lethal for an astronaut in free space.

In the vicinity of planets with magnetic fields, a third radiation component is present: energetic charged particles trapped by the planetary magnetic field in the so-called radiation or Van Allen belts. The Earth’s belts (Spjeldvik et al. 2002; Heynderickx 2002) are toroidal regions of trapped protons and electrons; both the inner and outer belts contain large fluxes of high-energy ionized particles, including electrons, protons, and some heavier ions.

Radiation effects on humans

The effect of ionizing radiation on human beings (Table 1) can be categorized as either acute or delayed, based on the time frame in which the effects are observed. Acute effects usually appear quite soon after exposure when people receive high doses in a short period of time (minutes to a few days). Delayed effects, such as cancer, can occur when the combined dose and dose rate are too small to cause acute effects leading to death.
Table 1

Radiation effects in humans after whole-body irradiation

Chronic dose       Risk
  ∼0.4 Sv          First evidence of increased cancer risk as late effect from protracted radiation
  2–4 Sv/year      Chronic radiation syndrome with complex clinical symptoms

Acute single dose    Effect                                                      Outcome
  ∼0.2 Sv            First evidence of increased cancer risk as late effect      –
  <0.25 Sv           No obvious direct clinical effects                          –
  >0.5 Sv            Nausea, vomiting                                            No early death anticipated
  (>0.7) 3–5 Sv      Bone marrow syndrome: internal bleeding, fatigue,           Death rate peaks at 30 days but continues
                     bacterial infections, and fever                             out to 60 days; death occurs from sepsis
  5–12 Sv            Gastrointestinal tract syndrome: nausea, vomiting,          Deaths occur between 3 and 10 days
                     diarrhoea, dehydration, electrolytic imbalance, loss        post-exposure; death occurs from sepsis
                     of digestion ability, bleeding ulcers
  >20 Sv             Central nervous system syndrome: loss of coordination,      No survivors expected
                     confusion, coma, convulsions, shock, and the symptoms
                     of the blood-forming organ and gastrointestinal tract
                     syndromes

The acute radiation syndrome is a sequence of phased symptoms (Cronkite 1964; Brucer 1964; Chaillet et al. 1993) that vary with individual radiation sensitivity, type of radiation, and the absorbed radiation dose. After radiation exposure with doses >1 Sv, the prodromal stage is characterized by the rapid onset of nausea, vomiting, and malaise, followed by a nearly symptom-free phase lasting days to weeks, depending on dose. Humans who have received radiation doses between 0.7 and 4 Sv suffer depression of bone marrow function (haematopoietic syndrome); within 2–6 weeks, lymphocyte deprivation and anaemia lead to decreased resistance to infections and to death from sepsis. The death rate for this syndrome peaks at 30 days after exposure but continues up to 60 days. Higher single doses of ionizing radiation (6–8 Sv) result in characteristic severe fluid losses, haemorrhage, and diarrhoea (gastrointestinal syndrome), starting after a short latent period of a few days to a week. Derangement of the luminal epithelium and injury to the fine vasculature of the submucosa lead to loss of the intestinal mucosa. Without treatment, radiation enteropathy consequently results in an inflammatory response upon infection by bacterial transmigration; deaths from sepsis may occur between 3 and 10 days post-exposure. After irradiation with very high acute doses (20–40 Sv) and a very short latent period of several hours to 1–3 days, the clinical picture is a steadily deteriorating state of consciousness with eventual coma and death (neurovascular syndrome). Symptoms include loss of coordination, confusion, convulsions, shock, and severe symptoms of the blood-forming organ (BFO) and gastrointestinal tract syndromes; no survivors can be expected.

The chronic radiation syndrome is defined as a complex clinical syndrome occurring as a result of long-term exposure (Reeves and Ainsworth 1995) to total radiation doses (2–4 Sv/year) that far exceed the permissible occupational dose. Clinical symptoms are diffuse and may include sleep and/or appetite disturbances, generalized weakness and easy fatigability, increased excitability, loss of concentration, impaired memory, mood changes, headaches, bone pain, and hot flashes. It is hardly conceivable that human missions will be performed where the probability of such exposure is non-negligible.

For radiation doses <1 Sv per year, the induction of tumours is the most important long-term secondary disorder (Pierce et al. 1996; IARC Study Group on Cancer Risk among Nuclear Industry Workers 1994; Zaider 2001; Brooks 2003). Tumour induction at low doses is considered a stochastic effect, governed by statistical probability. Nevertheless, most of the data used to construct risk estimates come from radiation doses greater than 1 Sv and are then extrapolated downwards for low-dose probability estimates; significant direct data for absolute risk determination at doses below 0.1 Sv are not available. For the various radiation-induced cancers seen in humans, the latency period may range from several years up to two to three decades. The radiation-induced cancer risk of an individual person is difficult to quantify on Earth because of the already high background risk of developing cancer, and even less is known about the cancer risk from the complex space radiation field.

Radiation as a risk factor for humans in space

Radiation risk for humans in space falls into two categories: it can almost immediately affect the probability of successful mission completion (mission criticality), and it can result in late radiation effects in the individual space traveller. In both cases, risk is considered a monotonically increasing function of dose and is thereby correlated with space environmental parameters and mission duration. Risk avoidance is the best protection strategy, although it is nearly impossible to avoid risk completely. Avoidance can be promoted significantly by careful mission planning with respect to duration and temporal position within the solar activity cycle, and by surrounding crew habitats with sufficient absorbing matter. Accordingly, the most effective countermeasure is to keep radiation exposure as low as possible.

Up to now, manned missions outside the shield provided by the Earth’s geomagnetic field have been limited to the Apollo crews’ short visits to the lunar surface. Fortunately, the normal, ‘steady’ state of the cosmic radiation field prevailed during these excursions, so the Apollo crews were spared exposure to the higher fluxes of SPEs. Long-term manned missions started with the Skylab crews, were extended by the Mir cosmonauts, and will expand further during the utilization of the International Space Station (ISS). From previous missions, a large body of data on the radiation fields (Pissarenko 1994), qualities, and dose distributions (Curtis et al. 1986; National Council on Radiation Protection and Measurements 1989), as well as some data on biological endpoints (Brooks 2003), such as chromosome aberration frequencies (Obe et al. 1997; Testard and Sabatier 1999; Cucinotta et al. 2000; George et al. 2003, 2005), are available for near-Earth orbits.

During the colonization of the Moon or Mars, the relative shielding protection of low Earth orbit (LEO) has to be replaced by measures that are both available and effective within the spacecraft or habitat itself. Towards this goal, quantitative estimates of the biologically effective radiation doses, and the corresponding estimates of the radiobiological effects in man and their impact on the performance and life expectancy of the space crew, have to be developed for each mission. Tables 2, 3 and 4 give radiation equivalent doses for the skin, the ocular lens, and the BFOs estimated for different phases of the mission scenarios, together with the radiation limits recommended for the ISS in LEO (Horneck et al. 2003; National Council on Radiation Protection and Measurements 1989; Townsend et al. 1992; Simonsen et al. 2000; Badhwar et al. 1992) for comparison. As these mission scenarios do not encounter trapped radiation, apart from the exposure incurred during the short crossings of the terrestrial radiation belts, this contribution may be neglected compared to the remaining radiation levels. A component that affects radiation exposure only indirectly is the solar wind, which modulates the primary galactic heavy ions according to the rhythm of solar activity. For assessing the radiation exposure, the following parameters are also relevant: mission duration, solar distance, solar cycle time, mass shielding, and mission phase.
Table 2

Radiation tissue-equivalent doses from galactic cosmic rays for the specified mission scenarios. Mission doses from GCR for reference scenarios, taking fluxes of 1977 for solar minimum and of 1970 for maximum solar activity.

                                       BFO dose-equivalent rate (mSv a⁻¹)   BFO mission-equivalent doses (Sv)
Shield/thickness      Solar activity   Free space   Luna    Mars            Luna (190 days)  Mars (450 days)  Mars (947 days)
Pressure vessel,      Minimum          711.7        355.9   119             0.195            0.828            0.993
1 g cm⁻² Al           Maximum          271.7        135.9   61              0.074            0.317            0.402
Equipment room,       Minimum          646.9        323.5   119             0.177            0.754            0.918
5 g cm⁻² Al           Maximum          255.6        127.8   61              0.070            0.299            0.383
Shelter,              Minimum          589.0        294.5   119             0.161            0.687            0.852
10 g cm⁻² Al          Maximum          239.5        119.8   61              0.066            0.280            0.364
Shelter,              Minimum          517.6        258.8   119             0.142            0.605            0.769
20 g cm⁻² Al          Maximum          217.7        108.9   61              0.060            0.255            0.339

Table 3

Radiation tissue-equivalent doses for the specified mission scenarios. Mission doses from SPEs for reference scenarios assuming worst-case exposure conditions (23rd February 1956 event, as approximated by 10× the flux of the 29th September 1989 event).

                                              Tissue mission-equivalent doses (Sv)
Shield/thickness      Tissue                  Free space   Luna     Mars
Space suit,           Skin                    295.10       147.55   0.45
0.3 g cm⁻² Al         Ocular lens             81.30        40.65    0.44
                      Blood-forming organs    4.21         2.11     0.32
Pressure vessel,      Skin                    64.40        32.20    0.44
1 g cm⁻² Al           Ocular lens             35.50        17.75    0.42
                      Blood-forming organs    3.52         1.76     0.31
Equipment room,       Skin                    6.48         3.24     0.38
5 g cm⁻² Al           Ocular lens             5.54         2.77     0.37
                      Blood-forming organs    1.93         0.97     0.28
Shelter,              Skin                    2.62         1.31     0.33
10 g cm⁻² Al          Ocular lens             2.43         1.22     0.32
                      Blood-forming organs    1.26         0.63     0.25

Table 4

NCRP (NASA) organ dose limits for low Earth orbit missions (age and gender averages)

Tissue                  Limits (Sv)
                        30 days   Annual   Career
Skin                    1.50      3.0      6.0
Ocular lens             1.00      2.0      4.0
Blood-forming organs    0.25      0.50     (Age/a − 30) × 0.075 + 2 (male)
                                           (Age/a − 38) × 0.075 + 2 (female)

These estimates of the total radiation doses conceivably incurred at the BFOs from galactic heavy ions (Table 2) and from SPE irradiation (Table 3) during the mission scenarios indicate that some exposure levels, e.g. for SPEs, exceed the limits set forth in radiation protection guidelines for LEO, e.g. for ISS operations. The latter have been developed with the aim of keeping the radiation-induced lifetime excess late cancer mortality below 3% and are subject to the additional constraint that any exposure must be kept “as low as reasonably achievable” (ALARA principle). The exposure levels for GCR are fully compatible with the exposure limits recommended for LEO activities (Table 4). However, acute doses deposited in critical organs when encountering a large SPE may reach a few hundred millisievert, and it cannot easily be ruled out that symptoms of early morbidity (acute radiation sickness) will be induced behind 5 g cm⁻² Al, and even in the so-called shelter with 10 g cm⁻² Al shielding, as shown for the doses deposited by the worst-case reference event in deep space (Table 3). If a worst-case SPE were indeed encountered without sufficient shielding, the symptoms shown in Table 1 would appear with near certainty. Symptoms of the prodromal phase resemble those of the early space adaptation syndrome, and quantitative relations to the symptoms of motion sickness have also been established. As such, these symptoms will hardly pose a serious threat unless, e.g. emesis occurred in a space suit. Owing to predictive uncertainties, mission planning, i.e. choosing the temporal position of a mission with respect to the cycle of solar activity, can minimize but not eliminate this risk.
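
To make the comparison of Tables 3 and 4 concrete, the following minimal Python sketch (our own illustration, not a calculation from the paper) checks the worst-case SPE organ doses in free space behind the 10 g cm⁻² Al shelter against the NCRP 30-day limits:

```python
# Illustrative check of Table 3 worst-case SPE doses (free space, behind the
# 10 g/cm^2 Al shelter) against the NCRP 30-day organ dose limits of Table 4.
# The values are transcribed from the tables above; the comparison itself is
# our own illustration.

SHELTER_DOSE_SV = {          # Table 3, shelter (10 g/cm^2 Al), free space
    "skin": 2.62,
    "ocular lens": 2.43,
    "blood-forming organs": 1.26,
}
NCRP_30_DAY_LIMIT_SV = {     # Table 4, 30-day limits
    "skin": 1.50,
    "ocular lens": 1.00,
    "blood-forming organs": 0.25,
}

for organ, dose in SHELTER_DOSE_SV.items():
    limit = NCRP_30_DAY_LIMIT_SV[organ]
    status = "EXCEEDS" if dose > limit else "within"
    print(f"{organ:>22}: {dose:5.2f} Sv vs {limit:4.2f} Sv limit -> "
          f"{status} ({dose / limit:.1f}x)")
# All three organ doses exceed the 30-day LEO limits, the BFO dose by ~5x,
# consistent with the conclusion that even the shelter does not rule out
# acute radiation sickness in a worst-case SPE.
```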

Early health effects from acute irradiation have the potential to degrade crew performance and, hence, to interfere with mission success, whereas late effects will not ensue until years, sometimes decades, after completion of the mission. As for early effects, protraction of exposure reduces the effectiveness of a given dose of (sparsely) ionizing radiation to induce a given effect.

A well-known late effect of space radiation at higher doses is the induction of lens opacities (Lett et al. 1994; Rastegar et al. 2002; Wu et al. 1994). The threshold for detectable cataract formation is about 2 Sv for acute sparsely ionizing radiation doses and 15 Sv for protracted doses. Although all types of ionizing radiation may induce cataract formation, densely ionizing radiation is especially effective in inducing it, even at relatively low doses below 2 Sv. Given the advances in treating and effectively ‘curing’ late cataracts of the ocular lens, the original consideration of this condition as a critical health risk does not seem warranted in the context of pioneering space exploration, especially against the background of an overall mission death risk of a few percent.

Present guidelines for radiation protection for space missions have been derived by starting from a postulated ‘acceptable’ risk for late effects (National Council on Radiation Protection and Measurements 1989; Fry 1994), such as enhanced morbidity or mortality from malignant cancers, which may occur up to 20 years after exposure. Fatal neoplasms of the BFOs give rise to one of the most frequent radiogenic cancers, leukaemia, which in addition has the shortest latency time of adult radiogenic cancers (Ron 1998). The dose-response function linking the dose of an exposure to the probability of producing a (fatal) cancer and thereby reducing life expectancy is described by a straight line through the origin, visualizing the “linear, no threshold” (LNT) hypothesis of applied radiation protection for terrestrial exposure. The presently recommended value for the slope of that line is 4 × 10⁻² Sv⁻¹ for the induction of excess cancer mortality by chronic irradiation. These data carry considerable uncertainty concerning radiation quality, relative biological effectiveness, and further modulating cellular mechanisms.
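
As a worked example of the LNT relation (an illustration of ours, not an analysis from the paper), the sketch below applies the quoted slope of 4 × 10⁻² Sv⁻¹ to the BFO mission doses of Table 2:

```python
# LNT worked example (illustration only): excess lifetime cancer mortality
# R = alpha * D, with the recommended slope alpha = 4e-2 per Sv for chronic
# irradiation, applied to the BFO mission doses from Table 2 (solar minimum,
# pressure vessel with 1 g/cm^2 Al).

ALPHA_PER_SV = 4e-2          # recommended slope for chronic irradiation
MISSION_DOSE_SV = {          # Table 2, 1 g/cm^2 Al, solar minimum
    "Luna (190 days)": 0.195,
    "Mars (450 days)": 0.828,
    "Mars (947 days)": 0.993,
}

for mission, dose in MISSION_DOSE_SV.items():
    print(f"{mission}: {ALPHA_PER_SV * dose:.1%} excess cancer mortality")
# The ~1 Sv GCR dose of a long Mars mission maps to ~4% excess mortality,
# already above the 3% lifetime excess risk underlying the LEO limits.
```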

The development of a tumour as a radiation late effect is still poorly understood, but it is clear that many stages are involved. The first stage, induction or initiation, can definitely be caused by radiation, although radiation’s role in promotion and progression is not yet clear. Cellular repair, tissue reactions, and the immune defence reduce the radiation-related cancer risk under normal circumstances. If microgravity interfered with these processes, the radiation hazard in space would be higher than on Earth. A specific indirect influence of the spaceflight environment on radiation-induced cancer mortality might be brought about by the reduced competence of the immune system, which is caused by prolonged exposure to microgravity (Sonnenfeld 1999; Kiefer and Pross 1999; Todd et al. 1999; Sundaresan et al. 2004). Lymphocyte locomotion is integral to the immune response and is adversely affected in microgravity and modelled microgravity (Pellis et al. 1997). At least for those tumours whose promotion and final expression can be controlled or suppressed by the body’s immune response, the possibility that a reduced immune capacity may enhance tumour expression is evident. Moreover, impaired immune competence, which might result from higher levels of acute radiation exposure, is a risk of its own in closed environments in which crews will be confined for significant periods. This is a special problem, as closed environments might be subject to increased burdens of microbes.

Effects of radiation on the cellular level

The acute and the delayed biological effects of radiation on human beings are a consequence of chemical reactions initiated by energy deposition in cells and tissues. These reactions modify the division processes by which cells reproduce, as well as other cell functions required for healthy living organisms. Cells have the ability to repair themselves; when that repair is successful, the tissues and organisms return to their normal state (Friedberg et al. 2006). When the repair is not successful, cells may die. If a sufficiently large number of cells are killed, tissue integrity and function may be impaired, as occurs in acute radiation effects. Repair may be successful from the point of view of cell survival but may contain latent errors that only manifest in subsequent generations of dividing cells (Goodhead 1994; Friedberg 1996; Lambert et al. 1998; Smiraldo et al. 2005).

For the particles composing space radiation, energy deposition is highly localized along the trajectory of each particle. The rate of energy deposition per unit length of trajectory, the linear energy transfer (LET, in kilo-electron-volts per micrometre), changes as a function of the particle’s kinetic energy. High-energy charged particles lose energy when they traverse any material, including the human body. GCR particles of average energy can penetrate a substantial thickness of material, and their lighter secondary products can penetrate even further. For this reason, the biological effectiveness of radiation changes as a function of depth of penetration.
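
The link between LET, particle fluence, and absorbed dose can be made explicit with the standard conversion D[Gy] = 1.602 × 10⁻⁹ × LET[keV/μm] × Φ[cm⁻²] / ρ[g/cm³]. The sketch below applies it; the conversion constant is the standard one, but the particle parameters are hypothetical examples chosen by us, not values from the paper.

```python
# Absorbed dose from a particle fluence via the standard conversion
#   D [Gy] = 1.602e-9 * LET [keV/um] * fluence [1/cm^2] / density [g/cm^3]
# (the constant 1.602e-9 bundles the keV -> J and um, cm^2 -> kg conversions).
# The example particles below are hypothetical, chosen only for illustration.

def absorbed_dose_gray(let_kev_per_um: float, fluence_per_cm2: float,
                       density_g_per_cm3: float = 1.0) -> float:
    """Dose in Gy deposited in a medium (default: unit-density tissue)."""
    return 1.602e-9 * let_kev_per_um * fluence_per_cm2 / density_g_per_cm3

# A low-LET relativistic proton vs. a high-LET HZE iron ion at the same
# fluence: dose tracks LET when the fluence is fixed.
for label, let in [("proton, ~0.3 keV/um", 0.3), ("Fe ion, ~150 keV/um", 150.0)]:
    print(f"{label}: {absorbed_dose_gray(let, 1e7):.2e} Gy at 1e7 cm^-2")
```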

High atomic number and high-energy particle (HZE) radiation-induced lesions and cellular radiation responses have to be studied at the molecular and cellular level. Of special interest are studies that determine the nature of the lesions induced (both immediate and persistent), the mechanisms by which the lesions are formed, and the potential consequences of the induced damage for cellular processes. As a reactive chemical species, DNA is the target of numerous physical and chemical agents. As a result of exposure to ionizing radiation, a broad spectrum of lesions is induced in cellular DNA, depending on the type, quality, and dose of radiation (Kraft 1987; Baumstark-Khan et al. 1993, 2005; Hada and Sutherland 2006). Radiochemical injuries of relevance for biological function include nucleotide base damage, cross-links, and DNA single- and double-strand breaks. Radiation-induced DNA damage can affect cellular reproduction and the conservation of genetic stability, resulting in cellular inactivation and mutation induction. To maintain genomic integrity, human cells have developed elaborate pathways to detect, signal, and repair DNA damage (Wood 1996; Karagiannis and El-Osta 2004; Collis et al. 2005; Reddy and Vasquez 2005). Accordingly, studies of damage identification, processing, and repair are of great importance.

It is hypothesized that altered expression of cell cycle regulatory proteins or stress response genes may represent important early events in the process of oncogenic transformation in vitro. Ionizing and non-ionizing radiations are known to induce certain cellular genes and to decrease the expression of others (Park et al. 2002; Chaudhry et al. 2003; Snyder and Morgan 2004), giving rise to the induction of compensatory mechanisms such as DNA repair, apoptosis, protein expression and processing, and signal transduction and intercellular signalling. This modulation of gene expression is thought to be mediated to some extent by promoter-specific interactions and regulatory elements such as transcription factors. The complex molecular responses to radiation are mediated by a diversity of regulatory pathways. It has been shown that pro- and anti-apoptotic signals are simultaneously induced, with the pro-apoptotic pathway mediated by p53 targets and the pro-survival pathway by nuclear factor-κB (NF-κB) targets (Rashi-Elkeles et al. 2006). Defects in both pathways are related to tumour development: p53 defects result in abortive cell cycle control, while inappropriate NF-κB activation or overexpression of NF-κB-dependent target genes gives surviving cells with residual DNA damage additional growth advantages. Intracellular signal transduction pathways, cytoskeletal organization, as well as lymphocyte proliferation rates and activation, have been reported to be altered by microgravity (Schmitt et al. 1996; DeGroot et al. 1991; Pippia et al. 1996). On the other hand, repair processes are under cellular control and may depend on transcriptional activity. This has so far been shown for excision repair of UV damage (Hanawalt 1994) but may also exist for other pathways. If microgravity interfered with such processes, the radiation hazard in space could possibly result in increased tumour development as a late response.

Biological dosimetry for post-flight analysis of radiation exposure

The problem of the potential hazard to astronauts from cosmic ray HZE particles became “visible” when the astronauts of the Apollo 11 mission, returning from the Moon, reported light flashes, i.e. faint spots and flashes of light at a frequency of one or two per minute after some period of dark adaptation. Evidently, these light flashes result from HZE particles of cosmic radiation penetrating the spacecraft structure and the astronauts’ eyes, producing visual sensations through interaction with the retina. Investigations of the frequency of visual light flashes in LEO and its dependence on orbital parameters were performed on Skylab 4 and, more recently, on Mir (Casolino et al. 2003). The light flash phenomenon is an example of HZE particle hits literally being “seen” by the astronaut. The question arises as to what happens in the other organs and tissues of the body exposed to cosmic radiation. Of special concern are the BFOs, where damage to stem cells may result in long-lived biological markers for assessing radiation exposure.

In astronauts, an elevation of the frequencies of chromosomal aberrations in peripheral lymphocytes has been reported after long-term spaceflights. Obe et al. (1997) investigated lymphocytes of seven astronauts who had spent several months on board the Mir space station. They showed that the frequency of dicentric chromosomes increased by a factor of approximately 3.5 compared to pre-flight controls and that the observed frequencies agreed quite well with the values expected from the absorbed doses and particle fluxes encountered by the individual astronauts during the mission. These data suggest the feasibility of using chromosomal aberrations as a biological dosimeter for monitoring the radiation exposure of astronauts. Chromosome aberration dosimetry can detect radiation damage incurred during spaceflight, and such biological measurements support the current risk estimates for space radiation exposure. However, for cosmonauts involved in multiple space missions, the frequency of chromosomal aberrations is lower than expected, suggesting that the effects of repeated spaceflights on this particular endpoint are not simply additive (Durante et al. 2006).
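
In practice, chromosome-aberration dosimetry inverts a calibration curve: dicentric yields measured post-flight are converted to an equivalent dose via a linear-quadratic fit, Y = Y₀ + αD + βD², obtained from in vitro irradiation of lymphocytes. The sketch below illustrates the inversion; the coefficients and yields are invented numbers of a plausible magnitude, not values from the studies cited.

```python
# Sketch of dicentric-based biological dosimetry (illustrative only).
# Calibration: Y = Y0 + A*D + B*D^2 (dicentrics per cell); solve for D given
# an observed post-flight yield. The coefficients below are made-up numbers
# of a typical low-LET calibration magnitude, not values from the paper.

import math

Y0 = 0.001   # background dicentric yield per cell (assumed)
A  = 0.02    # linear coefficient, per Sv (assumed)
B  = 0.06    # quadratic coefficient, per Sv^2 (assumed)

def dose_from_yield(y_observed: float) -> float:
    """Invert Y = Y0 + A*D + B*D^2 for the dose D (Sv equivalent)."""
    # Positive root of B*D^2 + A*D + (Y0 - y_observed) = 0
    return (-A + math.sqrt(A**2 - 4 * B * (Y0 - y_observed))) / (2 * B)

pre_flight_yield  = 0.001                     # matches the assumed background
post_flight_yield = 3.5 * pre_flight_yield    # ~3.5x rise, as reported above
print(f"Estimated equivalent dose: {dose_from_yield(post_flight_yield)*1000:.0f} mSv")
```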

Another promising technique for the determination of mission-related DNA damage is premature chromosome condensation, which allows interphase chromosome painting and the detection of non-rejoining chromatin breaks without the cells having to go through a first mitosis (George et al. 2003). This method is especially relevant for the biological dosimetry of astronauts exposed to high doses of high-LET radiation in space, which may induce interphase death and cell cycle delay.

In addition to chromosomal aberrations, other intrinsic biomarkers for genetic or metabolic changes may be applicable as biological dosimeters (Horneck 1998), such as germ line minisatellite mutation rates or radiation-induced apoptosis, metabolic changes in serum, plasma, or urine (e.g. serum lipids, lipoproteins, ratio of high- and low-density lipoprotein cholesterol, lipoprotein lipase activity, lipid peroxides, melatonin, or antibody titers), hair follicle changes and decrease in hair thickness, triacylglycerol-concentration in bone marrow, and glycogen concentration in liver. Whereas the first three systems mentioned are non-invasive or require only blood samples for analysis, the latter systems are invasive and, therefore, appropriate for radiation monitoring in animals only. Dose response relationships have been described for most of the intrinsic dosimetry systems, yet their modification by microgravity remains to be established.

Countermeasures for radiation protection

To minimize the risk from space radiation during exploratory missions, future research and development are required in the following categories: (1) adequate quantitative risk assessment for accurate mission design and planning to minimize the expectation value of healthy lifetime lost (HLL); (2) surveillance of radiation exposure during the mission for normal and alarm operational planning and for record keeping; (3) surrounding crew habitats with sufficient absorbing matter; and (4) countermeasures to minimize the health detriment from the radiation actually received, by selecting radiation-resistant individuals or by increasing resistance, e.g. with radioprotective chemicals. The opposite selection process, whereby individuals with an identifiable genetic disposition for increased susceptibility to spontaneous and radiogenic carcinogenesis are screened out, will in any case be part of standard crew selection.

In addition to the standard countermeasures, such as avoidance of exposure by adequate shielding and mission planning (Huntress et al. 2006) or by chemo-protective and even nutritional measures (Weiss and Landauer 2003), it is important to foster radiobiological research activities with the aim of significantly reducing the uncertainty of our risk estimates. These uncertainties are related to the potentially unique radiobiological properties of galactic heavy ions and to possible modifications of space radiation effects, either synergistic or antagonistic, brought about by changes in whole-body status during spaceflight. This status is not only shifted to a new set point by microgravity but may also be altered in response to general stress, including psychological stress. Terrestrial research at heavy-ion accelerators (Virtanen 2006) will have to focus on the effects of single heavy ions on individual cells, as investigated, e.g. with recently developed microbeam techniques (Folkard et al. 2003). A definite answer concerning the modification of radiation effects by the exposure conditions in space, however, will only be found in properly designed experimental studies on the ISS or on a lunar base. Finally, the criteria presently used in deriving space radiation exposure limits need to be redefined to allow for an integrated, unified risk management and design approach that, among other advantages, explicitly considers the repercussions of radiation protection measures, such as shielding design or mission planning, on the overall mission success probability. The (probabilistic) expectation value of the HLL, i.e. the number of healthily lived years lost due to an exploratory space mission, would serve such purposes more adequately than the presently invoked criteria, and its minimization would allow for a balanced, combined treatment of early and late radiation effects on an equal footing.
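
The HLL criterion can be written as a simple expectation value: if scenario i occurs with probability p_i and costs L_i years of healthy life, then E[HLL] = Σ p_i L_i. The toy sketch below is purely our own illustration of how early (mission-critical) and late (cancer) effects enter the same sum; all numbers are invented.

```python
# Toy expectation-value sketch of healthy lifetime lost (HLL): each risk
# scenario contributes probability * years-of-healthy-life-lost. All numbers
# below are invented for illustration; they are not estimates from the paper.

SCENARIOS = [
    # (description,                           probability, healthy years lost)
    ("acute SPE fatality (early effect)",     0.001,       40.0),
    ("radiogenic cancer death (late effect)", 0.04,        15.0),
    ("non-fatal late morbidity",              0.05,         2.0),
]

expected_hll = sum(p * years for _, p, years in SCENARIOS)
print(f"E[HLL] = {expected_hll:.2f} healthy years lost")
# Early and late effects enter the same sum, so shielding and mission-planning
# trade-offs can be compared on a single scale, as the text proposes.
```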

Conclusion

Radiation-induced cancer is one of the main late effects of concern for manned space exploration. Predictions of the nature and magnitude of the risks posed by exposure to radiation in space are subject to many uncertainties. In future years, worldwide efforts will have to focus on an increased understanding of the oncogenic potential of space radiation and of the underlying biological processes and their disruption by space radiation.

Copyright information

© Springer-Verlag 2007