Nuclear Energy, pp 361–367


  • John W. Poston, Sr.
Reference work entry
Part of the Encyclopedia of Sustainability Science and Technology Series book series (ESSTS)


Absorbed dose

The amount of energy deposited by ionizing radiation per unit mass of the material. Usually expressed in the special radiologic unit, the rad, or in the SI unit, the gray (Gy). One Gy equals 1 J/kg or 100 rad.


Dosimeter

Any device worn or carried by an individual to establish the total exposure, absorbed dose, or equivalent dose (or the corresponding rates) in the area or to the individual worker while occupying the area.

Equivalent dose

(Formerly the dose equivalent) The product of the absorbed dose and the radiation-weighting factor (formerly the quality factor) for the type of radiation for which the absorbed dose is measured or calculated. The equivalent dose is used to express the effects of the radiation absorbed dose from many types of ionizing radiation on a common scale. The special radiologic unit is the rem; the SI unit is the sievert (Sv). One Sv is equal to 100 rem.


Exposure

A quantity defined as the charge produced in air by photons interacting in a volume of air of known mass. An old quantity that is generally no longer used. Also, a general term used to indicate any situation in which an individual is being irradiated.


Ionization

The process of removing one or more electrons from an atom or a molecule. The positively charged atom and the negatively charged electron are called an ion pair.


Isotope

One of two or more atoms with the same number of protons but a different number of neutrons in their nuclei. Isotopes of the same chemical element have the same chemical properties but have, usually, very different nuclear properties.


Nuclide

A general term to indicate an atomic nucleus characterized by its atomic number (number of protons), number of neutrons, atomic mass, and energy state.


Radiation

Used in this section to mean ionizing radiation. That is, particles or electromagnetic radiation emitted from the nucleus with sufficient energy to cause ionization of atoms and molecules composing the material with which the radiation is interacting.


Radioisotope

An isotope of a chemical element that is unstable and transforms (decays) by emission of nuclear particles and electromagnetic radiation to reach a more stable state.


Radionuclide

Another name used for a radioisotope. A radioactive nuclide.

Definition of the Subject

Dosimetry is best defined as “the theory and application of principles and techniques associated with the measurement of ionizing radiation” [1].


The term “dosimetry” can best be explained by assuming it was derived from combining two words: “dose” and “measurement.” The word dose is shorthand for several quantities associated with the profession of health physics (i.e., radiation protection and safety). These quantities include the “absorbed dose,” a measure of the energy deposited per unit mass of material, and the “equivalent dose,” which accounts for the differing biological effects of different radiations when the same absorbed dose is delivered to matter. The term “equivalent dose” is now used instead of the older term “dose equivalent” to signify changes in the ICRP-recommended radiation and tissue weighting factors. There are many other “dose terms” used in health physics, but these will not be included here because the fundamental quantity associated with dosimetry is the absorbed dose. Of course, the term measurement implies the use of some sort of detector that is sensitive to the ionizing radiation being measured. These “detectors” can take many forms, from photographic film, first used more than 100 years ago, to the sophisticated solid-state detectors being introduced today.
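The relation between these quantities is simple enough to state in code. The sketch below computes an equivalent dose from an absorbed dose using illustrative ICRP radiation-weighting factors (photons and electrons = 1, alpha particles = 20; neutron factors vary with energy and are omitted), with the unit conversions given in the glossary above (1 Gy = 100 rad, 1 Sv = 100 rem).

```python
# Equivalent dose H = w_R * D, with conversions between SI and
# special radiologic units: 1 Gy = 100 rad, 1 Sv = 100 rem.
RAD_PER_GY = 100.0
REM_PER_SV = 100.0

# Illustrative ICRP radiation-weighting factors; neutron values,
# which depend on energy, are deliberately omitted here.
W_R = {"photon": 1.0, "electron": 1.0, "alpha": 20.0}

def equivalent_dose_sv(absorbed_dose_gy: float, radiation: str) -> float:
    """Equivalent dose in Sv from absorbed dose in Gy."""
    return W_R[radiation] * absorbed_dose_gy

d = 0.01                                # 10 mGy absorbed dose
h = equivalent_dose_sv(d, "alpha")      # 0.2 Sv
print(d * RAD_PER_GY)                   # 1.0 rad
print(h * REM_PER_SV)                   # 20.0 rem
```

The same absorbed dose of 10 mGy thus corresponds to 10 mSv for photons but 200 mSv for alpha particles, which is exactly the point of the equivalent-dose quantity.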

Scientists have been detecting radiation for more than a century using a wide variety of detectors. Initially, the detectors were either photographic film or simple ionization chambers filled with air. Crude scintillation systems led to the invention of detectors such as the Geiger-Müller counter and, later, more sophisticated proportional counters and detectors designed for specific applications and/or to detect a specific radiation. A discussion of these detectors would fill a textbook [2, 3, 4, 5]; see also the entry Radiation Detection Devices in this encyclopedia. For this reason, this discussion of dosimetry will focus on two of the more modern dosimeters used to monitor the absorbed dose to occupationally exposed workers in nuclear facilities across the United States.

As indicated above, dosimetry is best defined as “the theory and application of principles and techniques associated with the measurement of ionizing radiation” [1]. In reality, two basic areas encompass the term “dosimetry”: external dosimetry and internal dosimetry. Again, these terms are shorthand descriptions of the more complex exposure conditions being considered. External dosimetry simply means the measurement of radiation that exists outside the human body. Basically, this type of dosimetry uses radiation detection devices and instrumentation to establish the characteristics of the radiation field. These measurements provide information in many forms, for example, the energy or energy spectrum of the radiation, the radiation intensity, and the types of radiation present. In many cases, the radiation detectors used for these measurements are called “dosimeters,” an indication that their sole purpose is to measure the radiation absorbed dose, which in turn leads to an estimate of the equivalent dose. It is important to remember that, because the radiation source and the dosimeters are both outside the body, the measurement does not provide a direct measurement of the absorbed dose to the organs of the body. Methods used to provide estimates of the absorbed doses to organs and tissues of the body will be discussed later.

When a radionuclide or radionuclides are taken into the body, through inhalation, ingestion, injection, or assimilation through the intact skin, there is a completely different set of challenges facing the dosimetrist. Internal dosimetry is defined as “a process of measurement and calculation that results in an estimate of the absorbed dose to organs and tissues of the body from an intake of radioactive material” [1]. Internal dosimetry is primarily confined to the use of mathematical models and calculational techniques based on an internationally agreed upon set of standard assumptions. The dose estimate relies on mathematical models that describe the uptake, distribution, and retention of the radioactive material in the body. However, the calculations may be based on a set of measurements, such as the concentration of airborne radioactivity in a work area, the activity of radioactive material deposited in the body or specific organs in the body, or measurement of the concentration of radioactivity in excreta, such as urine or feces. Even with these measurements as initial input, internal dosimetry must rely on models of a reference human and calculational techniques. These aspects will not be discussed here.
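A minimal illustration of the model-based approach to internal dosimetry is a single-compartment retention function, in which radioactive material is removed by both radioactive decay and biological clearance. Real ICRP models use many coupled compartments; the half-lives below are invented example values, not data for any particular radionuclide.

```python
import math

def effective_lambda(t_half_phys_d: float, t_half_bio_d: float) -> float:
    """Effective removal constant (1/day) for a single-compartment model,
    combining physical decay and biological clearance."""
    return math.log(2) / t_half_phys_d + math.log(2) / t_half_bio_d

def retained_fraction(t_days, t_half_phys_d, t_half_bio_d):
    """Fraction of the initial intake still present after t_days."""
    return math.exp(-effective_lambda(t_half_phys_d, t_half_bio_d) * t_days)

# Invented example: 8-day physical half-life, 10-day biological half-life.
# The effective half-life is (8 * 10) / (8 + 10) ≈ 4.4 days.
lam = effective_lambda(8.0, 10.0)
print(round(math.log(2) / lam, 1))   # effective half-life in days
```

The effective half-life is always shorter than either component half-life, since the two removal processes act in parallel.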

This section will focus on a discussion of external dosimetry methods, which are primarily used to monitor radiation exposures of occupationally exposed workers conducting licensed activities in the United States. Radiation dosimeters that are no longer widely used, such as film badges and pocket ionization chambers, will not be discussed.

Thermoluminescence Dosimetry

In 1950, Daniels suggested that the thermoluminescence (TL) phenomenon could be used as a radiation dosimeter [3]. This suggestion came late in the development of radiation dosimeters, even though Henri Becquerel, as well as his father, had mentioned the phenomenon in his scientific papers, and the relation between X-ray exposure and thermoluminescence had been observed as early as 1904. Nevertheless, after many struggles and failures, the research of Cameron and his colleagues made thermoluminescence dosimetry (TLD) a reality, and the technique flourished in the late 1960s and 1970s [3]. For a very long time, TLD has been the most popular method of personnel monitoring.

In these dosimeters, the absorbed dose is determined by observing the emitted light from an inorganic crystal after exposure to radiation. The light is released from the crystal as it is heated under controlled conditions. The heat energy originally was provided by electrical heating, but subsequent developments in TLD led to the use of high-intensity light as an alternate method. Regardless of the method of heating, the amount of light emitted is directly proportional to the radiation energy deposited in the TL material. This light is normally measured with a photomultiplier tube sensitive to the wavelength of the emitted light. It must be remembered that the TLDs are not “absolute dosimeters” and, therefore, require proper calibration in the radiation fields to which the dosimeters will be exposed.

Detailed explanations of the TL phenomenon have been offered by a number of scientists, but a simple bandgap model can be used to explain the basic mechanism. The usual procedure is to refer to the energy-level diagram in an insulating crystal. In a pure crystal, radiation impinging on the crystal would free electrons, and these electrons would pass from the valence band to the conduction band. These electrons would not remain in the conduction band for a long period and would return to the valence band releasing the energy acquired in the form of light. In a pure crystal, this light would be absorbed and would not escape the crystal. In TLDs, dopants (impurities) are added to the crystal, and these impurities reside in the forbidden or bandgap between the valence and conduction bands. When these crystals are exposed to radiation, the loss of electrons from the valence band creates positively charged atoms (“holes”). The electrons and holes may migrate through the crystal until they recombine or are “trapped” by the impurity atoms (dopants) residing in the bandgap. Thus, the energy absorbed by the crystal is stored until it is released, in the form of light, through heating the crystal (thus, thermoluminescence). This light, which is now characteristic of the impurity sites, can escape the crystal and can be measured with an external detector (i.e., a photomultiplier tube).

It is important to realize that these trapping sites may exist at many different levels in the bandgap, and it is not correct to assume that all electrons (or holes) are trapped at exactly the same energy level. Thus, the light intensity may vary as a function of temperature, and the plot of the light intensity as a function of temperature (called a “glow curve”) may exhibit a number of peaks and valleys depending on the number of trapping levels in the crystal. Either the total light emitted or the height of a particular peak may be used to determine the absorbed dose (upon proper calibration). It is also important that the heating cycle be very reproducible to avoid causing fluctuations in the peak heights.
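The glow-curve shape described above is often modeled with first-order (Randall-Wilkins) kinetics for a single trap level under a linear heating rate. The sketch below is a textbook idealization, not a fit to any particular TLD material; the trap depth, frequency factor, and heating rate are illustrative values.

```python
import math

# First-order Randall-Wilkins glow curve for a single trap:
#   I(T) = n0 * s * exp(-E/kT) * exp(-(s/beta) * integral_{T0}^{T} exp(-E/kT') dT')
K_BOLTZMANN = 8.617e-5   # Boltzmann constant, eV/K
E_TRAP = 1.0             # illustrative trap depth below conduction band, eV
S_FREQ = 1.0e12          # illustrative frequency factor, 1/s
BETA = 1.0               # linear heating rate, K/s
N0 = 1.0                 # initial trapped-charge population, arbitrary units

def glow_curve(t_start=300.0, t_end=600.0, dt=0.1):
    """Return parallel lists of temperatures (K) and TL intensities."""
    temps, intensities, integral = [], [], 0.0
    t = t_start
    while t <= t_end:
        p = math.exp(-E_TRAP / (K_BOLTZMANN * t))   # Boltzmann escape factor
        integral += p * dt                          # running trap-depletion integral
        intensities.append(N0 * S_FREQ * p * math.exp(-(S_FREQ / BETA) * integral))
        temps.append(t)
        t += dt
    return temps, intensities

temps, curve = glow_curve()
t_peak = temps[curve.index(max(curve))]
print(f"glow peak near {t_peak:.0f} K")  # one peak, because there is one trap level
```

A real material with several trap levels gives a superposition of such peaks, which is why measured glow curves show the peaks and valleys described above.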

A large number of inorganic materials have been studied for use as TLDs. Table 1 presents a summary of the characteristics of some of the most popular materials (but there are many other possible TLD materials). In dosimetry, it is common to use materials that are “tissue equivalent” in terms of the interactions of photons or other radiations with the dosimeters. Thus, the closer the effective atomic number of the material is to that of tissue (∼7.6), the more tissue equivalent the material is. For historical reasons, LiF is the standard to which all other TLD materials are compared. The standard LiF is the natural form of lithium with the normal concentrations of the isotopes Li-6 (7.4%) and Li-7 (92.6%). Also listed in this table are the temperatures of the “main peak,” that is, the peak in the TLD glow curve that is used to determine the absorbed dose. One big disadvantage of TLDs is that the dosimeter can only be read (evaluated) once: heating the crystal essentially releases all of the trapped electrons or holes, leaving no opportunity to confirm the reading.
Table 1

Summary of characteristics of thermoluminescence dosimetry (TLD) materials

TLD material | Effective atomic number (Zeff) | Temperature of main peak (°C)

Note: LiF:Mg,Ti is the standard material to which all other TLD materials are compared.

Table 2 compares other characteristics of the TLD materials. These include the “light output” of the material when exposed to 60Co radiation compared to the light output of the standard, that is, LiF. The data for Li2B4O7:Mn are somewhat misleading because the measurements quoted in this table were made with the standard photomultiplier tube used for all other TLD materials. However, because the wavelength of light from Li2B4O7:Mn is different, the output can be improved significantly by replacing the normal photomultiplier with one having a photocathode sensitive to the correct wavelength of light. The energy response is the ratio of the light output at an energy of 30 keV to that from irradiation with 60Co. Except for Li2B4O7:Mn, most materials overrespond to low-energy photon radiation.
Table 2

Summary of dosimetric characteristics of TLD materials

TLD material | Efficiency relative to 60Co | Energy response (30 keV/60Co) | Useful dose range | Fading
— | — | — | 0.2 μGy to 10² Gy | 50% in 24 h
— | — | — | 0.2 μGy to 10³ Gy | 2% in 1 month; 8% in 6 months
— | — | — | 10 μGy to 3 × 10³ Gy | 10% in 16 h; 15% in 2 weeks
— | — | — | 0.1 μGy to 10⁴ Gy | 10% in 24 h; 16% in 2 weeks
— | — | — | 10 μGy to 3 × 10³ Gy | 5% in 1 year
— | — | — | 0.5 μGy to >10⁴ Gy | <5% in 3 months
— | — | — | 0.5 μGy to 10 Gy | <3% in 1 year

As can be seen in Table 2, the usual TLD materials are very sensitive to radiation, with lower limits of detection in the range of tenths of a microgray. Upper limits range from only 10 Gy to more than 10⁴ Gy. The term “fading” indicates the ability of the TLD material to retain the stored energy, and thus the stored information, necessary to assign the absorbed dose from the wearing of the dosimeter. As is shown, LiF:Mg,Ti, Li2B4O7:Mn, and Al2O3:C have good energy-storage capability, which has led to a focus on these three materials.
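A quoted fading figure can be turned into a readout correction under the common simplifying assumption that fading is first-order (exponential) in time; real fading behavior is often more complex, so the sketch below is illustrative only. The “5% in 1 year” specification is one of the fading entries in Table 2.

```python
import math

def decay_constant(fraction_lost: float, period_days: float) -> float:
    """First-order fading constant (1/day) from a quoted loss over a period,
    e.g. '5% in 1 year' -> fraction_lost=0.05, period_days=365."""
    return -math.log(1.0 - fraction_lost) / period_days

def fading_corrected_dose(reading, days_since_exposure, fraction_lost, period_days):
    """Scale a delayed TLD reading back up to its time-of-exposure value."""
    lam = decay_constant(fraction_lost, period_days)
    return reading * math.exp(lam * days_since_exposure)

# A '5% in 1 year' material read out 90 days after exposure:
print(round(fading_corrected_dose(1.0, 90, 0.05, 365.0), 4))  # ≈ 1.0127
```

For a low-fading material the correction is near unity, which is precisely why materials with good energy-storage capability are preferred for long wearing periods.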

Estimates of Absorbed Doses to Organs of the Body

In the United States, federal regulations require the reporting of three quantities for all occupationally exposed workers who are anticipated to receive doses in excess of 10% of the federal limits. These quantities are the “deep-dose equivalent,” the “eye-dose equivalent,” and the “shallow-dose equivalent.” The deep-dose equivalent is defined as the dose at 1-cm depth in the body, which produces an overestimate of the absorbed doses because most organs and tissues of the body are located deeper than 1 cm (or 1,000 mg/cm2). The eye-dose equivalent considers the dose to the lens of the eye, which is assumed to be at a depth of 300 mg/cm2. Finally, the shallow-dose equivalent (or more properly the skin-dose equivalent) is assumed to be at a depth of 7 mg/cm2.

Now the question arises, “How does one measure these absorbed doses, and the subsequent equivalent doses, with radiation dosimeters located outside the body of the worker?” The approach taken has been used for many years and is not new. It has been applied since the Manhattan Project era and is an accepted method to provide these dose estimates. The technique involves using multiple detector elements, that is, typically four TLDs, and covering these TLDs with different thicknesses of materials (called filters) to represent these depths. So, the TLD designated to measure the deep-dose equivalent is covered with a material having a density thickness of 1,000 mg/cm2. The eye-dose equivalent is determined by covering the TLD with material with a density thickness of 300 mg/cm2. Usually, there are two different materials of this density thickness in the dosimeter. Finally, there is a thin filter included to allow extrapolation to the depth of 7 mg/cm2. It is very difficult to provide a direct measurement at such an extremely shallow depth.
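A density thickness (mass per unit area) converts to a physical filter thickness by dividing by the density of the filter material. The sketch below does this for the three reporting depths; the density of 1.19 g/cm³ (roughly that of acrylic plastic) is an assumed example value, not a specification of any actual dosimeter filter.

```python
# Convert a density thickness (mg/cm^2) to a linear thickness (cm):
#   thickness = density_thickness / density
# 1.19 g/cm^3 (roughly acrylic plastic) is an assumed example density.
DENSITY_G_PER_CM3 = 1.19

def linear_thickness_cm(density_thickness_mg_cm2: float) -> float:
    return (density_thickness_mg_cm2 / 1000.0) / DENSITY_G_PER_CM3

for name, dt in [("deep (1,000 mg/cm2)", 1000.0),
                 ("eye (300 mg/cm2)", 300.0),
                 ("shallow (7 mg/cm2)", 7.0)]:
    print(f"{name}: {linear_thickness_cm(dt):.4f} cm")
```

The shallow depth works out to well under a tenth of a millimeter of plastic, which illustrates why a direct measurement at 7 mg/cm² is so difficult and an extrapolation is used instead.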

Thermoluminescence Dosimetry for Neutrons

TLDs have their primary application in dosimetry for X-ray and gamma-ray fields. In addition, TLDs have limited sensitivity to beta radiation. Because certain materials in the TLDs interact with neutrons, TLDs can be used to measure both thermal and fast neutron dose, with proper calibrations. Table 3 lists the pertinent information regarding three types of LiF TLDs as they are applied to neutron dosimetry. As can be seen in this table, the natural LiF TLD has the normal concentrations of Li-6 and Li-7. This material is designated TLD-100. The other two materials are designated TLD-600 and TLD-700. TLD-600 contains a high concentration of the isotope Li-6, with less than 5% Li-7. TLD-700 contains essentially all Li-7, with a very small amount of Li-6. Notice also the differences in the thermal neutron cross sections (probability to absorb neutrons) for these two isotopes. These differences play a role in the dosimetry of both thermal and fast neutrons.
Table 3

Characteristics of lithium fluoride TLDs for neutron dosimetry

TLD type | Li-6 (%) | Li-7 (%) | Thermal neutron cross section
Natural (TLD-100) | 7.4 | 92.6 | —
TLD-600 (enriched in Li-6) | >95 | <5 | 950 barns
TLD-700 (enriched in Li-7) | ~0 | ~100 | 0.033 barns

Thermal neutron dosimetry is based on the “difference technique” used to separate the photon and thermal neutron dose from each other. This technique is similar in some ways to the standard method using bare and cadmium-covered gold foils to measure the thermal neutron fluence in a nuclear reactor core. Basically, the LiF TLD-600 is sensitive to photon radiation as well as to thermal neutron radiation. The LiF TLD-700 has the same photon sensitivity but essentially no sensitivity to thermal neutrons. Thus, when used in a mixed photon and thermal neutron field, the TLD-600 will provide the absorbed dose for both the photons and the thermal neutrons. The TLD-700 will provide only the absorbed dose from the photon radiation, and the difference between the doses indicated by the two dosimeters is the thermal neutron dose.
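The difference technique amounts to a subtraction of two calibrated readings. The sketch below uses invented reading values to show the bookkeeping, and also why a small neutron dose in a large photon field is error-prone.

```python
def thermal_neutron_dose(tld600_total: float, tld700_photon: float) -> float:
    """Difference technique: TLD-600 responds to photons plus thermal
    neutrons; TLD-700 responds (essentially) to photons only."""
    return tld600_total - tld700_photon

# Mixed field, illustrative readings in mGy:
print(round(thermal_neutron_dose(5.2, 4.0), 3))   # 1.2 mGy from thermal neutrons

# In a high photon field the neutron dose is a small difference of two
# large numbers, so reading uncertainties dominate the result:
photon, neutron, sigma = 100.0, 1.0, 2.0   # sigma: a 2% reading uncertainty
print(thermal_neutron_dose(photon + neutron + sigma, photon - sigma))  # 5.0, not 1.0
```

The second example shows a factor-of-five error in the inferred neutron dose from only a 2% uncertainty in each reading, which is the situation flagged below as producing highly suspect data.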

Fast neutron dosimetry uses a similar technique, but to obtain the fast neutron dose, the “albedo technique” is used. The albedo technique relies upon the dosimeter being held closely to the body, and the fast neutron dose is measured as the fast neutrons enter the body, are moderated there by tissue, are subsequently reflected from the body, and hit the dosimeter. There are many designs of albedo fast neutron dosimeters, but the concept is the same as that outlined above. The fast neutron dose is obtained by using the difference technique as before.

Note that using TLDs to measure either thermal neutron or fast neutron dose requires very careful calibration of the dosimeters in radiation fields approximating those in which the exposures are anticipated. In addition, in very high photon radiation fields with a low percentage of thermal or fast neutrons, these dosimeters may provide data that is highly suspect. This is often the consequence of subtracting two very large numbers (large photon dose) to obtain an estimate of the very low thermal or fast neutron dose. Other methods of neutron dosimetry, for example, track-etch detectors, may be preferred in these situations.

Optically Stimulated Luminescence

Currently, the method of choice for personnel dosimetry appears to be optically stimulated luminescence (OSL). Even though film and TLD are still used to some extent, many facilities are switching over to this newer technology. OSL may be used not only for personnel monitoring but also for environmental monitoring and medical dosimetry. Basically, OSL is very similar to TLD in terms of the basic physics associated with the energy deposition, storage, and release. The major difference is that, instead of heat, laser light is used to release (“detrap”) the electrons. The laser is pulsed at a rate of 4,000 times per second and is directed at only a small area of the material. This provides an opportunity for multiple readings of the same dosimeter, if necessary, as the laser can be focused on another region of the crystal. In a similar fashion to the development of TLDs in the latter part of the twentieth century, many materials have been studied for possible use as OSL dosimeters, including halides, sulfates, sulfides, and oxides [6].

However, the material of choice is an Al2O3:C crystalline detector. Single crystals of Al2O3:C are ground into a powder and mixed with a polyester base. This mixture is deposited on a polyester film about 0.03 cm thick, which can be fabricated in a thin strip for incorporation into a dosimeter. This material has a good response to photon radiation as well as a response to beta radiation. Copper (0.18 g/cm2) and tin (0.39 g/cm2) filters, as well as an open area, are used in the dosimeter (as described above) to provide the dosimetry quantities of interest. Commercially available dosimeters have a dose measurement range for photons of 1 mrem to 1,000 rem (10 μSv to 10 Sv) over an energy range from 5 keV to more than 40 MeV. For beta radiation, the dose measurement range is from 10 mrem to 1,000 rem (100 μSv to 10 Sv) over an energy range of 150 keV to 10 MeV (average energy). The commercially available OSL dosimeters may be used for up to 1 year. If the packaging is not compromised, the dosimeter is unaffected by heat, moisture, and pressure [7].

Electronic Dosimeters

Currently, in many situations, such as in a nuclear power plant, it is common to wear two types of dosimeters. One of these dosimeters is usually a TLD or an OSL type. This dosimeter is usually designated as the “dosimeter of record.” That is, these dosimeters are worn for long periods of time (i.e., 1 month, 3 months, or perhaps 1 year) and provide a measure of the total dose received by the exposed worker over the wearing period. It is these doses that are reported annually to the US Nuclear Regulatory Commission, as required by the federal regulations. The second dosimeter is a modern, electronic dosimeter that may contain as many as three small detector elements (usually solid-state detectors such as silicon diodes). Electronic dosimeters are used to monitor the work and may be worn for short periods of time. The primary function of these dosimeters is work and dose control. The dosimeters feature adjustable alarm points that may be set by a computer, before use, based on the anticipated total dose received or maximum dose rate encountered. Workers are trained to recognize the alarms and understand the proper response to these alarms.

There are many different electronic dosimeters, but most have similar characteristics. A typical dosimeter would use small silicon diode detectors sensitive to both photons (50 keV to 6 MeV) and beta radiation (>60 keV up to more than 2 MeV). Doses from 0.1 mrem (1 μSv) to 1,000 rem (10 Sv) can be measured, with dose rates ranging from 0.01 mrem/h (0.1 μSv/h) to 1,000 rem/h (10 Sv/h). Most dosimeters feature both audible and visible alarms. Typically, these dosimeters are lightweight, from 50 g up to perhaps 200 g. A unique feature of some of these dosimeters, and a good radiation protection practice, is the use of permanent stations throughout the plant that interrogate the dosimeters as the worker passes by and transmit this information to a central station. Other types of dosimeters contain small transmitters that send the accumulated dose (or dose rate) to central locations monitored by the radiation safety staff. As technology moves forward, it is difficult to predict what the future holds in terms of the next generation of dosimeters.
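The alarm behavior described above can be sketched as a small check run on each dosimeter update. The class name, set points, and readings below are all invented for illustration and do not correspond to any real device.

```python
# Minimal sketch of electronic-dosimeter alarm logic: alarm when either the
# accumulated dose or the instantaneous dose rate exceeds a preset threshold.
# All numbers are illustrative set points, not values from any real device.

class ElectronicDosimeter:
    def __init__(self, dose_alarm_usv=100.0, rate_alarm_usv_h=500.0):
        self.dose_alarm_usv = dose_alarm_usv        # total-dose set point
        self.rate_alarm_usv_h = rate_alarm_usv_h    # dose-rate set point
        self.accumulated_usv = 0.0

    def update(self, dose_rate_usv_h: float, interval_h: float) -> list:
        """Record one sampling interval; return any alarms raised."""
        self.accumulated_usv += dose_rate_usv_h * interval_h
        alarms = []
        if dose_rate_usv_h > self.rate_alarm_usv_h:
            alarms.append("DOSE-RATE ALARM")
        if self.accumulated_usv > self.dose_alarm_usv:
            alarms.append("TOTAL-DOSE ALARM")
        return alarms

d = ElectronicDosimeter()
print(d.update(50.0, 1.0))    # []  (50 uSv accumulated; below both set points)
print(d.update(600.0, 0.1))   # ['DOSE-RATE ALARM', 'TOTAL-DOSE ALARM']
```

In practice, the set points would be loaded by computer before entry, based on the anticipated dose and dose rate for the job, as described above.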


The measurement of radiation energy deposited in material, that is, the measurement of the absorbed dose, is the primary goal of the practice of dosimetry. Over the last 100 years or more, dosimetry has taken many forms as science and technology have made significant progress. Many of the techniques have been relegated to the history books as other more advanced techniques have been introduced. This short discussion of dosimetry was intended to present the basic concepts and to provide two examples of modern dosimeters used to monitor personnel that are occupationally exposed to ionizing radiation, as well as to introduce the use of electronic dosimeters, which are used widely in nuclear utilities.

Future Directions

Approaches to dosimetry have changed rapidly with developments in electronics and computers. The last decade or so has seen the design and manufacture of dosimeters that are small but incorporate computer capabilities. These dosimeters allow the setting of dose and dose rate alarms, remote interrogation of the dosimeters to monitor worker exposure, and many other features. It appears these trends will continue as the demand for “smarter” dosimeters, with many more capabilities, for use in nuclear facilities as well as in emergency response continues to increase.


  1. Poston JW Sr (1987) Dosimetry. In: Encyclopedia of physical science and technology, vol 6. Academic Press, New York
  2. Knoll GF (2000) Radiation detection and measurement, 3rd edn. Wiley, New York
  3. Eichholz GG, Poston JW (1979) Principles of nuclear radiation detection. Ann Arbor Science Publishers, Ann Arbor
  4. Tsoulfanidis N (1995) Measurement and detection of radiation, 2nd edn. Taylor & Francis, Washington, DC
  5. Kase KR, Bjarngard BE, Attix FH (eds) (1985) The dosimetry of ionizing radiation, vols I–III. Academic Press, New York
  6. Boetter-Jensen L, McKeever SWS, Wintle AG (2000) Optically stimulated luminescence dosimetry. Elsevier, Maryland Heights
  7. R. S. Landauer, Inc (2010) Accessed 9 May 2010

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Nuclear Engineering, Texas A&M University, College Station, USA
