Ground-based calibration and characterization of the Fermi gamma-ray burst monitor detectors
One of the scientific objectives of NASA’s Fermi Gamma-ray Space Telescope is the study of Gamma-Ray Bursts (GRBs). The Fermi Gamma-Ray Burst Monitor (GBM) was designed to detect and localize bursts for the Fermi mission. By means of an array of 12 NaI(Tl) (8 keV to 1 MeV) and two BGO (0.2 to 40 MeV) scintillation detectors, GBM extends the energy range (20 MeV to > 300 GeV) of Fermi’s main instrument, the Large Area Telescope, into the traditional range of current GRB databases. The physical detector response of the GBM instrument to GRBs is determined with the help of Monte Carlo simulations, which are supported and verified by on-ground individual detector calibration measurements. We present the principal instrument properties, which have been determined as a function of energy and angle, including the channel-energy relation, the energy resolution, the effective area and the spatial homogeneity.
Keywords: Fermi Gamma-Ray Space Telescope, GLAST, gamma-ray detectors, calibration, NaI(Tl), BGO, gamma-ray burst
PACS: 95.55.Ka, 98.70.Rz, 29.40.Mc, 07.85.-m, 07.85.Fv
The Fermi Gamma-ray Space Telescope (formerly known as GLAST), successfully launched on June 11, 2008, is an international and multi-agency space observatory [2, 26] that studies the cosmos in the photon energy range from 8 keV to greater than 300 GeV. The scientific motivations for the Fermi mission span a wide range of non-thermal processes and phenomena that are best studied in high-energy gamma rays, from solar flares, pulsars and cosmic rays in our Galaxy to blazars and Gamma-Ray Bursts (GRBs) at cosmological distances. Particularly in GRB science, the detection of emission beyond 50 MeV [6, 17] remains a puzzling topic, mainly because only a few observations above this energy are presently available, from the Energetic Gamma-Ray Experiment Telescope (EGRET) on board the Compton Gamma-Ray Observatory (CGRO) [12, 15] and more recently from AGILE. Fermi's detection range, extending roughly an order of magnitude beyond EGRET's upper energy limit of 30 GeV, should expand the catalogue of high-energy burst detections. A greater number of detailed observations of burst emission at MeV and GeV energies should provide a better understanding of bursts, thus testing GRB high-energy emission models [5, 25, 30, 35]. Fermi was specifically designed to avoid some of the limitations of EGRET, and it incorporates new technology and advanced on-board software that allow it to pursue scientific goals beyond those of previous space experiments.
The main instrument on board the Fermi observatory is the Large Area Telescope (LAT), a pair-conversion telescope like EGRET, operating in the energy range between 20 MeV and 300 GeV. This detector is based on solid-state technology, obviating the need for consumables (as was the case for EGRET's spark chambers, whose detector gas needed to be periodically replenished) and greatly decreasing the dead time (<10 μs; EGRET's high dead time was due to the length of time required to recharge the HV power supplies after each event detection). These features, combined with a large effective area and excellent background rejection, allow the LAT to detect both faint sources and transient signals in the gamma-ray sky. In addition to the main instrument, the Fermi Gamma-Ray Burst Monitor (GBM) extends the Fermi energy range to lower energies (from 8 keV to 40 MeV). The GBM supports the LAT in the discovery of transient events within a larger FoV and performs time-resolved spectroscopy of the measured burst emission. In the case of very strong and hard bursts, the GRB position, which is normally communicated by the GBM to the LAT, allows a repointing of the main instrument in order to search for higher-energy prompt or delayed emission.
In order to perform the above validations, several calibration campaigns were carried out in the years 2005 to 2008. The calibration of each individual detector (or detector-level calibration) comprises three distinct campaigns: a main campaign with radioactive sources (from 14.4 keV to 4.4 MeV), which was performed in the laboratory of the Max-Planck-Institut für extraterrestrische Physik (MPE, Munich, Germany), and two additional campaigns focusing on the low-energy calibration of the NaI detectors (from 10 to 60 keV) and on the high-energy calibration of the BGO detectors (from 4.4 to 17.6 MeV), respectively. The former was performed at the synchrotron radiation facility of the Berliner Elektronenspeicherring-Gesellschaft für Synchrotronstrahlung (BESSY, Berlin, Germany), with the support and collaboration of the German Physikalisch-Technische Bundesanstalt (PTB), while the latter was carried out at the SLAC National Accelerator Laboratory (Stanford, CA, USA).
Subsequent calibration campaigns of the GBM instrument were performed at system level, comprising all flight detectors, the flight Data Processing Unit (DPU) and the Power Supply Box (PSB). These were carried out in the laboratories of the National Space Science and Technology Center (NSSTC) and of the Marshall Space Flight Center (MSFC) in Huntsville (AL, USA), and included measurements for the determination of the channel-energy relation of the flight DPU and checks of the detectors' performance before and after environmental tests. After the integration of GBM onto the spacecraft, a radioactive source survey was performed in order to verify the treatment of spacecraft backscattering in the modeling of the instrument response. These later measurements are summarized in internal NASA reports and will not be discussed further.
This paper focuses on the detector-level calibration campaigns of the GBM instrument, and in particular on the analysis methods and results, which crucially support the development of a consistent GBM instrument response. It is organized as follows: Section 2 outlines the technical characteristics of the GBM detectors; Section 3 describes the various calibration campaigns that were performed, highlighting the simulations of the laboratory calibration environment at MPE (see Section 3.4); Section 4 discusses the analysis system for the calibration data and presents the calibration results. In Section 5, final comments on the scientific capabilities of GBM are given and the synergy of GBM with current space missions is outlined.
2 The GBM detectors
The GBM flight hardware comprises a set of 12 thallium-activated sodium iodide crystals (NaI(Tl), hereafter NaI), two bismuth germanate crystals (Bi4Ge3O12, commonly abbreviated as BGO), a DPU, and a PSB. In total, 17 scintillation detectors were built: 12 flight-module (FM) NaI detectors, two FM BGO detectors, one spare NaI detector and two engineering qualification models (EQM), one for each detector type. Since detector NaI FM 06 showed poor performance from the start, it was decided to replace it with the spare detector, which was consequently numbered FM 13. Note that the detector numbering scheme used in the calibration and adopted throughout this paper differs from the one used for in-flight analysis, as indicated in Table 4 (columns 2 and 3) in the Appendix.
The Hamamatsu R877 photomultiplier tube (PMT) is used for all the GBM detectors. This is a 10-stage, 5-inch phototube made from borosilicate glass with a bialkali (CsSb) photocathode, which was modified (R877RG-105) in order to fulfill the GBM mechanical load requirements.
The output signals of all PMTs (for both NaIs and BGOs) are first amplified via linear charge-sensitive amplifiers. The preamplifier gains and the HVs are adjusted so that they produce a ~5 V signal for a 1 MeV gamma-ray incident on a NaI detector and for a 30 MeV gamma-ray incident on a BGO detector. Due to a change of the BGO HV settings after launch, the latter value changed to 40 MeV, thus extending the original BGO energy range. Signals are then sent through pulse-shaping stages to an output amplifier supplying differential signals to the input stage of the DPU, where they are combined by a unity-gain operational amplifier before digitization. In the particular case of the BGO detectors, the outputs from the two PMTs are divided by two and then added at the preamplifier stage in the DPU.
In the DPU, the detector pulses are continuously digitized by separate flash ADCs with a sampling period of 0.1 μs. The pulse peak is measured by firmware running in a Field Programmable Gate Array (FPGA). This scheme allows a fixed, energy-independent, commandable dead time for digitization. The signal processor digitizes the amplified PMT anode signals into 4096 linear channels. Due to telemetry limitations, these channels are mapped (pseudo-logarithmic compression) on board into (1) 128-channel resolution spectra with a nominal temporal resolution of 4.096 s (Continuous high SPECtral resolution or CSPEC data) and (2) spectra with a coarser spectral resolution of eight channels and a better temporal resolution of 0.256 s (Continuous high TIME resolution or CTIME data) by using uploaded look-up tables.1 These were defined with the help of the on-ground channel-energy relations (see Section 4.2). Moreover, time-tagged event (TTE) data are continuously stored by the DPU. These data consist of individually digitized pulse-height events from the GBM detectors, which have the same channel boundaries as CSPEC and 2 μs resolution. TTE data are transmitted only when a burst trigger occurs or by command. More details on the GBM data types as well as a block diagram of the GBM flight hardware can be found in .
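The pseudo-logarithmic compression of 4096 linear channels into 128 CSPEC channels can be sketched as a look-up table with linear spacing at the bottom of the range and logarithmic spacing above a break point. This is an illustrative reconstruction only: the actual flight look-up tables were derived from the on-ground channel-energy relations, and the break point and edge placement below are invented for the example.

```python
import numpy as np

def make_lut(n_fine=4096, n_coarse=128, lin_log_break=32):
    """Illustrative pseudo-logarithmic look-up table: fine channel -> coarse
    channel. Linear bin edges below the (hypothetical) break, logarithmically
    spaced edges above it."""
    edges = np.unique(np.concatenate([
        np.arange(0, lin_log_break + 1),                        # linear part
        np.geomspace(lin_log_break, n_fine,
                     n_coarse - lin_log_break + 1).astype(int)  # log part
    ]))
    edges[-1] = n_fine  # guard against float rounding at the endpoint
    # assign each fine channel to the coarse bin whose edges bracket it
    lut = np.searchsorted(edges, np.arange(n_fine), side='right') - 1
    return np.clip(lut, 0, n_coarse - 1)

lut = make_lut()  # lut[fine_channel] -> CSPEC-style coarse channel
```

A compressed spectrum is then obtained by summing the counts of all fine channels that map to the same coarse channel, e.g. with `np.bincount(lut, weights=fine_spectrum)`.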
Besides processing signals from the detectors, the DPU processes commands, formats data for transmission to the spacecraft and controls the high and low voltages (HV and LV) supplied to the detectors. Changes in the detector gains can arise from several effects, such as temperature changes of the detectors and of the HV power supply, variations in the magnetic field at the PMT, and PMT aging. GBM adopts a technique previously employed on BATSE, namely Automatic Gain Control (AGC): long-timescale gain changes are compensated by the GBM flight software, which adjusts the PMT HV to keep the background 511 keV line at a specified energy channel.
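The AGC principle can be sketched as a simple feedback loop: the measured channel of the background 511 keV line is compared to its target, and the HV is nudged to cancel the error. This is a toy model, not the flight algorithm; the damping factor `alpha`, the gain power law, and all numerical values are invented for illustration.

```python
def agc_update(hv, measured_channel, target_channel, alpha=0.1):
    """One AGC iteration (toy sketch): move the PMT high voltage so that the
    background 511 keV line drifts back toward its target channel.
    alpha is a hypothetical damping factor, not a flight-software parameter."""
    # PMT gain rises steeply with HV, so a fractional channel error is
    # answered with a much smaller fractional HV correction
    error = (target_channel - measured_channel) / target_channel
    return hv * (1.0 + alpha * error)

def line_channel(hv, nominal=90.0):
    """Toy detector: line position scales as (HV / 1000 V)^7, a rough
    power law for a multi-stage PMT (exponent chosen for illustration)."""
    return nominal * (hv / 1000.0) ** 7

hv = 1020.0  # gain has drifted high: the 511 keV line sits above channel 90
for _ in range(200):
    hv = agc_update(hv, line_channel(hv), target_channel=90.0)
# the loop drives hv back toward its nominal 1000 V operating point
```

Because the gain depends so steeply on HV, the correction must be strongly damped; an undamped loop (large `alpha`) would overshoot and oscillate.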
3 Calibration campaigns
To enable the localization of a GRB and the derivation of its spectrum, detailed knowledge of the GBM detector response is necessary. The information regarding the detected energy of an incoming gamma-ray photon, which depends on the direction from which it entered the detector, is stored in a response matrix. This must be generated for each detector using computer simulations. The actual detector response at discrete incidence angles and energies has to be measured in order to verify the validity of the simulated responses. The complete response matrix of the whole instrument system (including the LAT and the spacecraft structure) is finally created by simulating a dense grid of energies and incoming photon directions with the verified simulation tool.
Properties of radioactive nuclides used for NaI and BGO calibration campaigns: (1) Half-lives in years (y) or days (d); (2) Decay type producing the gamma-ray (γ) or X-ray (e.g. K and L) radiation − For nuclides which are part of decay chains, the daughter nuclides producing the corresponding radiation are also given; (3) Line energies in keV; (4) Photon-emission probabilities for the corresponding decays
Due to the lack of radioactive sources producing lines below 60 keV, and in order to study the spatial homogeneity of the NaI detectors, a dedicated calibration campaign was performed at PTB/BESSY. Here, four NaI detectors (FM 01, FM 02, FM 03 and FM 04)2 were exposed to a monochromatic X-ray beam with energies ranging from 10 to 60 keV, and the whole detector surface was additionally raster-scanned at several energies with a pencil beam perpendicular to the detector surface.
In order to extend the BGO calibration range, another dedicated calibration campaign was carried out at the SLAC laboratory. Here, the BGO EQM detector3 was exposed to three gamma-ray lines (up to 17.6 MeV) produced by the interaction of a proton beam of ~340 keV, generated with a small Van-de-Graaff accelerator, with a LiF-target. A checklist showing which detectors were employed at each detector-level calibration campaign is given in Table 4 (columns 4 to 6).
3.1 Laboratory setup and calibration instrumentation at MPE
The determination of the angular response of the detectors was achieved in the following way. The origin of the NaI detector calibration coordinate system was chosen at the center of the external surface of the Be window of the detector unit, with the X axis pointing toward the radioactive source, the Y axis pointing to the left, and the Z axis pointing up (see Fig. 8, left panel). The detectors were mounted on a specially developed holder such that the front of the Be window was parallel to the Y/Z plane (when the detector pointed at the source, i.e. the 0° position) and such that the detectors could be rotated around two axes in order to reach all incidence angles of the radiation: the Z axis (azimuth) and the X axis (roll). For the BGO detectors, the mounting was such that the very center of the crystal coincided with the origin of the coordinate system, and the 0° position was defined by the long detector axis coinciding with the Y axis. The BGO detectors were only rotated around the Z axis; no roll angles were measured in this case.
3.2 NaI low-energy calibration at PTB/BESSY
The calibration of the NaI detectors in the low photon energy range down to 10 keV was performed with monochromatic synchrotron radiation with the support of the PTB. A pencil beam of about 0.2 × 0.2 mm² was extracted from a wavelength-shifter beamline, the “BAMline”, at the electron storage ring BESSY II, which is equipped with a double-multilayer monochromator (DMM) and a double-crystal monochromator (DCM). In the photon energy range from 10 keV to 30 keV, the DCM and DMM were operated in series to combine the high resolving power of the DCM with the high spectral purity of the DMM. Above 30 keV, a high spectral purity with higher-order contributions below 10⁻⁴ was already achieved by the DCM alone. The tunability of the photon energy was also used to investigate the detectors in the vicinity of the Iodine K-edge at 33.17 keV.
The effective area of the detectors as a function of the photon energy was determined by scanning the detectors at discrete locations in x- and y-direction over the active area while the pencil beam was fixed in space. During the scan, the intensity was monitored with a photodiode operated in transmission. The effective area is just the product of the average QDE and the active area. In addition, the spatial homogeneity of the QDE was determined by these measurements (see Section 4.4).
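The relation stated above, effective area as the product of the average QDE and the active area, can be illustrated with a toy rasterscan grid. All numbers below (grid size, QDE values, the edge fall-off, and the active area) are invented for the example and are not PTB/BESSY results.

```python
import numpy as np

# hypothetical rasterscan result: QDE sampled on a grid over the detector
# face, with a mild fall-off toward the crystal edges (illustrative values)
qde = np.full((20, 20), 0.92)
qde[0, :] = qde[-1, :] = qde[:, 0] = qde[:, -1] = 0.80

active_area_cm2 = 123.0  # hypothetical active area

# effective area = average QDE over the face times the active area
effective_area_cm2 = qde.mean() * active_area_cm2
```

The same grid of per-position QDE values also yields the spatial homogeneity map directly, which is how the rasterscan serves both purposes at once.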
The measurements presented in this paper were recorded at 18 different energies, namely from 10 to 20 keV in 2 keV steps, from 30 to 37 keV in 1 keV steps, and at 32.8, 40, 50 and 60 keV. These accurate measurements allowed an exact determination of the low-energy behavior of the channel-energy relation of the NaI detectors (see Section 4.2.1) and a fine-tuning of the energy range around the Iodine K-edge at 33.17 keV (see Section 4.2.2). Moreover, three rasterscans of the detector surface were performed at 10, 36 and 60 keV in order to study the detectors' spatial homogeneity (see Section 4.5 for more details).
3.3 BGO high-energy calibration at SLAC
Contribution of simulated laboratory components to the detected photons
For the measurements, the EQM detector was placed as close as possible to the LiF target, at an angle of ~45° with respect to the proton beam line, in order to maximize the flux of the generated gamma-rays. Unfortunately, measurements for the determination of the detector's effective area could not be obtained, since the gamma-ray flux was not closely monitored.
3.4 Simulation of the laboratory and the calibration setup at MPE
In order to simulate the recorded spectra of the calibration campaign at MPE, and thus gain confidence in the simulation software used, a very detailed model of the environment in which the calibration took place had to be created. The detailed modeling of the laboratory was necessary because all scattered radiation from the surrounding material, near and far, had to be included to realistically simulate all the radiation reaching the detector.4 Background measurements with no radioactive sources present were taken in order to subtract the ever-present natural background radiation in the laboratory. However, the source-induced “background” radiation created by scattering of the non-collimated radioactive sources had to be included in the simulation to enable a detailed comparison with the measured spectra.
Additional simulations of the other calibration campaigns, in particular for the PTB/ BESSY one, are planned. In the case of SLAC measurements, the simulation tools were only used to determine the ratios between full-energy peaks and escape peaks (see Fig. 7, right panel): no further simulation of the calibration setup is foreseen.
4 Calibration data analysis and results
4.1 Processing of calibration runs
During each calibration campaign, all spectra measured by the GBM detectors were recorded together with the information necessary for the analysis. Shortly before or after the collection of data runs, additional background measurements were recorded over longer periods. Every run was then normalized to an exposure time of 1 h, and the background was subsequently subtracted from the data. In the case of the measurements performed at PTB/BESSY, the natural background contribution could be neglected owing to the very high beam intensities and the short measurement times.
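The normalization and subtraction step described above can be sketched as follows. The function name, interface and the example counts are invented for illustration; only the procedure (scale both runs to a common 1 h exposure, then subtract) comes from the text, and the Poisson error propagation is a standard addition.

```python
import numpy as np

def normalize_and_subtract(run_counts, run_exposure_s, bkg_counts, bkg_exposure_s):
    """Scale a source run and its background run to a common 1 h exposure
    and subtract (a minimal sketch of the procedure described above)."""
    hour = 3600.0
    f_run, f_bkg = hour / run_exposure_s, hour / bkg_exposure_s
    net = np.asarray(run_counts, float) * f_run - np.asarray(bkg_counts, float) * f_bkg
    # Poisson errors propagate with the same exposure scale factors
    err = np.sqrt(np.asarray(run_counts, float) * f_run ** 2
                  + np.asarray(bkg_counts, float) * f_bkg ** 2)
    return net, err

# hypothetical two-channel example: a 30 min source run and a 2 h background run
net, err = normalize_and_subtract([100, 400], 1800.0, [50, 72], 7200.0)
```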
NaI spectra from radioactive sources recorded at MPE are shown in Fig. 10. A detailed description of the full-energy peaks characterizing each source is given in Section 4.1.1. Besides the full-energy and Iodine escape peaks, spectra from high-energy radioactive lines show further features (see e.g. the 137Cs spectrum in panel f): the low-energy X radiation (due to internal scattering of gamma-rays very close to the radioactive material) at the very left of the spectrum; the Compton distribution, a continuum produced by primary gamma-rays undergoing Compton scattering within the crystal; and a backscatter peak at the low-energy end of the Compton distribution.
Similarly, BGO spectra from radioactive sources collected at MPE and SLAC with detector FM 02 and BGO EQM are presented in Figs. 11 and 12. The spectrum produced by the Van-de-Graaff proton beam at SLAC, which was measured by the spare detector BGO EQM, is shown in panel c of Fig. 12.
4.1.1 Analysis of the full-energy peak
Fit constraints adopted for the analysis of some double peaks for NaI and BGO detectors
Double line energies (keV): 22.1–25, 32.06–36.6, 122.06–136.47, 1173.23–1332.49, 5619–6130, 14075–14586, 17108–17619
An important consideration when fitting mathematical functions to these data is that the calculated statistical errors of the fit parameters are always within 0.1% for line areas and FWHMs, and even within 0.01% for line centers. Such extreme precision causes very high chi-square values in subsequent analyses, such as the determination of the channel-energy relation, which extends over an entire energy decade in the case of the NaI detectors. Moreover, it was noticed that slightly changing the initial fitting conditions, such as the region of interest around the peak or the type of background, changed the parameter values substantially with respect to a previous analysis. This effect is particularly strong in the analysis of multiple peaks, where more Gaussians and background functions are added and the number of free parameters increases. In order to account for these effects and to obtain a more realistic evaluation of the fit parameter errors, we decided to analyse one spectrum per source (measured at normal incidence by detectors NaI FM 04 and BGO FM 02) several times, each time with different initial fitting conditions. This procedure was repeated ~10–20 times, i.e. until the systematic contribution stopped increasing and a good chi-square value of the individual fit was obtained, thus producing a dataset of fit parameters and respective errors. For each parameter dataset, standard deviations (σ) were calculated, resulting in values of the order of 1% for line areas and FWHMs and of 0.1% for line centers, which were finally added to the fit errors, thus obtaining realistic errors.
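The procedure above, refitting the same peak under varied initial conditions and taking the scatter of the results as a systematic error, can be sketched on synthetic data. The spectrum, the region-of-interest choices, and the log-parabola fit (a lightweight stand-in for the full Gaussian-plus-background fits of the actual analysis) are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic full-energy peak: Gaussian line on a flat background
# (all numbers hypothetical, chosen only to mimic a calibration spectrum)
chan = np.arange(200)
model = 5000.0 * np.exp(-0.5 * ((chan - 100.0) / 8.0) ** 2) + 50.0
counts = rng.poisson(model).astype(float)

def fit_center(counts, lo, hi):
    """Line center from a log-parabola fit over the region of interest
    [lo, hi); the log of a Gaussian is a parabola, whose vertex gives
    the line center."""
    x = chan[lo:hi]
    y = np.log(np.clip(counts[lo:hi], 1.0, None))
    a, b, _ = np.polyfit(x, y, 2)  # coefficients, highest degree first
    return -b / (2.0 * a)          # parabola vertex

# refit with several region-of-interest choices, as done in the text,
# and take the scatter of the results as the systematic error
centers = [fit_center(counts, lo, hi)
           for lo, hi in [(80, 120), (75, 125), (85, 115), (70, 130), (78, 122)]]
sys_err = np.std(centers)  # added in quadrature to the statistical fit error
```

The scatter captured in `sys_err` is typically far larger than the purely statistical fit error, which is exactly the effect the repeated-fit procedure was introduced to quantify.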
4.2 Channel-energy relation
4.2.1 NaI nonlinear response
The addition of the measurements taken at PTB/BESSY with four NaI detectors (see Section 3.2) is particularly necessary for computing the NaI response in the region around the K-edge energy, since the radioactive sources used at MPE sample it with only four lines, three of which (22.1, 25 and 32.06 keV) belong to double peaks, while the first line, from 57Co at 14.41 keV, shows asymmetries and broadening (see Fig. 13, panel a). From the collected PTB/BESSY data, line-fitting results were obtained for 19 spectra collected at energies between 10 and 60 keV. Corrections for the gain settings of the detectors during the two different calibration campaigns were carefully taken into account.
4.2.2 The iodine K-edge region
4.2.3 Simulation validation
Unbroadened lines represent good guidelines for checking the exact position of the full-energy and escape peaks. A good example is the high-energy double peak of 57Co (Fig. 20, panel f), where simulations confirm the position of both radioactive lines at 122.06 and 136.47 keV, and the presence of the Iodine escape peak around ~90 keV. Still, some discrepancies are evident, e.g. at lower energies. One likely cause of the discrepancies below 60 keV, which mostly result in a higher number of counts in the simulated data compared to the real data, and which are particularly visible for the 57Co line at 14.41 keV (Fig. 20, panel a), is the uncertainty about the detailed composition of the radioactive source. Indeed, the radioactive isotopes are contained in a small (1 mm) sphere of “salt”. Simulations including this salt sphere were performed, and a factor of 3.8 difference in the perceived 14.41 keV line strength was found. The true answer lies somewhere between this and the simulation with no source material, since the salt and the radioactive isotope are mixed. For the general calibration simulation, a point-like source of radioactive material not surrounded by salt was used. Another possible explanation is the leakage of secondary electrons from the surface of the detector, leading to less absorbed energy. Further discrepancies at higher energies, visible e.g. in panels d and e of Fig. 21, are smaller than 1%.
4.2.4 BGO response
The determination of a channel-energy relation for the BGO detectors required the additional analysis of the high-energy data taken with the EQM module at SLAC to cover the energy domain between 5 MeV and 20 MeV. In order to combine the radioactive source measurements made with the BGO FMs at MPE and the proton-beam-induced radiation measurements made with the BGO EQM at SLAC (see Section 3.3), it was necessary to take the different gain settings into account by applying a scaling factor. The scaling factor was derived by comparing 22Na and Am/Be measurements (at 511, 1274, and 4430 keV) which were performed at both sites. Due to the very low statistics in the measurements of the high-energy reaction of the Van-de-Graaff beam on the LiF target (Eq. 2), the first and second electron escape peaks from pair annihilation of the 14.6 MeV line could not be considered in this analysis (see also Fig. 16, panel f). They were mainly used as background reference points in order to help locate the exact position of the 17.5 MeV line. In this way, a dataset of 23 detected lines was available for determining the BGO EQM channel-energy relation.
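Deriving a gain scaling factor from lines measured at both sites amounts to a least-squares fit of one set of line centers against the other. The line-center values below are invented for illustration; only the three shared line energies (511, 1274 and 4430 keV) come from the text.

```python
import numpy as np

# channel positions of the lines measured at both MPE and SLAC; the
# numbers are hypothetical, not calibration results
mpe_centers = np.array([310.0, 655.0, 1980.0])   # 511, 1274, 4430 keV at MPE gain
slac_centers = np.array([251.0, 530.5, 1603.8])  # the same lines at SLAC gain

# least-squares scale factor through the origin: slac ≈ s * mpe
s = np.dot(mpe_centers, slac_centers) / np.dot(mpe_centers, mpe_centers)

# bring the SLAC measurements onto the MPE gain scale before the joint fit
slac_on_mpe_scale = slac_centers / s
```

Once rescaled, the high-energy SLAC lines and the MPE radioactive-source lines can enter a single channel-energy fit on a common gain scale.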
4.3 Energy resolution
4.4 Full-energy peak effective area and angular response
4.5 Quantum detection efficiency (QDE) and spatial uniformity of NaI detectors
The detector spatial homogeneity was investigated at PTB/BESSY by means of rasterscans of detector NaI FM 04 at three distinct energies, namely 10, 36 and 60 keV. During each rasterscan, ~700 runs per detector were recorded with a spacing of 5 mm, and for each one the full-energy peak was analysed as previously described in Section 4.1.1. For the rasterscan at 10 keV, the dependence of the line center (in channel #) and the line resolution (in %) on the rasterscan position in mm (DET_X and DET_Y) is shown in Fig. 28, bottom panels. Results for the line area are not shown separately, since the QDE plots previously discussed (Fig. 28, top panels) already reflect the surface behavior of this parameter.
From the spatial dependence of the line center one notices border effects toward the edge of the NaI crystal, which shift the full-energy peak to lower channel numbers, i.e. energies. This effect is of the order of 12% at 10 and 60 keV and of 7% at 36 keV. The line resolution is homogeneous over the whole detector area, with mean values of 25%, 15% and 10% at 10, 36 and 60 keV, respectively. While the first two resolutions are comparable to the results obtained with radioactive sources at 14.4 and 36.6 keV, the 60 keV rasterscan gives an improved resolution compared to the 15% obtained with the 241Am source at 59.4 keV (see Fig. 13, panel f).
After the successful launch of the Fermi mission and the proper activation of its instruments, the 14 GBM detectors started to collect scientific data. The spectral overlap of the two BGO detectors (0.2 to 40 MeV) with the LAT lower limit of ~ 20 MeV opens a promising epoch of investigation of the high-energy prompt and afterglow GRB emission in the yet poorly explored MeV-GeV energy region.
On ground, the angular and energy response of each GBM detector was calibrated using various radioactive sources between 14.4 keV and 4.4 MeV. The channel-energy relations, energy resolutions, on- and off-axis effective areas of the single detectors were determined. Additional calibration measurements were performed for NaI detectors at PTB/BESSY below 60 keV and for BGO detectors at SLAC above 5 MeV, thus covering the whole GBM energy domain. As already mentioned in the introduction, further calibration measurements at system level and after integration onto the spacecraft were carried out (see Table 4). All those measurements crucially contribute to the validation of Monte Carlo simulations of the direct GBM detector response. These incorporate detailed models of the Fermi observatory, including the GBM detectors, instruments, and all in-flight spacecraft components, plus the scattering of gamma-rays from a burst in the spacecraft and in the Earth’s atmosphere . The response as a function of photon energy and direction is finally captured in a Direct Response Matrix (DRM) database, allowing the determination of the true gamma-ray spectrum from the measured data.
The results reported in this paper directly contribute to the final determination of the DRMs, and they fully follow physical expectations. Measurements and fit results for two sample detectors (NaI FM 04 and BGO FM 02) are given in Tables 5 and 6, respectively (see Appendix). Fit parameters for the channel-energy relations and energy resolutions of those detectors are reported in the captions of the corresponding plots (see Figs. 18, 22 and 23). It is worth noting that these parameters reflect the characteristics of all the other NaI and BGO detectors not reported in this paper, showing that all detectors behave the same within statistics.
The channel-energy relation parameters obtained in this analysis are not directly used for in-flight calculations because of the different electronic setups used during detector-level calibration and in flight (see Section 3.1). The same analysis method described in Section 4.1.1 and the same fitting procedures of Section 4.2 were adopted to analyze the data collected during the system-level calibration. These results confirm that the systematic uncertainties in the channel-energy conversions arise from the sources discussed, i.e. the fitting procedure, the limited statistics in the case of the high-energy BGO lines, the electronics, and the non-uniform responses of the detectors, and that they are fully consistent with the measurements presented in this paper.
The GBM detectors will play an important role in the GRB field in the next decade. The unprecedented synergy between the GBM and the LAT will allow burst spectra to be observed over ~7 decades in energy. Moreover, simultaneous observations by the large number of gamma-ray burst detectors operating in the Fermi era will complement each other. A good overview of the currently operating space missions, comparing instrument characteristics such as FoVs, effective areas, localization uncertainties and energy bands, can be found in Table 1 of . The GBM detectors fit into this overall picture by providing a higher trigger energy range (50–300 keV) than e.g. Swift-BAT (15–150 keV) and a spectral coverage up to 40 MeV, an energy limit which can otherwise only be investigated with the LAT and the Mini-Calorimeter on board AGILE. New insights into GRB properties are therefore expected from GBM, thus advancing the study of GRB physics.
The temporal resolution of CTIME and CSPEC data is adjustable: nominal integration times are decreased when a trigger occurs.
Detectors were delivered to MPE for detector-level calibration in batches of four, and shortly thereafter shipped to the US for system-level calibration. Therefore, as the PTB/BESSY facility was only available for a short time, only one batch of NaIs could be calibrated there.
The BGO flight modules were not available for calibration at the time of measurements, since they had already been shipped for system integration.
An important argument driving the decision not to use a collimator for measurements with radioactive sources was the fact that the simulation of the laboratory environment with all its scattering represented a necessary and critical test for the simulation software, which later had to include the spacecraft .
In this case, simulations were not based on measurements performed with detector FM 04, because at that time detector FM 12 was the first to have a complete set of collected spectra.
Calibrated radioactive sources were delivered by AEA Technology QSA GmbH (Braunschweig, Germany) together with a calibration certificate from the Deutscher Kalibrierdienst (DKD, Calibration laboratory for measurements of radioactivity, Germany).
In this case, the double line was fitted with a single Gaussian, since the response dramatically drops above 90° and the fit algorithm is not capable of identifying two separate components.
- 4. Thomson, A., Vaughan, D. (eds.): X-ray Data Booklet. Lawrence Berkeley National Laboratory, Berkeley, CA. http://xdb.lbl.gov (2001). Accessed 3 December 2008
- 8. GBM Proposal, vol. I: Science Investigation and Technical Description. http://gammaray.msfc.nasa.gov/gbm/publications/proposal/ (1999). Accessed 3 December 2008
- 11. GLAST Science Brochure: Exploring Nature's Highest Energy Processes with the Gamma-Ray Large Area Space Telescope. Document ID: 20010070290; Report Number: NAS 1.83:9-107-GSFC, NASA/NP-2000-9-107-GSFC (2001)
- 22. Knoll, G.F.: Radiation Detection and Measurement, 3rd edn. Wiley, New York (1989)
- 31. Schötzig, U., Schrader, H.: Halbwertszeiten und Photonen-Emissionswahrscheinlichkeiten von häufig verwendeten Radionukliden. PTB-Bericht PTB-Ra-16/5, 5th edn. (1998)
- 34. Wallace, M., et al.: Full spacecraft source modeling and validation for the GLAST burst monitor. AIPC 921, 58–59 (2007)