6.1 Introduction, Definitions

In particle physics, calorimetry refers to the absorption of a particle and the transformation of its energy into a measurable signal related to the energy of the particle. In contrast to tracking, a calorimetric measurement implies that the particle is completely absorbed and is thus no longer available for subsequent measurements.

If the energy of the initial particle is much above the threshold of inelastic reactions between this particle and the detector medium, the energy loss process leads to a cascade of lower energy particles, in number commensurate with the incident energy. The charged particles in the shower ultimately lose their energy through the elementary processes, mainly ionization and atomic excitation. The neutral components of the cascade (γ, n, …) contribute through processes described later in this section.

The sum of the elementary losses builds up the calorimetric signal, which may be of ionization, scintillation or Cherenkov nature, or sometimes involve several types of response.

While the definition of calorimetry applies to both the low energy case (no showering) and the high energy case (showering), this section deals mostly with the showering case. Examples of calorimetry without showering are discussed in Sect. 6.2.3.

Only electromagnetic and strong interactions contribute to calorimetric signals, the weak (and gravitational) interactions being much too feeble to contribute. Particles with only weak (or gravitational) interactions will escape direct calorimetric detection. An exception is provided by the neutrino detectors discussed in Sect. 6.4: statistically, when a very large number of neutrinos cross a detector, a tiny fraction of them will interact (weakly) with matter and lead to particle production which can be measured by different methods, including calorimetry.

The measurement of the energy of a particle is the primary goal of calorimetry. In addition, several other important quantities can be extracted, such as impact position and timing, particle direction and identification. These issues are considered in Sects. 6.4–6.6, before addressing specific examples in Sect. 6.7.

In Sect. 6.2 the fundamentals of calorimetry are presented, followed by a discussion of signal formation obtained from the energy deposition (Sect. 6.3).

In recent years, calorimetry in the ATLAS and CMS detectors at the LHC played an essential role in the discovery of the Higgs boson, announced in July 2012.

6.2 Calorimetry: Fundamental Phenomena

Given the large differences between electromagnetic interactions and strong interactions, the following subsections start with electrons and photons, which have only electromagnetic interactions (see however the end of this section), before addressing the case of particles with strong interactions, also called hadrons. The case of muons is considered in a separate subsection.

Fig. 6.1
figure 1

Photon radiation from electron interaction with a nucleus (A, Z)

6.2.1 Interactions of Electrons and Photons with Matter

Several elementary interaction processes of the electron with the medium contribute to the energy loss −dE of an electron of energy E after a path dx: ionization and Møller scattering off atomic electrons, and bremsstrahlung, i.e. radiation in the field of the nuclei of the medium (Fig. 6.1). Electron-electron scattering is considered as ionization (Møller scattering) if the energy lost is smaller (larger) than m_e c^2/2. It is customary to include in the energy loss by ionization atomic excitations, some of which lead to light emission (scintillation). For positrons, Møller scattering is replaced by Bhabha scattering.

The calculated average energy loss is shown in Fig. 6.2 for copper and the average fractional energy loss (−1/E dE/dx) is plotted in Fig. 6.3 for lead [1].

Fig. 6.2
figure 2

Average energy loss of electrons in copper by ionization and bremsstrahlung. Two definitions of the critical energy (E_c and ε_0 (Rossi)) are shown by arrows

Fig. 6.3
figure 3

Relative energy loss of electrons and positrons in lead, with the contributions of ionization, bremsstrahlung, Møller (e−) and Bhabha (e+) scattering and positron annihilation

Figure 6.2 illustrates that the average energy lost by electrons (and positrons, see Fig. 6.3) by ionization is almost independent of their incident energy (above ~1 MeV), with however a small logarithmic increase. For electrons [1, 2]:

$$ -\frac{\mathrm{d}E}{\mathrm{d}x}=k\frac{Z}{A}\frac{1}{\beta^2}\left\{\ln\frac{\gamma m_{\mathrm{e}}c^2\beta\sqrt{\gamma-1}}{I\sqrt{2}}+\frac{1}{2}\left(1-\beta^2\right)-\frac{2\gamma-1}{2\gamma^2}\ln 2+\frac{1}{16}\left(\frac{\gamma-1}{\gamma}\right)^2\right\}\ \left(\mathrm{MeV}/\left(\mathrm{g}/\mathrm{cm}^2\right)\right) $$
(6.1)

and for positrons

$$ -\frac{\mathrm{d}E}{\mathrm{d}x}=k\frac{Z}{A}\frac{1}{\beta^2}\left[\ln\frac{\gamma m_{\mathrm{e}}c^2\beta\sqrt{\gamma-1}}{I\sqrt{2}}-\frac{\beta^2}{24}\left(23+\frac{14}{\gamma+1}+\frac{10}{\left(\gamma+1\right)^2}+\frac{4}{\left(\gamma+1\right)^3}\right)\right]\ \left(\mathrm{MeV}/\left(\mathrm{g}/\mathrm{cm}^2\right)\right) $$
(6.2)

In these formulae, A (Z) is the number of nucleons (protons) in the nuclei of the medium, I is the mean excitation energy of the medium, often approximated by 16 Z^0.9 eV, the constant k = 4π N_A r_e^2 m_e c^2 = 0.3071 MeV/(g/cm^2), N_A is Avogadro's number and \( {\mathrm{r}}_{\mathrm{e}}=\frac{1}{4{\uppi \varepsilon}_0}\cdot \frac{{\mathrm{e}}^2}{{\mathrm{m}}_{\mathrm{e}}{\mathrm{c}}^2} \) = 2.818 × 10^−15 m is the classical radius of the electron.
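As a minimal numerical sketch of how Eq. (6.1) is used, the snippet below evaluates the ionization loss of a 10 GeV-scale electron in copper (Z = 29, A = 63.546), taking the approximate mean excitation energy I = 16 Z^0.9 eV quoted above; the result should sit on the ionization plateau of Fig. 6.2, near 1.5–2 MeV/(g/cm^2), with only a slow logarithmic rise at higher energy.

```python
import math

ME_C2 = 0.511          # electron rest energy [MeV]
K = 0.3071             # 4*pi*N_A*r_e^2*m_e*c^2 [MeV/(g/cm^2)], text value

def ionization_loss_electron(e_kin, Z, A):
    """Eq. (6.1): average ionization loss -dE/dx for electrons,
    in MeV/(g/cm^2); e_kin is the kinetic energy in MeV."""
    gamma = 1.0 + e_kin / ME_C2
    beta2 = 1.0 - 1.0 / gamma**2
    I = 16.0 * Z**0.9 * 1e-6   # mean excitation energy [MeV], text approximation
    log_term = math.log(gamma * ME_C2 * math.sqrt(beta2)
                        * math.sqrt(gamma - 1.0) / (I * math.sqrt(2.0)))
    bracket = (log_term + 0.5 * (1.0 - beta2)
               - (2.0 * gamma - 1.0) / (2.0 * gamma**2) * math.log(2.0)
               + ((gamma - 1.0) / gamma)**2 / 16.0)
    return K * (Z / A) / beta2 * bracket

# 10 MeV electron in copper: close to the plateau of Fig. 6.2
loss_cu = ionization_loss_electron(10.0, 29, 63.546)
```

Raising the energy by a factor 100 increases the loss by only ~30%, illustrating the "almost independent of energy" behaviour noted above.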

For positrons the annihilation with an electron of the medium has to be considered. The cross section of this process (σ_an = Zπ r_e^2/γ for γ >> 1) decreases rapidly with increasing energy of the positron. At very low energy, the annihilation rate is:

$$ R= NZ\ \uppi\ {\mathrm{r}_{\mathrm{e}}}^2\ c\ \left[{\mathrm{s}}^{-1}\right], $$
(6.3)

with N = ρ N A/A, the number of atoms per unit volume.

This rate corresponds to a lifetime in lead of about 10^−10 s [3]. Positron annihilation plays a key role in some technical applications (Positron Emission Tomography, Chap. 7).
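The quoted lifetime can be checked directly from Eq. (6.3); this sketch uses standard material constants for lead (ρ = 11.35 g/cm^3, A = 207.2, Z = 82) and recovers a lifetime of order 10^−10 s.

```python
import math

R_E = 2.818e-13      # classical electron radius [cm]
N_A = 6.022e23       # Avogadro's number [1/mol]
C = 2.998e10         # speed of light [cm/s]

def annihilation_lifetime(rho, A, Z):
    """Low-energy positron lifetime 1/R from Eq. (6.3), in seconds."""
    N = rho * N_A / A                      # atoms per cm^3
    rate = N * Z * math.pi * R_E**2 * C    # annihilations per second
    return 1.0 / rate

tau_pb = annihilation_lifetime(11.35, 207.2, 82)   # lead: ~5e-11 s
```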

Figure 6.2 shows that the average energy loss by bremsstrahlung (photon emission in the electromagnetic field of a nucleus) increases almost linearly as a function of incident energy (meaning that the fractional energy loss is almost constant, as shown in Fig. 6.3).

This is described by introducing the radiation length X 0 defined by:

$$ -\mathrm{d}E/E=\mathrm{d}x/{X}_0 $$
(6.4)

It follows from the definition that X 0 is the mean distance after which an electron has lost, by radiation, all but a fraction 1/e of its initial energy. X 0 also has a simple meaning in terms of photon conversion (see below).

While X_0 should show a small increase at low energy, corresponding to the small drop in the fractional energy loss visible in Fig. 6.3, it soon reaches a high-energy limit, which has been calculated by Bethe and Heitler [3, 4] and more recently by Tsai [5], and tabulated by Dahl [1] for different materials. In the seminal book by Rossi [6] the formula for X_0, based on the Bethe–Heitler formalism, reads:

$$ 1/{X}_0=4\ \alpha\ \left({N}_{\mathrm{A}}/A\right)\ \left\{Z\left(Z+1\right){r_{\mathrm{e}}}^2\ln \left(183{Z}^{-1/3}\right)\right\}\ \left[{\mathrm{cm}}^2\ {\mathrm{g}}^{-1}\right] $$
(6.5)

The Z^2 term reflects the fact that bremsstrahlung results from a coupling of the initial electron to the electromagnetic field of the nucleus, somewhat screened by the electrons (log term), and augmented by a direct contribution from the electrons (Z^2 replaced by Z(Z + 1)).
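As an illustration, Eq. (6.5) can be evaluated numerically; for lead it gives X_0 ≈ 5.8 g/cm^2, within roughly 10% of the tabulated Tsai value of 6.37 g/cm^2 [1], the residual difference reflecting the approximations of the Rossi form.

```python
import math

ALPHA = 1.0 / 137.036   # fine-structure constant
N_A = 6.022e23          # Avogadro's number [1/mol]
R_E = 2.818e-13         # classical electron radius [cm]

def x0_rossi(Z, A):
    """Radiation length in g/cm^2 from the Rossi/Bethe-Heitler form, Eq. (6.5)."""
    inv_x0 = (4.0 * ALPHA * (N_A / A) * Z * (Z + 1)
              * R_E**2 * math.log(183.0 * Z**(-1.0 / 3.0)))
    return 1.0 / inv_x0

x0_pb = x0_rossi(82, 207.2)   # lead: ~5.8 g/cm^2 (tabulated: 6.37)
```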

The radiation length of a compound, or mixture, can be calculated using:

$$ 1/{X}_0=\Sigma\ {w}_{\mathrm{j}}/{X}_{\mathrm{j}} $$
(6.6)

where the w j are the fractions by weight of the nuclear species j of the mixture or of the compound.
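A short sketch of Eq. (6.6) applied to water; the elemental radiation lengths of hydrogen and oxygen (about 63.0 and 34.2 g/cm^2) are taken here as tabulated inputs, not derived.

```python
def x0_mixture(weights_and_x0):
    """Eq. (6.6): 1/X0 = sum of w_j / X0_j for a compound or mixture."""
    return 1.0 / sum(w / x0 for w, x0 in weights_and_x0)

# water H2O: weight fractions from atomic masses, elemental X0 in g/cm^2
w_h = 2 * 1.008 / 18.015
w_o = 16.00 / 18.015
x0_water = x0_mixture([(w_h, 63.0), (w_o, 34.2)])   # ~36 g/cm^2
```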

The spectrum of photons with energy k radiated by an electron of energy E traversing a thin slab of material (expressed as a function of y = k/E) has the characteristic “bremsstrahlung” spectrum:

$$ \mathrm{d}\sigma/\mathrm{d}k=\frac{A}{X_0 N_{\mathrm{A}}k}\left(\frac{4}{3}-\frac{4}{3}y+y^2\right). $$
(6.7)

At very high energies a number of effects, considered at the end of this subsection, modify the spectrum.
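A useful consistency check on Eq. (6.7): weighting the spectrum by the radiated energy k removes the 1/k factor, and the integral of (4/3 − 4/3 y + y^2) over y from 0 to 1 equals exactly 1, so on average an electron radiates its full energy over one radiation length, recovering the defining relation (6.4). The sketch below verifies the integral numerically.

```python
# Midpoint-rule check that the energy-weighted bremsstrahlung spectrum
# integrates to 1 over y = k/E, as required for consistency with Eq. (6.4).
def brems_kernel(y):
    return 4.0 / 3.0 - 4.0 / 3.0 * y + y * y

n = 100000
dy = 1.0 / n
integral = sum(brems_kernel((i + 0.5) * dy) for i in range(n)) * dy
```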

Another important quantity, the critical energy, can be introduced by examining Fig. 6.2. The critical energy E_c for electrons (or positrons) in a given medium is defined as the energy at which the energy loss by radiation in a thin slab equals the energy loss by ionization. A slightly different definition, ε_0, introduced by Rossi, results from considering the relative energy loss as fully independent of energy (see Fig. 6.2). The critical energy ε_0 is well described in dense materials (see Fig. 6.4) by:

$$ {\varepsilon}_0=610\ \mathrm{MeV}/\left(Z+1.24\right). $$
(6.8)
Fig. 6.4
figure 4

Critical energy for the chemical elements, using Rossi’s definition [6]. The fits shown are for solids and liquids (solid line) and gases (dashed line)

As will be seen below, X 0 and E c (or ε 0) are among the important parameters characterizing the formation of electromagnetic showers.
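As a numerical sketch of Eq. (6.8): for lead (Z = 82) and copper (Z = 29) the fit reproduces the critical energies (about 7.4 MeV and 20 MeV respectively) that are used further below when comparing showers in the two materials.

```python
def critical_energy(Z):
    """Eq. (6.8): Rossi critical energy for solids and liquids, in MeV."""
    return 610.0 / (Z + 1.24)

eps_pb = critical_energy(82)   # lead:   ~7.3 MeV
eps_cu = critical_energy(29)   # copper: ~20 MeV
```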

Several processes contribute to the interaction of photons with matter, the relative importance of which depends primarily on their energy.

Pair Production

This process is dominant as soon as photon energies are above a few times 2 m_e c^2. The graph responsible for the process (Fig. 6.5) shares the vertices of the bremsstrahlung graph.

Fig. 6.5
figure 5

Electron-positron pair creation in the field of a nucleus (A, Z)

The dominant part (Z^2) is due to the nucleus, while the electrons contribute proportionally to Z. The process of pair production has been studied in detail [7]. The pair production cross section can be written, in the complete screening limit at high energy, as:

$$ \mathrm{d}\sigma/\mathrm{d}x=\frac{A}{X_0 N_{\mathrm{A}}}\left(1-\frac{4}{3}x\left(1-x\right)\right), $$
(6.9)

where x = E/k is the fraction of the photon energy k taken by the electron of the pair. Integrating the cross section over E gives the pair production cross section:

$$ \sigma=\frac{7}{9}\,\frac{A}{X_0 N_{\mathrm{A}}}. $$
(6.10)

After 9/7 of an X 0, the probability that a high-energy photon survives without having materialized into an electron-positron pair is 1/e. In the pair production process the energy of the recoil nucleus is small, typically of the order of m ec 2, implying that at high photon energy (k >> m ec 2) the electron and the positron are both collinear with the incident photon. When the reaction takes place with an electron, the momentum transfer can be much higher leading to “triplets” with one positron and two electrons in the final state.

As for bremsstrahlung the cross section is affected at very high energy by processes considered later.
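The exponential attenuation implied by Eq. (6.10) can be sketched as follows: with a conversion length of 9/7 X_0, the probability that a photon has not yet converted after t radiation lengths is exp(−7t/9).

```python
import math

def photon_survival(t_x0):
    """Probability that a high-energy photon has not yet converted to an
    e+e- pair after t_x0 radiation lengths (pair cross section of Eq. 6.10)."""
    return math.exp(-7.0 * t_x0 / 9.0)

# survival probability is 1/e after 9/7 X0, as stated in the text
p = photon_survival(9.0 / 7.0)
```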

Compton Effect

The QED cross-section for photon-electron scattering (Klein-Nishina [8]) can be written in the limit k >> m_e c^2, using x = k/m_e c^2, as:

$$ \sigma=\pi {r_{\mathrm{e}}}^2\,\frac{\ln 2x+1/2}{x}\ \left[{\mathrm{cm}}^2\right]. $$
(6.11a)

The related probability for Compton scattering after the traversal of a material slab of thickness dt and mass per unit volume ρ is:

$$ \phi =\sigma \rho\ {N}_{\mathrm{A}}\ Z/A\ \mathrm{d}t. $$
(6.11b)

For high Z (e.g. lead) the maximum of the Compton cross section and the pair production cross-section are of the same order of magnitude, while for lighter materials the maximum of the Compton cross section is higher. This is illustrated in Fig. 6.6 (from [1]) where carbon and lead are compared.

Fig. 6.6
figure 6

Photon total cross section as a function of the photon energy in carbon and lead, with the contributions of different processes. σ p.e. corresponds to the atomic photoelectric effect and κ nuc (κ e) corresponds to pair production in the nuclear (electron) field
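The crossover between Compton scattering and pair production visible in Fig. 6.6 can be sketched by comparing the per-atom Compton cross section from Eq. (6.11a) (Z electrons per atom) with the asymptotic pair cross section of Eq. (6.10); the tabulated X_0 = 6.37 g/cm^2 for lead is taken as an input assumption here. At 10 MeV in lead, pair production already dominates.

```python
import math

R_E = 2.818e-13          # classical electron radius [cm]
ME_C2 = 0.511            # electron rest energy [MeV]
N_A = 6.022e23           # Avogadro's number [1/mol]

def compton_sigma_atom(k_mev, Z):
    """High-energy Klein-Nishina limit, Eq. (6.11a), times Z electrons [cm^2]."""
    x = k_mev / ME_C2
    return Z * math.pi * R_E**2 * (math.log(2.0 * x) + 0.5) / x

def pair_sigma_atom(A, x0_gcm2):
    """Asymptotic pair-production cross section per atom, Eq. (6.10) [cm^2]."""
    return (7.0 / 9.0) * A / (x0_gcm2 * N_A)

# lead: A = 207.2, tabulated X0 = 6.37 g/cm^2
sig_compton = compton_sigma_atom(10.0, 82)
sig_pair = pair_sigma_atom(207.2, 6.37)
```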

The differential Compton cross-section, with θ denoting the scattering angle between the initial and final photon, and η the angle between the vector perpendicular to the scattering plane and the polarization vector of the initial photon (in case it is linearly polarized), reads, ε being the ratio between the scattered and the incident photon energy, ε = 1/(1 + (k/m_e c^2)(1 − cos θ)):

$$ \mathrm{d}\sigma /\mathrm{d}\varOmega =0.5\ {r}_{\mathrm{e}}^2\ \left(\varepsilon +1/\varepsilon -2\ {\sin}^2\theta\ {\cos}^2\eta \right). $$
(6.12)

At low energy (k not larger than a few MeV), the η-dependence can be exploited for polarization measurements (Compton polarimetry). In the same energy range the probability of backward scattering is also sizeable.

Photoelectric Effect

For sufficiently low photon energies the atomic electrons can no longer be considered as free. The cross section for photon absorption, followed by electron emission (photoelectric effect) presents discontinuities whenever the photon energy crosses the electron binding energy of a deeper shell.

Explicit calculations [4] show that above the K-shell the cross section decreases like E^−3.5.

In the section devoted to shower formation, the relevance of the photoelectric effect will be considered. Coherent (Rayleigh) scattering has a comparatively smaller cross section than the photoelectric effect, and its role in shower formation is negligible.

High Energy Effects (LPM)

In the collinear approximation of bremsstrahlung, the longitudinal momentum difference q || between the initial electron (energy E) and the sum of the final electron and photon (energy k) is equal to

$$ q_{||}=\frac{{m_{\mathrm{e}}}^2c^3\,k}{2E\left(E-k\right)}. $$
(6.13)

This quantity can be extremely small, for example 0.002 eV/c for a 25 GeV electron radiating a 10 MeV photon. Such a small longitudinal momentum transfer implies a large formation length L_f (L_f q_|| ≥ h/2π), about 100 μm in the above example. Secondary interactions (like multiple scattering) taking place over this distance perturb the final state and in general diminish the bremsstrahlung cross section, and the pair production cross section in the case of photon interactions. Coherent interaction of the produced photons with the medium (dielectric effect) also affects, and reduces, the bremsstrahlung cross-section.
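The numbers quoted above follow directly from Eq. (6.13); a minimal sketch with energies in eV and ħc = 197.3 eV·nm reproduces both the 0.002 eV/c momentum transfer and the ~100 μm formation length.

```python
ME_C2 = 0.511e6        # electron rest energy [eV]
HBAR_C = 197.327e-9    # hbar*c [eV*m]

def q_parallel_c(E_ev, k_ev):
    """Eq. (6.13): longitudinal momentum transfer (times c), in eV."""
    return ME_C2**2 * k_ev / (2.0 * E_ev * (E_ev - k_ev))

def formation_length(E_ev, k_ev):
    """Formation length L_f ~ hbar*c / (q_par*c), in metres."""
    return HBAR_C / q_parallel_c(E_ev, k_ev)

# 25 GeV electron radiating a 10 MeV photon, as in the text
qc = q_parallel_c(25e9, 10e6)       # ~0.002 eV
lf = formation_length(25e9, 10e6)   # ~1e-4 m, i.e. ~100 um
```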

Such effects, already anticipated by Landau and Pomeranchuk [9] were considered in detail by several authors, and were measured by the experiment E146 at SLAC. A recent overview is given in [10]. The high k/E part of the bremsstrahlung spectrum is comparatively less affected (because of much larger q || values) while the low k/E part is significantly influenced for E above ~100 GeV, see Fig. 6.7. Only at much higher energies (>10 TeV) is the pair production cross-section affected.

Fig. 6.7
figure 7

Normalized Bremsstrahlung cross-section k dσ/dk in lead as a function of the fraction of momentum taken by the radiated photon

In crystalline media the strong intercrystalline electrical fields may result in coherent suppression or enhancement of bremsstrahlung. Net effects depend on the propagation direction of the particle with respect to the principal axes of the crystal [11].

Hadronic Interactions of Photons

Photons with energies above a few GeV can behave similarly to the Vector Mesons (ρ, ω and ϕ) carrying the same quantum numbers, and in this way develop strong interactions with hadronic matter. These interactions can be parameterized with the Vector Meson Dominance model. Using the Current-Field Identity [12], the amplitude for interactions of virtual photons γ* of four-momentum q is:

$$ \mathcal{A}\left(\gamma^{\ast}\mathrm{A}\to\mathrm{B}\right)=\frac{e}{2\gamma_{\rho}}\,\frac{m_{\rho}^2}{m_{\rho}^2-q^2}\,\mathcal{A}\left(\rho\mathrm{A}\to\mathrm{B}\right)+\text{equivalent terms for }\omega\text{ and }\phi\text{ mesons}. $$
(6.14)

Various photo- and electro-production cross sections were calculated and confronted with experiment. As an example, the ratio of hadron production to electron-positron pair production in the interaction of a 20 GeV photon is about 1/200 for hydrogen and 1/2500 for lead [13]. While this ratio is small, the effect on shower characteristics and on particle identification can in certain cases be significant (for example when studying CP violating ππ final states in K_L decays, for which πeν decays are a background source; see Ref. [14]).

6.2.2 Electromagnetic Showers

When a high energy electron, positron or photon impinges on a thick absorber, it initiates an electromagnetic cascade as pair production, bremsstrahlung and Compton effects generate electrons/positrons and photons of lower energy. Electron and positron energies eventually fall below the critical energy; the particles then dissipate their energy by ionization and excitation rather than by particle production. Photons propagate somewhat deeper into the material, being ultimately absorbed primarily via the photoelectric process.

Given the large number of particles (electrons, positrons, photons) present in a high energy electromagnetic cascade (more than one thousand for a 10 GeV electron or photon in lead), global variables have been sought to describe the average shower behaviour. Scale variables, such as X 0 as unit length, can be used to parameterize the radiation effects. However, since energy losses by dE/dx and by radiation depend in a different way on material characteristics, one should not expect perfect ‘scaling’.

Analytical Description

In an analytical description [6] a first simplification consists in ‘factorizing’ the longitudinal development and the lateral spread of showers, with the assumption that the lateral excursion of electrons and photons around the direction of the initial particle does not affect the longitudinal behaviour and in particular the ‘total track length’ (see below).

As for any statistical process the first goal is to obtain analytical expressions for average quantities. Particularly relevant (for a shower of initial energy E_0) are: c(E_0,E,t), the average number of electrons plus positrons with energy between E and E + dE at depth t (expressed in radiation lengths), and the integral distribution \( C(E_0,E,t)=\int_{0}^{E}c(E_{0},E',t)\,\mathrm{d}E' \); n(E_0,E,t) and N(E_0,E,t) are the corresponding functions for photons.

Using the probability distribution functions of the physical effects driving the shower evolution (Bremsstrahlung, Compton, dE/dx, pair production) one can write and solve [15, 16] ‘evolution equations’ correlating C(E 0,E,t) and N(E 0,E,t). In the so called ‘approximation B’ of Rossi, the energy loss of electrons by dE/dx is taken as constant, and the pair production and bremsstrahlung cross-sections are approximated by their asymptotic expression.

As an illustration, Fig. 6.8 shows the number of electrons and positrons as a function of depth, in showers initiated by an electron and by a photon of energy E_0, expressed in units of the Rossi critical energy ε_0 (see Sect. 6.2.1). These distributions are integrated over E from 0 to the maximum possible. The area under the curves is to a good approximation equal to E_0/ε_0, in accordance with the physical meaning of ε_0. The two sets of curves also show that a photon-initiated shower is shifted on average by about 1 X_0 to larger depths compared to an electron (or positron) initiated one.

Fig. 6.8
figure 8

Number of charged secondaries as a function of shower depth, for an electron initiated shower (full lines) and a photon initiated one (dashed lines), calculated analytically by Snyder [15, 16]. The numbers attached to each set of curves indicate ln(E_0/ε_0)

The total track length \( TTL=\int_{0}^{\infty}C(E_{0},0,t)\,\mathrm{d}t \) determines the energy transferred to the calorimeter medium by dE/dx, the source of the calorimeter signal.

Results from Monte Carlo Simulations

While analytical descriptions are useful guidelines, many applications require the use of Monte-Carlo (MC) simulations reproducing step by step, in a statistical manner, the physical effects governing the shower formation. For several decades, the standard simulation code for electromagnetic cascades has been EGS4 [17]. A recent alternative is encoded in the Geant4 framework [18].

As an illustration of the additional information obtained by this MC approach, Fig. 6.9 shows results of a 30 GeV electron shower simulation in iron (E c = 22 MeV). The energy deposition per slab (dt = 0.5X 0) is shown as a histogram, with the fitted analytical function (see below) superimposed. This distribution is close, but not identical, to the distribution of electrons above a certain threshold (here taken as 1.5 MeV) crossing successive planes (right-hand scale): the energy deposition is slightly below the number of electrons at the beginning of the shower, and somewhat higher at the end. Multiple scattering (see below), affecting more the low energy shower tail, is one effect contributing to this discrepancy. The distribution of photons above the same threshold of 1.5 MeV is shifted to larger X 0 with respect to the electron distribution, reflecting the higher penetration power of photons already mentioned.

Fig. 6.9
figure 9

EGS4 simulation of a 30 GeV electron-induced cascade in iron. The histogram is the fractional energy deposition per radiation length, and the curve is a gamma function fit to the distribution. The full (open) points represent the number of electrons (photons) with energy greater than 1.5 MeV crossing planes at X 0/2 intervals

As a further illustration of the power of MC simulations, Fig. 6.10 displays longitudinal profiles of 10 GeV electron showers obtained by Geant4 simulation in lead, copper and aluminium. Since the dE/dx per X_0 is relatively more important in low Z materials than in high Z materials, one expects showers to penetrate more deeply in high Z materials, a fact borne out by the simulations. Illustrating the energy dependence of shower parameters, Fig. 6.11 displays the shower energy deposition as a function of depth (shower profiles) for a range of incident electron energies (1 GeV to 1 TeV) in lead. The position of the shower maximum shows the expected logarithmic dependence on incident energy. In the parameterisation of shower profiles by Longo and Sestili [19],

$$ F\left(E,t\right)=E_0\,b\,\frac{\left(bt\right)^{a-1}\mathrm{e}^{-bt}}{\Gamma\left(a\right)} $$
(6.15)
Fig. 6.10
figure 10

Fractional energy deposition per longitudinal slice of 1 X 0 for 10 GeV electrons in aluminium (full line), copper (dashed) and lead (dash-dotted) (Geant4)

Fig. 6.11
figure 11

Fractional energy deposition in lead, per longitudinal slice of 1 X 0, for electron induced showers of 1 GeV (full line), 10 GeV (dashed), 100 GeV (dash-dotted) and 1 TeV (dotted) (Geant4)

one finds accordingly t_max = (a − 1)/b, well fitted by t_max = ln(y) + C_i (C_i = 0.5 for photons, −0.5 for electrons, and y = E/E_c).
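A sketch of this parameterisation, using E_c ≈ 7.4 MeV for lead: the shower maximum moves from about 6.7 X_0 at 10 GeV to about 11.3 X_0 at 1 TeV, i.e. by ln(100) ≈ 4.6 radiation lengths, matching the logarithmic trend visible in Fig. 6.11.

```python
import math

def shower_max_depth(e_mev, ec_mev, photon=False):
    """Depth of the shower maximum in radiation lengths:
    t_max = ln(E/Ec) + Ci, with Ci = -0.5 for electrons, +0.5 for photons."""
    return math.log(e_mev / ec_mev) + (0.5 if photon else -0.5)

# electron showers in lead (Ec ~ 7.4 MeV)
t10 = shower_max_depth(10e3, 7.4)     # 10 GeV
t1000 = shower_max_depth(1e6, 7.4)    # 1 TeV
```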

Finally, Fig. 6.12 illustrates the imbalance between electrons and positrons in an electromagnetic shower: in a rather material-independent way, about 75% of the energy deposited by charged particles is due to electrons, and 25% to positrons. This imbalance is due to the Compton and photoelectric effects, which generate only electrons. It is more pronounced towards the end of the shower.

Fig. 6.12
figure 12

Energy deposited in longitudinal slices of 1 X 0 by electrons (open symbols) and positrons (closed symbols) in a 10 GeV electron shower developing in lead (EGS4)

Lateral Shower Development

Bremsstrahlung and pair creation on nuclei take place without appreciable momentum transfer to the (heavy) nuclei. Bremsstrahlung on electrons of the medium and Compton scattering involve however some momentum transfer. For example, in the Compton interaction of a 2 MeV (0.5 MeV) photon, 6% (16%) of the scattered photons are emitted with an angle larger than 90° with respect to the initial photon direction z. Another important effect contributing to the transverse spread in a cascade is multiple scattering of electrons and positrons.

After a displacement of length l along z, in a medium of radiation length X 0, the projected rms angular deviation along the transverse directions x and y, of an electron of momentum p is:

$$ {\uptheta}_{\mathrm{x,y}}=\frac{\mathrm{E}_{\mathrm{s}}}{\sqrt{2}}\frac{1}{\mathrm{p}\upbeta\mathrm{c}}\sqrt{l/\mathrm{X}_0}$$
(6.16)

and the lateral displacement is

$$ {\updelta}_{\mathrm{x,y}}=\frac{\uptheta_{\mathrm{x,y}}{l}}{\sqrt{3}} $$
(6.17)

with E s = m ec 2 √(4π/α) = 21.2 MeV. The lateral displacement contributes directly to the transverse shower broadening. If, after a step of length l, the electron emits a bremsstrahlung photon, the emission will take place along the direction of the electron after l, thus at some angle (rms θ x,y in both directions) with respect to the initial electron. Since the photon travels on average a considerable distance before materializing (9/7 X 0 if the photon is above a few MeV, significantly more at lower energy, see Fig. 6.6), the angular deviation of the electron gives a second, large contribution to the shower broadening.

In order to quantify the transverse shower spread, it is customary to use as parameter the Molière radius defined as:

$$ {\rho}_{\mathrm{M}}={E}_{\mathrm{s}}\ {X}_0/{E}_{\mathrm{c}}, $$
(6.18)

where ρ_M equals √6 times the transverse displacement of an electron of energy E_c after a path (without radiation or energy loss) of 1 X_0. The most relevant physical meaning of ρ_M comes from Monte-Carlo simulations, which show that about 87% (96%) of the energy deposited by electrons/positrons in a shower is contained in a cylinder of radius 1 (2) ρ_M.

Going back to the expressions for X_0 and E_c, it can be seen that their ratio is proportional to A/Z, and thus ρ_M is rather independent of the nuclear species and is essentially governed by the material density. Calculations of ρ_M for some pure materials and mixtures are reported in Table 6.1.

Table 6.1 Properties of calorimeter materials

Comparing as an illustration lead and copper, one observes that the transverse dimensions of showers expressed in mm are essentially the same (because the transverse profiles are almost identical when expressed in ρ_M (Fig. 6.14) and the ρ_M's are similar), while the shower in copper is (in mm) a factor 2.5 longer (because X_0(copper) = 14.3 mm against 5.6 mm for lead).
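This comparison can be sketched numerically with Eq. (6.18), using the X_0 and E_c values quoted in the text (X_0 = 5.6 mm, E_c ≈ 7.4 MeV for lead; X_0 = 14.3 mm, E_c ≈ 20 MeV for copper): the two Molière radii come out within about 1 mm of each other, while the longitudinal scales differ by the factor 2.5.

```python
E_S = 21.2    # multiple-scattering energy scale [MeV]

def moliere_radius_mm(x0_mm, ec_mev):
    """Eq. (6.18): rho_M = E_s * X0 / Ec, here with X0 given in mm."""
    return E_S * x0_mm / ec_mev

rho_pb = moliere_radius_mm(5.6, 7.4)     # lead
rho_cu = moliere_radius_mm(14.3, 20.0)   # copper
length_ratio = 14.3 / 5.6                # copper/lead longitudinal scale
```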

Fig. 6.13
figure 13

90% containment radius R90%(full line), in Molière radius ρ M as a function of shower depth, for 100 GeV electron showers developing in lead. For comparison the longitudinal energy deposition is also shown (dashed line, arbitrary scale) (Geant4)

Fig. 6.14
figure 14

Fractional energy deposition in cylindrical layers of thickness 0.1 ρ M, coaxial with the incident particle direction, for 100 GeV electron-induced showers in aluminium (dotted line), copper (dashed line) and lead (dash-dotted) (Geant4)

On the other hand, despite being much shorter (in mm), the shower in lead contains about 2.5 times more electrons (of lower energy on average) than the shower in copper, in inverse proportion to their respective critical energies (7.4 MeV for lead against 20 MeV for copper).

The lateral spread of showers is on average narrow at the beginning, where the shower content is still dominated by particles of energy much larger than E c. In the low-energy tail the shower broadens. Monte Carlo simulations allow studying profiles at various depths. This is illustrated in Fig. 6.13 which shows the 90% containment radius as a function of the shower depth and in Fig. 6.14 which shows the radial profile of showers in three different materials. The broader width in the first 2 or 3 X 0 can be associated with backscattering (albedo) from the shower, which competes with the narrow core of the shower in its very early part. There is almost no dependence of shower transverse profiles (integrated over depth) as a function of initial electron energy.

6.2.3 Homogeneous Calorimeters

For reasons explained later, large calorimeter systems are often ‘sampling’ calorimeters. These calorimeters are built as a stack of passive layers, in general of high Z material for electromagnetic calorimeters, alternating with layers of a sensitive medium responding to (‘sampling’ the) electrons/positrons of the shower, produced mostly in the passive layers.

A homogeneous calorimeter is built only from the sensitive medium. Provided all other conditions are satisfied (full containment of the shower, efficient collection and processing of the signal) homogeneous calorimeters give the best energy resolution, because sampling calorimeters are limited by ‘sampling fluctuations’ (see Sect. 6.2.4). It is instructive to study first the limitations in the “ideal” conditions of homogeneous calorimeters.

Fig. 6.15
figure 15

Pulse height spectra recorded using a sodium iodide scintillator and a Ge (Li) detector. The source is a gamma radiation from the decay of 108mAg and 110mAg. Energies of peaks are labelled in keV

We first discuss low-energy applications, where the absorption does not involve showering. As an illustration, Fig. 6.15 shows the extremely narrow lines observed [20] when exposing a Germanium (Li-doped) crystal to a γ source of 108mAg and 110mAg. The resolution, at the level of one part in a thousand, is far better than that obtained with NaI, a frequently used scintillating crystal (see below). Several quantitative studies of the energy resolution of high purity Ge crystals, operated at low temperature (77 K) for γ spectroscopy, have been made. A rather comprehensive discussion is given in [21]. After subtraction of the electronics noise, the width of the higher energy lines (above 0.5 MeV) is narrower than calculated assuming statistical independence of the created electron-hole pairs (~2.9 eV are needed to create such a pair). The reason for this was first understood by Fano [22]. Fundamentally, the pairs created are not statistically independent, but are correlated by the constraint that the total energy loss must be precisely equal to the energy of the incident photon (in the limit of a device in which all energy losses lead to a detected signal, in a proportional way, the line width vanishes).

Calling σ the rms of the energy ε used to create an electron-hole pair, the actual resolution should be σ/(ε√N_p), smaller than 1/√N_p by a factor √F, where F = (σ/ε)^2 is the Fano factor. Monte-Carlo simulations [23] reproduce the phenomenon and give F ~ 0.1 for semiconductor devices, in reasonable agreement with measurements [21].
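As a numerical sketch: with ε ≈ 2.9 eV and F ≈ 0.1, the Fano-limited width (FWHM = 2.355 σ) of a 1 MeV γ line in germanium comes out near 1.3 keV, i.e. about one part in a thousand, consistent with the resolution quoted for Fig. 6.15 (electronics noise adds to this in practice).

```python
import math

def ge_fwhm_ev(e_gamma_ev, eps_ev=2.9, fano=0.1):
    """Fano-limited line width (FWHM, in eV) of a semiconductor detector:
    relative sigma = sqrt(F/N_p) with N_p = E/eps pairs."""
    n_pairs = e_gamma_ev / eps_ev
    sigma_rel = math.sqrt(fano / n_pairs)
    return 2.355 * sigma_rel * e_gamma_ev

fwhm_1mev = ge_fwhm_ev(1.0e6)   # ~1.3 keV intrinsic width at 1 MeV
```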

When two energy loss mechanisms compete, e.g. ionization and scintillation, the total energy constraint remains, but with a binomial sharing between the two mechanisms. It is thus expected that summing up the two contributions, assumed to be read out independently, will lead to an improved energy resolution (it should be remembered, however, that a certain fraction of the energy lost in the medium goes to heat).

This was first demonstrated with a liquid argon gridded cell exposed to La ions with an energy of 1.2 GeV/nucleon traversing the cell [24]. In this set-up both scintillation photons and electrons from electron-ion pairs were detected (see Sect. 6.3.3 for the collection mechanism). More recently, detailed studies of scintillation and ionization yields were made in liquid xenon using 662-keV γ-rays from a 137Cs source [25]. With decreasing voltage applied to the sensitive liquid Xe volume, the scintillation signal increases while the ionization one decreases, as expected from recombination of electrons-ions giving rise to additional photons. The spectra obtained with scintillation alone, ionization alone, and their sum are shown in Fig. 6.16, together with the correlation between the two signals.

Fig. 6.16
figure 16

Correlation between scintillation and ionization signals [25]. Scintillation alone (top-left), ionization alone (top-right), sum of both (bottom-left), 2-D correlation between scintillation and ionization (bottom-right)

The ratio between scintillation and ionization also depends on the nature and energy of the particle making the deposit. Low-energy nuclear recoils are highly ionizing, giving rise to more recombination and thus an increased light-to-charge ratio.

As discussed in Sect. 6.3.1, noble liquid detectors (using either argon or xenon) have been developed in the last decade which allowed pushing the limits of dark matter searches. They rely heavily on the existence of two correlated signals (ionization and scintillation) for a given energy deposit, exploiting in particular the ratio between the two to distinguish nuclear recoils from photon or muon background (see Sect. 6.7.2).

When the energy loss per unit length becomes very high (i.e. for low values of β and/or high values of the electric charge Ze for ions) saturation effects are observed in liquid ionization detectors, and also in scintillators. Empirically, the effective scintillation (ionization) signal dL/dx (dI/dx) can be parameterized with “Birks law” [26]:

$$ \mathrm{d}L/\mathrm{d}x = L_0\,\frac{\mathrm{d}E/\mathrm{d}x}{1 + k_{\mathrm{B}}\,\mathrm{d}E/\mathrm{d}x}, $$
(6.19)

in which L0 is the luminescence at low specific ionization density. The effect in plastic scintillators, for which kB ~ 0.01 g cm−2 MeV−1, results in suppression (“quenching”) of the light emission by the high density of ionized and excited molecules. Deviations from Birks’ law have been observed for high-Z ions [27].
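A minimal sketch of Eq. (6.19), evaluating the quenching factor dL/dx divided by the unquenched response L0·dE/dx for a few values of the specific energy loss (kB = 0.01 g cm−2 MeV−1 as quoted above; the dE/dx values are illustrative):

```python
def birks_signal(dEdx, L0=1.0, kB=0.01):
    """Effective light yield per unit path length, Birks' law (Eq. 6.19).

    dEdx : specific energy loss in MeV/(g cm^-2)
    kB   : Birks constant in g cm^-2 MeV^-1 (~0.01 for plastic scintillator)
    """
    return L0 * dEdx / (1.0 + kB * dEdx)

# quenching factor relative to the unquenched response L0 * dE/dx
for dEdx in (2.0, 20.0, 200.0):   # mip-like ... heavily ionizing
    q = birks_signal(dEdx) / dEdx
    print(f"dE/dx = {dEdx:6.1f} MeV/(g cm^-2): quenching factor {q:.2f}")
```

A mip (dE/dx ~ 2 MeV/(g cm−2)) is quenched by only ~2%, while a heavily ionizing fragment at 200 MeV/(g cm−2) loses two thirds of its nominal light yield, illustrating why saturation matters for hadronic showers but not for electromagnetic ones.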

In liquid ionization detectors the effect is associated with electron-ion recombination. It depends on the electric field, in magnitude and direction with respect to the ionizing track. A typical value in liquid argon is kB ~ 0.04 g cm−2 MeV−1 for an electric field of 1 kV/cm perpendicular to the track, with kB inversely proportional to E for E < 1 kV/cm [28].

Saturation effects are not relevant for electron- or photon-induced showers (at least below a few TeV) because the track density remains comparatively low (however, depending on the technique used for sampling calorimeters, internal amplification, as in calorimeters with gaseous readout, may saturate at high track density). Saturation effects do affect hadronic showers because of slow, highly ionizing fragments from nuclear break-up and slow proton recoils.

The generally excellent energy resolution of homogeneous calorimeters used for electromagnetic showers is affected by several instrumental effects. One of the most fundamental, the existence of a threshold energy Eth below which a shower electron produces no signal, will be illustrated in Sect. 6.3.2 when dealing with Cherenkov-based electromagnetic calorimeters. Other effects include:

  • longitudinal and transverse shower containment

  • efficiency of light collection

  • photoelectron statistics

  • electron carrier attachment (impurities)

  • space charge effects,…

These effects will be considered when dealing with examples where they are particularly relevant. The closer a detector approaches the intrinsic resolution—like for Ge crystals—the more important are the above limitations. In practice, large calorimeter systems for high energy showers based on homogeneous semi-conductor crystals are unaffordable. Scintillating crystals and pure noble liquids are the best compromise between performance and cost, but do suffer from other limitations, as illustrated in examples given below.

6.2.4 Sampling Calorimeters and Sampling Fluctuations

In the simplest geometry, a sampling calorimeter consists of plates of dense, passive material alternating with layers of sensitive material.

For electromagnetic showers, passive materials with low critical energy (thus high Z) are used, maximizing the number of electrons and positrons in a shower to be sampled by the active layers. In practice, lead is most frequently used. Uranium has also been used to optimize the response to hadrons (Sect. 6.2.7), and tungsten in cases where compactness is at a premium.

The thickness t of the passive layers (in units of X0) determines the sampling frequency, i.e. the number of times a high-energy electron or photon shower is ‘sampled’. Intuitively, the thinner the passive layers (i.e. the higher the sampling frequency), the better the resolution should be. The thickness u of the active layer is usually characterized by the sampling fraction fS, which is the ratio of the dE/dx of a minimum-ionizing particle in the active layer to the sum of the dE/dx in the active and passive layers:

$$ f_{\mathrm{S}} = \frac{u\,(\mathrm{d}E/\mathrm{d}x)_{\mathrm{active}}}{u\,(\mathrm{d}E/\mathrm{d}x)_{\mathrm{active}} + t\,(\mathrm{d}E/\mathrm{d}x)_{\mathrm{passive}}}\qquad \left[u,t\ \mathrm{in\ g\,cm}^{-2},\ \mathrm{d}E/\mathrm{d}x\ \mathrm{in\ MeV/(g\,cm}^{-2})\right]. $$
(6.20)
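As a worked example of Eq. (6.20), the following sketch evaluates fS for a 1.4 mm lead / 2 mm plastic-scintillator cell; the densities and mip dE/dx values used are approximate textbook numbers, not taken from this section:

```python
def sampling_fraction(u, t, dEdx_active, dEdx_passive):
    """Sampling fraction f_S of Eq. 6.20; u, t in g cm^-2, dE/dx in MeV/(g cm^-2)."""
    return u * dEdx_active / (u * dEdx_active + t * dEdx_passive)

# illustrative cell: 1.4 mm lead (rho ~ 11.35 g/cm^3) + 2 mm scintillator (rho ~ 1.03 g/cm^3)
t = 0.14 * 11.35   # passive layer thickness in g cm^-2
u = 0.20 * 1.03    # active layer thickness in g cm^-2
# approximate mip dE/dx: ~1.94 MeV/(g cm^-2) in plastic, ~1.12 MeV/(g cm^-2) in lead
fS = sampling_fraction(u, t, dEdx_active=1.94, dEdx_passive=1.12)
print(f"f_S = {fS:.2f}")
```

With these numbers fS ≈ 0.18, i.e. a mip deposits roughly one fifth of its energy in the active medium of such a cell.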

This ‘sampling’ of the energy results in a loss of information and hence in additional ‘sampling fluctuations’. An approximation [29, 30] for these fluctuations in electromagnetic calorimeters can be derived using the total track length (TTL) of a shower initiated by an electron or photon of energy E. The signal is approximated by the number Nx of e+ or e− traversing the active signal planes, spaced by a distance (t + u). This number Nx of crossings is

$$ N_{\mathrm{x}} = \mathrm{TTL}/(t+u) = E/\left(\varepsilon_0\,(t+u)\right) = E/\Delta E, $$

∆E being the energy loss in a unit cell of thickness (t + u). Assuming statistical independence of the crossings, the fluctuations in Nx represent the ‘sampling fluctuations’ σ(E)samp,

$$ \begin{aligned}\sigma(E)_{\mathrm{samp}}/E &= \sigma(N_{\mathrm{x}})/N_{\mathrm{x}} = 1/\sqrt{N_{\mathrm{x}}} = \sqrt{\Delta E\,(\mathrm{GeV})/E\,(\mathrm{GeV})}\\ &= 0.032\,\sqrt{\Delta E\,(\mathrm{MeV})/E\,(\mathrm{GeV})} = a/\sqrt{E}.\end{aligned} $$
(6.21)

The detector-dependent constant a is the ‘sampling term’ of the energy resolution (see also below). For illustration, for a lead/scintillator calorimeter with 1.4 mm lead plates interleaved with 2 mm scintillator planes, ∆E = 2.2 MeV, one estimates a ~ 5% for 1 GeV electromagnetic showers. This represents a lower limit (the experimental value is closer to 7 to 8%), as threshold effects in signal emission and the angular spread of electrons around the shower axis worsen the resolution [29]. In addition, a large fraction of the shower particles are produced as e+e− pairs, reducing the number of statistically independent crossings Nx.
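The estimate above can be reproduced numerically (a sketch; the densities and mip dE/dx values used to build ∆E are approximate):

```python
import math

# energy lost by a mip in one sampling cell (1.4 mm Pb + 2 mm scintillator), in MeV:
# thickness (cm) * density (g/cm^3) * mip dE/dx (MeV/(g cm^-2)) for each layer
dE_cell = 0.14 * 11.35 * 1.12 + 0.20 * 1.03 * 1.94   # ~2.2 MeV
a = 0.032 * math.sqrt(dE_cell)                       # sampling term of Eq. 6.21
print(f"Delta E = {dE_cell:.1f} MeV  ->  a = {a:.1%} per sqrt(E/GeV)")
```

This reproduces ∆E ≈ 2.2 MeV and a ≈ 5%, the lower limit quoted in the text.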

The sampling fraction f S has practical consequences, considering the actual signal produced by the calorimeter. If f S is too small, the signal is small and may be affected by electronics noise and possibly other technical limitations due to the chosen readout technique (see below).

The dominant part of the calorimeter signal is actually not produced by minimum-ionizing particles, but rather by the low-energy electrons and positrons crossing the signal planes. Defining the fractional response fR of a given layer i as the ratio of the energy lost in the active layer to the sum of the energies lost in the active plus passive layers, one has

$$ {f}_{\mathrm{R}}^{\mathrm{i}}={E}_{\mathrm{active}}^{\mathrm{i}}/({E}_{\mathrm{active}}^{\mathrm{i}}+{E}_{\mathrm{passive}}^{\mathrm{i}}) $$
(6.22)

with the constraint that Σi (E i active + E i passive) = E.

Experimentally one finds that fR (taking all layers together) is significantly smaller than fS [31]. The ratio fR/fS, usually called ‘e/mip’ for obvious reasons, can be as low as 0.6 when the Z of the passive material (lead) is much larger than the Z of the active one (plastic scintillator, liquid argon). This effect, well reproduced by Monte-Carlo simulations, is to some extent due to the “transition effect” between the passive and active material, but also to the fact that a significant fraction of the electrons produced in the high-Z passive material by pair production or Compton scattering do not have enough energy to exit this layer and are thus not sampled. The same effect induces a depth dependence of e/mip, which decreases by a few percent towards the end of the shower.

Taking into account an energy-independent contribution from electronics noise b, and a minimum asymptotic value of the relative energy resolution c (constant term, due for example to inhomogeneities in materials, imperfections of calibration, …), the energy resolution of a sampling calorimeter is in general written as

$$ \Delta E/E=a/\surd E\oplus b/E\oplus c $$
(6.23)

Experimentally it has been observed that the same relation also holds for homogeneous calorimeters, in general with smaller ‘sampling terms’ a, although these do not originate from sampling fluctuations but from other limitations (see Sect. 6.2.3).
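Equation (6.23) combines the three terms in quadrature; a short sketch with hypothetical parameter values (a = 10%·√GeV, b = 0.3 GeV, c = 0.7%, chosen only for illustration) shows how the dominant term changes with energy:

```python
import math

def resolution(E, a, b, c):
    """sigma/E = a/sqrt(E) (+) b/E (+) c, terms added in quadrature (Eq. 6.23); E in GeV."""
    return math.sqrt((a / math.sqrt(E))**2 + (b / E)**2 + c**2)

# hypothetical sampling calorimeter: sampling term a, noise term b, constant term c
for E in (1.0, 10.0, 100.0, 1000.0):
    print(f"E = {E:6.0f} GeV: sigma/E = {resolution(E, 0.10, 0.3, 0.007):.3f}")
```

At low energy the noise term b/E dominates, in the intermediate range the sampling term a/√E, and asymptotically the resolution approaches the constant term c.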

6.2.5 Physics of the Hadronic Cascade

By analogy with electromagnetic showers, the energy degradation of high-energy hadrons proceeds through an increasing number of (mostly) strong interactions with the calorimeter material. However, the complex hadronic and nuclear processes produce a multitude of effects that determine the performance of practical instruments, making hadronic calorimeters more complicated instruments to optimize and resulting in a significantly worse intrinsic resolution compared to the electromagnetic one. Experimental studies by many groups helped to unravel these effects and permitted the design of high-performance hadron calorimeters.

Fig. 6.17
figure 17

Particle spectra produced in the hadronic cascade initiated by 100 GeV protons absorbed in lead (left). The energetic component is dominated by pions, whereas the soft spectra are composed of photons and neutrons. The ordinate is in ‘lethargic’ units and represents the particle track length, differential in log E. The integral of each curve gives the relative fluence of the particle [32]. On the right, same figure for 100 GeV electrons in lead, showing the much simpler structure, dominated by electrons and photons (hadrons are down by more than a factor 100)

The hadronic interaction produces two classes of secondary processes. First, energetic secondary hadrons are produced with momenta typically a fair fraction of the primary hadron momentum, i.e. at the GeV scale. Second, in hadronic collisions with the material nuclei, a significant part of the primary energy is consumed by nuclear processes such as excitation, nucleon evaporation, spallation, etc., generating particles with energies characteristic of the nuclear MeV scale.

The complexity of the physics is illustrated in Fig. 6.17, which shows the energy spectra of the major shower components (weighted by their track length in the shower) averaged over many cascades, induced by 100 GeV protons in lead. These spectra are dominated by electrons, positrons, photons, and neutrons at low energy. The structure in the photon spectrum at approximately 8 MeV reflects a (n,γ) reaction and is a fingerprint of nuclear physics; the line at 511 keV results from e+e annihilation photons. These low-energy spectra encapsulate all the information relevant to the hadronic energy measurement. Deciphering this message becomes the story of hadronic calorimetry.

The energetic component contains protons, neutrons, charged pions and photons from neutral pion decays. Due to the charge independence of hadronic interactions, on average approximately one third of the pions produced will be π0s: fπ0 ≈ 1/3. These neutral pions decay to two photons, π0 → γγ, before reinteracting hadronically, and induce an electromagnetic cascade proceeding according to its own laws of electromagnetic interactions (see Sect. 6.2.2). This physics process acts like a ‘one-way diode’, transferring energy from the hadronic part to the electromagnetic component, which will not contribute further to hadronic processes.

As the number of energetic hadronic interactions increases with increasing incident energy, so will the fraction of the electromagnetic cascade. This simple picture of the hadronic showering process leads to a power-law dependence of the two components [33, 34]; naively, the electromagnetic component is Fem = 1 − (1 − fπ0)^n, n denoting the number of shower generations induced by a particle with energy E. For the hadronic fraction Fh one finds in a more realistic evaluation Fh = (E/E0)^k. The parameter k expresses the energy dependence and is related to the average multiplicity m of a collision, with k = ln(1 − fπ0)/ln m. The parameter E0 denotes the average energy necessary for the production of a pion, approximately E0 ≈ 2 GeV; with the multiplicity m ≈ 6–7 of hadrons produced in a hadronic collision, k ≈ −0.2. Values of Fh are of order 0.5 (0.3) for 100 (1000) GeV showers. As the energy of the incident hadron increases, it is doomed to dissipate its energy in a flash of photons. Were one to extrapolate this power law to the highest particle energies detected calorimetrically, E ≈ 10^20 eV, more than 98% of the hadronic energy would be converted to electromagnetic energy!
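The power law for Fh can be evaluated directly (a sketch using the E0 ≈ 2 GeV and k ≈ −0.2 quoted above):

```python
def hadronic_fraction(E_GeV, E0=2.0, k=-0.2):
    """F_h = (E/E0)^k with E0 ~ 2 GeV, k ~ -0.2 (see text)."""
    return (E_GeV / E0)**k

for E in (100.0, 1000.0):
    Fh = hadronic_fraction(E)
    print(f"E = {E:6.0f} GeV: F_h = {Fh:.2f}, F_em = {1 - Fh:.2f}")

# at the highest energies measured calorimetrically (~1e20 eV = 1e11 GeV)
print(f"F_em(1e11 GeV) = {1 - hadronic_fraction(1e11):.3f}")
```

This reproduces the values quoted in the text: Fh ≈ 0.5 at 100 GeV, ≈ 0.3 at 1 TeV, and an electromagnetic fraction above 98% at 10^20 eV.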

The low-energy nuclear part of the hadronic cascade has very different properties, but carries the dominant part of the energy in the hadronic sector. In energetic hadron collisions with the nuclei of the calorimeter material, nucleons are struck, initiating an ‘intra-nuclear’ cascade. In the subsequent steps, the intermediate nucleus de-excites, in general through a spallation reaction, evaporating a considerable number of nucleons, accompanied by few-MeV γ-emission. The binding energy of the nucleons released in these collisions is taken from the energy of the incident hadron. The number of these low-energy neutrons is large: ~20 neutrons/GeV in lead. The fraction of the total energy going into binding energy depends on the incident energy and may be as high as ~20–40%. These neutrons will ultimately be captured by the target nuclei, resulting in delayed nuclear photon emission (on the ~μs timescale). The energy lost to binding energy is therefore, in general, not detected (‘invisible’) in practical calorimeters.

Fig. 6.18
figure 18

Characteristic components of proton-initiated cascades in lead. With increasing energy the em component increases [32]

In Fig. 6.18 the energy dependence of the electromagnetic, fast hadron and nuclear components is shown. The response of a calorimeter is determined by the sum of the responses to these different components which react with the passive and active parts of the calorimeter in their specific ways (see Sect. 6.2.7). Contributions from neutrons and photons from nuclear reactions, which have consequences for the performance of these instruments, are also shown in Fig. 6.18. The total energy carried by photons from nuclear reactions is substantial: only a fraction, however, will be recorded in practical instruments, as most of these photons are emitted with a considerable time delay (~1 μs). The event-by-event fluctuations in the invisible energy dominate the fluctuations in the detector signal, and hence the energy resolution. The road to high-performance hadronic calorimetry has been opened by understanding how to compensate for these invisible energy fluctuations [35].

6.2.6 Hadronic Shower Profile

The total cross section for hadrons is only weakly energy dependent in the range of a few to several hundred GeV, relevant for calorimetry. For protons, the total pp cross section σtot is approximately 39 mb. For pion-proton collisions σtot(πp) = 2/3 σtot(pp) is naively expected, i.e. 26 mb, compared to the measured value of σtot(π+p) ≈ 23 mb. For hadronic calorimetry the inelastic cross sections, σinel(pA) or σinel(πA), determine the value of the corresponding interaction length, λint = A/{NA σinel(hadron, A)}. On geometrical grounds σinel(hadron, A) is expected to scale as A^2/3 σinel(hadron, p), close to the measured approximate scaling A^0.71 σinel(hadron, p), and therefore λint ≈ A^0.29/{NA σinel(hadron, p)} [g cm−2].

This characteristic length λint is the mean free path of high-energy hadrons between hadronic collisions and sets the scale for the longitudinal hadronic shower profile. The probability P(z) for a hadron traversing a distance z without undergoing an interaction is therefore P(z) = exp(−z/λint). The analogy with the characteristic distance X0 of the electromagnetic cascade is evident. In analogy to the parameterization of electromagnetic showers, the longitudinal profile of hadronic showers can be parameterized in the form
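A short sketch of the survival probability P(z) = exp(−z/λint); note that this describes only the depth of the first interaction, not shower containment, which requires far greater depth (Fig. 6.21):

```python
import math

def survival_probability(z_over_lambda):
    """P(z) = exp(-z/lambda_int): probability of no interaction within depth z,
    with z expressed in units of lambda_int."""
    return math.exp(-z_over_lambda)

# depth (in units of lambda_int) at which 99% of incident hadrons have interacted
depth_99 = -math.log(0.01)
print(f"P(1 lambda) = {survival_probability(1.0):.2f}, "
      f"99% first-interaction depth = {depth_99:.1f} lambda")
```

About 37% of hadrons traverse one full interaction length without interacting, and ~4.6 λint are needed before 99% have started a shower; the shower itself then extends over several more interaction lengths.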

$$ \mathrm{d}E/\mathrm{d}x=c\left\{w{\left\{x/{X}_0\right\}}^{\alpha -1}\exp \left(- bx/{X}_0\right)+\left(1-w\right){\left(x/\lambda \right)}^{\alpha -1}\exp \left(- dx/\lambda \right)\right\}. $$
(6.24)

The overall normalization is given by c; α, b, d, w are free parameters and x denotes the distance from the shower origin [36].
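Equation (6.24) can be sketched numerically; the parameter values below (α, b, d, w, and lead with X0 ≈ 0.56 cm and λint ≈ 17 cm) are purely illustrative choices, not fitted values from [36]:

```python
import math

def dEdx_profile(x, X0, lam, c=1.0, alpha=2.0, b=0.5, d=1.0, w=0.5):
    """Longitudinal hadronic shower profile of Eq. 6.24.
    alpha, b, d, w are fit parameters; the defaults here are illustrative only."""
    em = w * (x / X0)**(alpha - 1) * math.exp(-b * x / X0)          # em-like core
    had = (1 - w) * (x / lam)**(alpha - 1) * math.exp(-d * x / lam)  # hadronic tail
    return c * (em + had)

# lead: X0 ~ 0.56 cm, lambda_int ~ 17 cm -> em peak early, long hadronic tail
for x_cm in (1.0, 5.0, 20.0, 50.0):
    print(f"x = {x_cm:5.1f} cm: dE/dx = {dEdx_profile(x_cm, 0.56, 17.0):.3f} (arb. units)")
```

The two terms make the qualitative behaviour explicit: the X0-scale term produces the early electromagnetic peak, while the λ-scale term carries the slowly decaying hadronic tail.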

Longitudinal pion-induced shower profiles are shown in Fig. 6.19 for different energies, together with the analytical shower fits. The longitudinal energy deposit rises to a maximum, followed by a slow decrease due to the predominantly low-energy, neutron-rich part of the cascade. Proton-induced showers show a slightly different longitudinal shape due to differences in the first few collisions. Shower profiles in different materials, when expressed as a function of λint, exhibit approximate scaling in λint, in analogy to the approximate scaling of electromagnetic showers in X0, see Fig. 6.20. Also shown are the transverse shower distributions: the relatively narrow core is dominated by the high-energy (mostly electromagnetic) component. The tails in the radial distributions are due to the soft, neutron-rich component. In Fig. 6.21 the fractional containment as a function of energy is shown, exhibiting approximately the expected logarithmic energy dependence for a given containment [38, 39].

Fig. 6.19
figure 19

Measured longitudinal shower distributions for pions at three energies together with the shower parameterization [37]

Fig. 6.20
figure 20

Longitudinal shower development induced by hadrons in different materials, showing approximate scaling in λ. The shower distributions are measured with respect to the face of the calorimeter (left ordinate). The transverse distributions as a function of shower depth show scaling in λ for the narrow core. The 90% containment radius is much larger and does not scale with λ (right ordinate) [30]

Fig. 6.21
figure 21

Measured average fractional containment in iron of infinite transverse dimension as a function of thickness and various pion energies [38, 39]

These results indicate that for 98% containment at the 100 GeV scale a calorimeter depth of 9 λint is required. At the LHC, where single-particle energies in the multi-hundred GeV range and jets in the multi-TeV range have to be well measured, hadrons are typically measured in 10 λint. For the next jump in collider energy, as is presently studied e.g. for the “Future Circular Collider” (FCC), particle and jet energies are approximately a factor 10 higher. For adequate containment, i.e. at the 98% level, calorimeter systems with ~12 λint will be required, see Fig. 6.22 [40].

Fig. 6.22
figure 22

Total thickness, expressed in λ, to contain up to 98% of a jet as a function of the jet transverse momentum. Mean and peak refer to different statistical measures of containment [40]

6.2.7 Energy Resolution of Hadron Calorimeters

The average properties of the hadronic cascade are a reflection of the intrinsic event-by-event fluctuations which determine the energy resolution. Most importantly, fluctuations in the hadronic component are correlated with the number of spallation neutrons and (delayed) nuclear photons and hence with the energy consumed to overcome the binding energy; these particles from the nuclear reactions will contribute differently (in general less) to the measurable signal.

Let ηe be the efficiency for observing a signal Eevis (visible energy) from an electromagnetic shower, i.e., Eevis = ηe E(em); let ηh be the corresponding efficiency for purely hadronic energy to give a measurable signal in an instrument. Decomposing a hadron-induced shower into the em fraction Fem and a purely hadronic part Fh, the measured ‘visible’ energy Eπvis for a pion-induced shower is

$$ E^{\uppi}_{\mathrm{vis}} = \eta_{\mathrm{e}} F_{\mathrm{em}} E + \eta_{\mathrm{h}} F_{\mathrm{h}} E = \eta_{\mathrm{e}}\left(F_{\mathrm{em}} + (\eta_{\mathrm{h}}/\eta_{\mathrm{e}})\,F_{\mathrm{h}}\right)E, $$
(6.25)

where E is the incident pion energy. The ratio of observable signals induced by electromagnetic and hadronic showers, usually denoted ‘e/π’, is therefore

$$ E^{\uppi}_{\mathrm{vis}}/E^{\mathrm{e}}_{\mathrm{vis}} = (e/\pi)^{-1} = F_{\mathrm{em}} + (\eta_{\mathrm{h}}/\eta_{\mathrm{e}})\,F_{\mathrm{h}} = 1 + \left(\eta_{\mathrm{h}}/\eta_{\mathrm{e}} - 1\right)F_{\mathrm{h}}. $$
(6.26)

In general ηe ≠ ηh: in this case, the average response of a hadron calorimeter as a function of energy will not be linear, because Fh decreases with incident energy. More subtly, for ηh ≠ ηe, event-by-event fluctuations in the Fh and Fem components produce event-by-event signal fluctuations and impact the energy resolution of such instruments. The relative response ‘e/π’ turns out to be the most important yardstick for gauging the performance of a hadronic calorimeter.
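The non-linearity implied by Eqs. (6.25) and (6.26) can be illustrated with a sketch, assuming ηh/ηe = 0.7 and the Fh power law of Sect. 6.2.5 (both the ratio and the power-law parameters are illustrative/approximate choices):

```python
def pion_response(E_GeV, eta_ratio=0.7, E0=2.0, k=-0.2):
    """Visible pion energy per unit incident energy, in units of the em response
    eta_e (Eqs. 6.25/6.26); eta_ratio = eta_h/eta_e (assumed value)."""
    Fh = (E_GeV / E0)**k              # hadronic fraction, Sect. 6.2.5
    return (1.0 - Fh) + eta_ratio * Fh

for E in (10.0, 100.0, 1000.0):
    r = pion_response(E)
    print(f"E = {E:6.0f} GeV: pi/e response = {r:.3f}, e/pi = {1.0/r:.2f}")
```

With ηh/ηe < 1 the response per GeV rises with energy (here from ~0.78 at 10 GeV to ~0.91 at 1 TeV): the calorimeter is non-linear for hadrons, and e/π exceeds unity throughout.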

A convenient (albeit non-trivial) reference scale for the calorimeter response is the signal from minimum-ionizing particles (mip), which in practice might be an energetic through-going muon, rescaled to the energy loss of a mip. Let e/mip be the signal produced by an electron relative to that of a mip: if a mip depositing α GeV and an electron depositing β GeV produce signals in the ratio β/α, the instrument is characterized by e/mip = 1. Similarly, the relative response to the purely hadronic component of the hadron shower is ηh Fh E/mip, or h/mip, which can be decomposed into h/mip = (fion ion/mip + fn n/mip + fγ γ/mip), with fion, fn, fγ denoting the average fractions of ionizing particles, neutrons and nuclear photons.
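The decomposition of h/mip can be sketched as a weighted sum; the fractions and component responses below are entirely hypothetical placeholders, chosen only to show the bookkeeping, not measured values:

```python
def h_over_mip(f_ion, f_n, f_gamma, ion_mip, n_mip, gamma_mip):
    """h/mip as the fraction-weighted sum of the hadronic shower components
    (all inputs are illustrative; see text for the definition)."""
    assert abs(f_ion + f_n + f_gamma - 1.0) < 1e-9   # fractions must sum to 1
    return f_ion * ion_mip + f_n * n_mip + f_gamma * gamma_mip

# hypothetical numbers: ionizing part samples like a mip, neutrons over-respond
# in hydrogenous readout, nuclear photons are partly lost (delayed emission)
print(f"h/mip = {h_over_mip(0.6, 0.2, 0.2, 1.0, 1.8, 0.5):.2f}")
```

The point of the exercise: h/mip is a weighted average of very different component responses, which is exactly why the choice of absorber and readout material (through n/mip and γ/mip) gives a handle on e/π.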

Practical hadron calorimeters are usually built as sampling devices; the fraction of the energy sampled in the active layers, fS (Eq. 6.20), is typically small, a few percent or less of the total incident energy. The energetic hadrons lose relatively little energy (≤10%) through ionization before being degraded to energies so low that nuclear processes dominate. Therefore, the response of the calorimeter will be strongly influenced by the values of n/mip and γ/mip in both the absorber and the readout materials.

This simple analysis already provides the following qualitative conclusions for instruments with e/π ≠ 1, as shown conceptually in Fig. 6.23:

Fig. 6.23
figure 23

Conceptual response of a calorimeter to electrons and hadrons. The curves are for a ‘typical’ sampling calorimeter with electromagnetic resolution of σ/E = 0.1/√E(GeV), with hadronic resolution of σ/E = 0.5/√E(GeV) and e/π = 1.4. The hadron-induced cascade fluctuates between almost completely electro-magnetic and almost completely hadronic energy deposit, broadening the response and producing non-Gaussian tails

  • fluctuations in Fπ0 are a major contribution to the energy resolution;

  • the average value ⟨Fem⟩ increases with energy: such calorimeters have a non-linear energy response to hadrons;

  • these fluctuations are non-Gaussian and therefore the energy resolution scales more weakly than 1/√E.

This understanding of the impact of shower fluctuations suggests ‘tuning’ the e/π response of a calorimeter in the quest to achieve e/π = 1, and thus optimizing the performance [41, 42].

It is instructive to analyze n/mip, because of the richness and intricacies of n-induced nuclear reactions and the very large number of neutrons with En < 20 MeV. In addition to elastic scattering a variety of processes take place in high-Z materials such as (n, n’), (n, 2n), (n, 3n), (n, fission). The ultimate fate of neutrons with energies En < 1–2 MeV is dominated by elastic scattering; cross-sections are large (~ barns) and mean free paths short (a few centimetres); the energy loss is ~1/A (target) and hence small. Once thermalized, a neutron will be captured, accompanied by γ-emission.

This abundance of neutrons gives a privileged role to hydrogen, which may be present in the readout material. In an n-p scatter, on average, half of the neutron kinetic energy is transferred. The recoil proton, if produced in the active material, contributes directly to the calorimeter signal, i.e., is not sampled like a mip (a 1 MeV proton has a range of ~20 μm in scintillator). The second important n-reaction is the production of excitation photons through the (n,n’,γ) reaction [42].

This difference in neutron response between high-Z absorbers and hydrogen-containing readout materials has an important consequence. Consider the contribution of n/mip as a function of the sampling fraction fS. The mip signal will be inversely proportional to the thickness of the absorber plates, whereas the signal from proton recoils will not be affected by changing fS: the n/mip signal will increase with decreasing fS. Changing the sampling fraction therefore makes it possible to alter, i.e. to ‘tune’, e/π. Tuning of the ratio Rd = passive material [mm]/active material [mm] is a powerful tool for acting on e/π [41]. This approach works well for high-Z absorbers with a relatively large fission cross section, accompanied by multiple neutron emission. For practical scintillator thicknesses, optimized ratios tend to imply rather thick absorbers, with concomitant significant sampling fluctuations and reduced signals.

Fig. 6.24
figure 24

Experimental observation of the consequences of e/π ≠ 1. Shown is the measured pion response in under-compensating, compensating and over-compensating calorimeters; (a) energy resolution σ/E · √E as a function of the pion energy, showing deviations from scaling for non-compensating devices. (b) Signal per GeV as a function of pion energy, exhibiting signal non-linearity for non-compensating detectors [41]

How tightly are the various fluctuating contributions to the invisible energy correlated with the average behaviour, as measured by e/π? A quantitative answer needs rather complete shower and signal simulations and confirmation by measurement. Two examples are shown in Fig. 6.24. One observes a significant reduction in the fluctuations and an intrinsic hadronic energy resolution of σ/E ≈ 0.2/√E(GeV) for instruments with e/π ≈ 1 [39, 41, 42]. The intrinsic hadron resolution of a lead-scintillator sampling calorimeter may even be as good as σ/E ≈ 0.13/√E(GeV) [43].

Detectors achieving compensation for the loss of non-detectable (‘invisible’) energy, i.e., e/π = 1, are called ‘compensated’ calorimeters.

There are several further negative consequences of e/π ≠ 1 in addition to the reduced resolution. The energy resolution, which no longer scales with 1/√E, is usually parameterized as σ/E = a1/√E ⊕ a2, where a ‘constant’ term a2 is added quadratically, even though physics arguments suggest a2 = a2(E). Since the fraction of π0 production Fπ0 increases with energy, such calorimeters have a non-linear energy response. Furthermore, given that the average hadronic fraction Fh is different for pions (Fh(π)) and protons (neutrons) (Fh(p)), typically Fh(π) ~ 0.85 Fh(p), the response of calorimeters with e/π ≠ 1 depends on the hadron species [42].

The effects of e/π have been observed [41] (Fig. 6.24) and evaluated quantitatively [42]. Measurements and Monte Carlo simulations of the response of various calorimeter configurations are shown in Figs. 6.25 and 6.26.

Fig. 6.25
figure 25

Contributions to and total energy resolution of 10 and 100 GeV hadrons in scintillator calorimeters as a function of thickness of (a) uranium plates and (b) lead plates. The scintillator thickness is 2.5 mm in both cases. The dots in the curves are measured resolution values of actual calorimeters [42]

Fig. 6.26
figure 26

Monte Carlo simulation of the effects of e/π ≠ 1 on energy resolution (a) and linearity (b) of hadron calorimeters [42]

Besides achieving “intrinsic compensation” with e/π = 1, effective compensation can be achieved by determining event by event the em fraction Fem and the hadronic fraction Fh independently. In instruments with a fine-grained longitudinal and lateral subdivision the different em and hadronic shower shapes provide an approximately independent determination of the two components and the basis for their off-line weighting, resulting in an effective e/π = 1 (see Sect. 6.7.5). Alternatively, the em and hadronic components of the shower may be measured independently with a dual readout: one active medium is sensitive only to Cherenkov radiation, predominantly caused by the em component, while the charged particles are measured e.g. with a scintillator, see Sect. 6.3.3.

To complete the analysis of the contributions to the energy resolution we need to consider sampling fluctuations, assuming fully contained showers and no degradation due to energy leakage. For electromagnetic calorimeters a simple explanation and an empirical parameterization hold (Eq. 6.21): σsamp(em)/E = c(em) · (ΔE(MeV)/E(GeV))^1/2, where ΔE is the energy lost in one sampling cell and c(em) ≈ 0.05 to 0.06 for typical absorber and readout combinations.

Similar arguments apply for the hadronic cascade; empirically, one has observed [30, 43] that

$$ {\sigma}_{\mathrm{samp}}\left(\mathrm{h}\right)/E=c\left(\mathrm{h}\right)\cdotp {\left(\Delta E\left(\mathrm{MeV}\right)/E\left(\mathrm{GeV}\right)\right)}^{1/2}\ \mathrm{with}\ c\left(\mathrm{h}\right)\approx 0.10. $$
(6.27)

For high-performance hadron calorimetry sampling fluctuations cannot be neglected.
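Equation (6.27) in a one-line sketch, using the ΔE = 2.2 MeV sampling cell quoted in Sect. 6.2.4:

```python
import math

def sigma_samp_h(E_GeV, dE_MeV, c_h=0.10):
    """Hadronic sampling fluctuations, Eq. 6.27: sigma/E = c(h)*sqrt(dE(MeV)/E(GeV))."""
    return c_h * math.sqrt(dE_MeV / E_GeV)

# cell with dE = 2.2 MeV (the lead/scintillator example of Sect. 6.2.4), 100 GeV hadron
print(f"sigma_samp/E = {sigma_samp_h(100.0, 2.2):.3f}")
```

For a 100 GeV hadron this gives ~1.5%, to be compared in quadrature with the intrinsic term of ≈ 0.2/√E ≈ 2%: the two contributions are of similar size, which is why sampling fluctuations cannot be neglected in high-performance hadron calorimetry.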

The foundations of modern, optimized hadron calorimetry can be summarized as follows:

  • the key performance parameter is e/π = 1, which guarantees linearity, E^−1/2 scaling of the energy resolution, and the best intrinsic resolution;

  • by proper choice of the type and thickness of active and passive materials the response can be tuned to obtain (or approach) e/π ~ 1;

  • the intrinsic resolution in practical hadron calorimeters can be as good as (σ/E) · √E ≲ 0.2;

  • sampling fluctuations contribute at the level of σ/E ≈ 0.10 (ΔE(MeV)/E(GeV))^1/2.

6.2.8 Muons in a Dense Material

The velocity dependence of the average energy loss by collisions of singly charged particles (muons, pions, protons, …) with electrons of the traversed medium differs slightly from formula (6.1) and is given by:

$$ -\frac{\mathrm{d}E}{\mathrm{d}x} = k\,\frac{Z}{A}\,\frac{1}{\beta^2}\left[\ln\frac{2m_{\mathrm{e}}c^2\gamma^2\beta^2}{I} - \beta^2 - \frac{\delta}{2}\right]\quad \left(\mathrm{MeV}/(\mathrm{g\,cm}^{-2})\right) $$
(6.28)

where δ ≈ ln(γ) accounts for screening effects at high energy. As a function of the energy of the incident particle, the most probable value shows a slow increase (relativistic rise) followed by a plateau whose value depends on the density of the material. The energy loss reaches a minimum for γβ ~ 3, corresponding to muon energies of a few hundred MeV.
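Equation (6.28) can be evaluated directly; the sketch below neglects the density correction δ (acceptable near the minimum) and uses approximate values for iron (Z = 26, A = 55.85, I ≈ 286 eV) and for the coefficient k, so the result is only indicative:

```python
import math

K = 0.307   # approximate value of the coefficient k of Eq. 6.28, in MeV cm^2/g

def dEdx(betagamma, Z, A, I_eV, delta=0.0):
    """Mean collision energy loss of Eq. 6.28 in MeV/(g cm^-2).
    The density correction delta is set to zero here (valid near the minimum)."""
    me_c2_eV = 0.511e6                       # electron rest energy in eV
    gamma2 = 1.0 + betagamma**2              # gamma^2 = 1 + (beta*gamma)^2
    beta2 = betagamma**2 / gamma2
    log_arg = 2.0 * me_c2_eV * betagamma**2 / I_eV   # 2 m_e c^2 gamma^2 beta^2 / I
    return K * (Z / A) / beta2 * (math.log(log_arg) - beta2 - delta / 2.0)

# muon near the ionization minimum (beta*gamma ~ 3) in iron
print(f"dE/dx ~ {dEdx(3.0, 26, 55.85, 286.0):.2f} MeV/(g cm^-2)")
```

The result, ~1.5 MeV/(g cm−2), is close to the tabulated minimum-ionizing value for iron, as expected for γβ ~ 3.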

At a given energy, the energy loss −dE/dx in a slab of material has an asymmetric distribution around its most probable value, usually referred to as the “Landau-Vavilov” distribution [44, 45]. The muon energy loss in dense materials has been extensively studied [46]. Both the absolute energy loss and the straggling function agree with measurements at the percent level [47] up to several hundred GeV.

For muon energies above ~100 GeV, bremsstrahlung, pair production and deep inelastic scattering start to contribute, generating tails in the energy distribution (‘catastrophic energy loss’) [48, 49]. As an illustration, the average contribution of these processes for muons in iron up to 100 TeV is shown in Fig. 6.27. Very roughly speaking, a muon behaves as an electron with a critical energy scaled by ≈ (m_μ/m_e)². However, unlike for electrons or positrons, pair production is larger than bremsstrahlung.

Fig. 6.27

Contributions to the energy loss of muons in iron, as a function of the muon incident energy. The total energy loss in hydrogen gas and uranium is also shown

In setups where muons traverse a calorimeter before entering the muon spectrometer, a correction for the energy lost in the calorimeter can be applied to the measured muon momenta. For muons above ~10 GeV/c there is a good correlation between the total energy loss in the calorimeter and the energy loss recorded in the active medium.

This is valuable, particularly in the case of ‘catastrophic’ muon energy loss. An event-by-event correction for the muon energy loss is therefore useful in the hundred-GeV momentum range for muon spectrometers behind the calorimeter with a few-percent momentum resolution [39].

Energy calibration and monitoring are frequently and conveniently done with muons. Exposing a calorimeter to a beam of electrons with well-known energy sets the ‘electron-energy scale’.

In sampling calorimeters muons depositing a given energy produce in general more signal than electrons having deposited the same energy: e/μ < 1. While establishing an absolute energy scale with muons requires very careful MC cross-checks, muons are very convenient for monitoring the calorimeter response as a function of time during data taking and as an intercalibration tool between different parts of a calorimeter set-up [50]. The use of muons makes it possible to transfer the absolute energy calibration established in a test beam to the experimental facility and to follow the energy calibration in situ using muons from physics channels. However, given the large dynamic range of energy measurements in many experiments, e.g. at the LHC, and the smallness of the muon signal, complementary calibration methods are necessary to achieve the required accuracy, see Sect. 6.3.6.

6.2.9 Monte Carlo Simulation of Calorimeter Response

Modern calorimetry would not have been possible without extensive shower simulations.

The first significant use of such techniques aimed to understand electromagnetic calorimeters. For example, electromagnetic codes were used in the optimization of NaI detectors in the pioneering work of Hofstadter, Hughes and collaborators [51]. One code, EGS4, has become the de facto standard for electromagnetic shower simulation [17]. Early hadronic cascade simulations were motivated by experimental work in cosmic-ray physics [52] and sampling calorimetry [53]. However, it was the codes developed by the Oak Ridge group [54], with their extensive modelling of nuclear physics, neutron transport, spallation and fission, that are indissociable from the development of modern hadron calorimetry [35].

Modern, high precision calorimetry and related applications have imposed a new level of stringent quality requirements on simulation:

  • in many applications, electromagnetic effects have to be understood at the 0.1% level, hadronic effects at the 1% level;

  • ‘unorthodox’ calorimeter geometries (Sect. 6.7) have to be optimized with simulation tools providing sophisticated interfaces to shower codes;

  • in modern calorimeter facilities the energy deposits are usually distributed over several systems of different geometries and materials. Simulation codes are pushed to their limits in translating the recorded signal into a 1% precision energy measurement;

  • at the LHC, and in particular in the study of the UHE Cosmic Ray Frontier, simulation codes are used to extrapolate the measured detector response by one to eight (!) orders of magnitude;

  • particle physics MC codes are applied to areas outside particle physics, such as radiation shielding, nuclear waste incineration and medical radiation treatment.

First, we will describe the general approach to these simulation issues before addressing some specific points. Regular conferences on this subject provide a good overview [55].

Electromagnetic Shower Simulation

For decades EGS4 [17] has been the standard for simulating electromagnetic phenomena. A modern extended incarnation has been developed by the GEANT4 Collaboration [18]. It includes the full panoply of radiation effects, including photons from scintillation, Cherenkov and transition radiation, up to electromagnetic phenomena relevant at 10 PeV.

Hadronic Shower Simulation

The simulation must cover the physics and the corresponding cross-sections from thermal energies (neutrons) up to (in principle) the 10²⁰ eV frontier, requiring many different physics models. Program suites (‘toolkits’) such as GEANT4 [18] provide the user with choices of physics interaction models to select the physics interactions and particle types appropriate to a given experimental situation.

At high energies (~15 GeV to ~100 TeV)—in addition to measured cross sections—models describing the hadron physics are used, such as the ‘Quark Gluon String’ model [18], Fritiof or Dual Parton Models [56]. Such models are coupled to descriptions of the fragmentation and de-excitation of the damaged nucleus. At the highest energies other models, such as ‘relativistic Quark Molecular Dynamics’ models are being developed [57].

In the intermediate energy range (<10 GeV) Bertini-style cascade models [58] are employed to describe the intra-nuclear cascade phenomena. These models use measured cross-sections and angular distributions.

For the very low energy (<20 MeV) domain neutron transport codes have been developed, using experimental cross-sections.

The different energy regimes covered by these models are connected with parametric descriptions, in which cross-sections are parameterized and extrapolated over the full range of hadronic shower energies. Well-known examples are GHEISHA [59] and, to a certain extent, GCALOR [60].

Applications: Illustrative Examples

We present comparisons of simulation with experiment to illustrate the quality of shower modelling.

(i) Energy Calibration and Reconstruction

Many physics programmes at modern colliders (HERA, Fermilab, LHC) require energy measurements at the limit of the instrumental resolution and with ~1% accuracy. The calorimeters are frequently composed of different electromagnetic and hadronic instruments, made from different materials and sampling topologies.

Establishing the absolute energy scale in the reconstruction of particles (and jets) requires a major effort to understand the detector from an instrumental and technical point of view, and a tight interplay between measurements and simulations. Energy calibration and reconstruction proceeds in several steps. Customarily, a calorimeter (segment) is exposed to electrons, setting the ‘electromagnetic’ energy scale. For hadrons a ‘weighting’ has to be applied to each cell, such that

$$ {E}_{\mathrm{i}}\left(\mathrm{true}\right)={w}_{\mathrm{i}}{E}_{\mathrm{i}}\left(\mathrm{reconstructed}\right)\ \mathrm{with}\ {w}_{\mathrm{i}}=\left\langle {E}_{\mathrm{i}}\left(\mathrm{true}\right)/{E}_{\mathrm{i}}\left(\mathrm{reconstructed}\right)\right\rangle . $$

E_i(true) expresses the total energy deposited. This can be a rather large correction, particularly in non-compensating calorimeters. In a further step, details of the energy reconstruction algorithm (‘clustering’) are simulated to evaluate the energy outside the cluster, which is usually chosen smaller than the true shower extent. In practical calorimeters, non-sensitive regions (‘dead material’, DM) are unavoidable, frequently leading to sizeable corrections evaluated by MC.
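The cell-weighting step can be sketched as follows; the cell names, energies and MC sample below are invented for illustration. Weights are obtained as the average true-to-reconstructed ratio per cell over a (mock) MC sample and then applied to a reconstructed event.

```python
# Toy illustration of per-cell calibration weights
# w_i = <E_i(true)/E_i(reconstructed)>, averaged over a MC sample.

mc_sample = [  # (cell id, E_true deposited [GeV], E_reconstructed [GeV])
    ("em1", 10.0, 9.6), ("em1", 20.0, 19.4),    # EM cell: small correction
    ("had1", 10.0, 7.4), ("had1", 20.0, 14.6),  # hadronic cell: larger one
]

# average the true/reconstructed ratio per cell over the MC events
sums, counts = {}, {}
for cell, e_true, e_rec in mc_sample:
    sums[cell] = sums.get(cell, 0.0) + e_true / e_rec
    counts[cell] = counts.get(cell, 0) + 1
weights = {cell: sums[cell] / counts[cell] for cell in sums}

# apply: E_i(true) ~ w_i * E_i(reconstructed)
event = {"em1": 15.2, "had1": 11.1}  # reconstructed energies [GeV]
e_calibrated = sum(weights[c] * e for c, e in event.items())
print({c: round(w, 3) for c, w in weights.items()}, round(e_calibrated, 2))
```

The larger weight of the hadronic cell mirrors the statement above that the correction is particularly large in non-compensating calorimeters.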

Establishing the energy scale for jets is the most complex calibration task. Jets are calibrated with a series of simulation-based corrections and in situ techniques. In situ techniques exploit the transverse momentum balance between a jet and a reference object such as a photon, Z boson or multijet system for jets with 20 < pT < 2000 GeV, using both data and simulation. In this way an uncertainty in the jet energy scale approaching 1% is obtained for high-pT jets with 100 < pT < 500 GeV/c. An uncertainty of about 4.5% is found for low-pT jets (pT < 20 GeV/c), dominated by uncertainties in the corrections for multiple proton-proton interactions (pile-up), see Fig. 6.28 [61].

Fig. 6.28

Combined uncertainty in the jet energy scale (JES) of fully calibrated jets as a function of jet pT in the central region of the ATLAS calorimeter system [61]

(ii) Particle Flow Analysis in Calorimeter Systems at Present and Future Colliders

An important recent development is an ambitious analysis strategy for reconstructing the jet energy in calorimeters, the “Particle Flow” concept. It aims at identifying and reconstructing individually each particle arising from the collision (proton-proton, electron-positron, …) by combining the information from all the subdetectors. The resulting particle-flow event reconstruction leads to an improved performance for the reconstruction of jets and “Missing Transverse Energy” (MET). The algorithm also improves the identification of electrons, muons, and taus. While the concept was first applied in physics analysis at the LEP collider, it is presently heavily used by the LHC collaborations [62, 63]. The improvement can be dramatic, as shown in Fig. 6.29.

Fig. 6.29

Jet resolution for di-jet events in the CMS detector, reconstructed with particle flow (red triangles) and with the calorimeters alone (blue open squares) [63]

The benchmark performance for calorimeter systems (Sect. 6.7.6.2) at future colliders (International Linear Collider, ILC; Future Circular Collider, FCC) aims at a jet energy resolution of σ(jet) ~ 0.3/√E(GeV). This is motivated by the need to measure, e.g., W- and Z-decays into two jets with a mass resolution approaching their natural width, i.e. with ~2 GeV (FWHM). Given that these jets are composed on average of ~60% hadrons and ~30% photons (the rest being shared by slow neutrons, neutrinos, muons, …), a rather conventional resolution of σ(em) ~ 0.15/√E(GeV) and σ(hadronic) ~ 0.5/√E(GeV) would suffice, provided the individual energy deposits can be correctly associated with the individual particles measured in the charged-particle spectrometer. This places a new level of performance requirements on the calorimetry in terms of granularity, but also on the correct association of photonic and hadronic energy. Modelling has shown that this performance can be achieved in principle using the concept of ‘Particle Flow Analysis’ [64, 65].
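A back-of-the-envelope check of this argument: assuming perfect association of deposits to particles, with the charged hadrons taken from the tracker (whose resolution is neglected here), the photons from an EM calorimeter with σ(em) ~ 0.15/√E and the remaining neutral hadrons from a hadron calorimeter with σ(hadronic) ~ 0.5/√E, the quadrature sum stays below the 0.3/√E goal. The jet composition fractions are the averages quoted above.

```python
import math

def jet_sigma_particle_flow(E_jet, f_chg_had=0.6, f_gamma=0.3,
                            a_em=0.15, a_had=0.5):
    """Idealized particle-flow jet resolution [GeV], stochastic terms only:
    charged hadrons (~60%) -> tracker, resolution neglected;
    photons (~30%)         -> EM calorimeter, a_em/sqrt(E);
    rest (~10%, neutral)   -> hadron calorimeter, a_had/sqrt(E)."""
    e_gamma = f_gamma * E_jet
    e_neutral_had = (1.0 - f_chg_had - f_gamma) * E_jet
    sigma_gamma = a_em * math.sqrt(e_gamma)
    sigma_nh = a_had * math.sqrt(e_neutral_had)
    return math.hypot(sigma_gamma, sigma_nh)  # quadrature sum

E = 100.0  # GeV
sigma = jet_sigma_particle_flow(E)
print(f"sigma(jet)/E ~ {sigma / E:.1%}, i.e. ~{sigma / math.sqrt(E):.2f}/sqrt(E)")
```

In this idealized limit the resolution is dominated by the small neutral-hadron fraction measured in the hadron calorimeter, which is why granularity and correct deposit-to-particle association, rather than raw calorimeter resolution, become the critical requirements.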

(iii) Ultra-High Energy Modelling

A particularly challenging application of these Monte Carlo techniques is extrapolation beyond present accelerator energies. The use of the Earth’s atmosphere as a hadron calorimeter allows cosmic hadrons and nuclei up to and beyond 10²⁰ eV to be probed. This requires ‘dead-reckoning’ of the detector response based on Monte Carlo techniques. Considerable faith in the extrapolation of the simulation models is needed in establishing the absolute energy scale. The estimate of the primary energy is based on measuring the shower shape: knowledge of F_em, the nucleon–nucleon cross-section, particle multiplicities, transverse momentum distributions, etc., all contribute to the estimate of the primary energy.

(iv) Low Energy Performance and Radiation Background

In many applications, e.g. dosimetry, careful modelling of the physics down to the MeV scale is needed. Certain codes [66] have been carefully benchmarked, showing agreement to better than 20%, which is remarkable given that very low-energy modelling of nuclear physics processes is involved.

Faithful modelling is also necessary to estimate the radiation levels in the LHC experimental caverns. Such modelling [67], based on the FLUKA code, was the basis for a number of design criteria and choices for the ATLAS and CMS experiments.

(v) Medical Applications

In cancer treatment with particle beams the tumour is exposed to proton or light-ion beams, such as He or ¹²C, with energies of a few hundred MeV/nucleon. The energy deposition of the beam inside the human body (here the 1/β² part of dE/dx is relevant) can be monitored by positron emission tomography (PET), the β⁺ emitters being produced through nuclear fragmentation reactions of the beam ions with the tissue nuclei.

Both the patient treatment plan and the interpretation of these images are evaluated with the same MC programs as used in particle physics. More generally, the improvement in radiation treatments achieved with proper (particle-physics quality) simulation is very significant, a very important legacy of particle physics to society [68].

We conclude that

  • modern calorimetry owes much to Monte Carlo modelling;

  • as always, predictions have to be taken with circumspection, in particular the extrapolation to performance and energy regimes inaccessible to experimental checks. Caveat emptor.

6.3 Readout Methods in Calorimeters

6.3.1 Scintillation Light Collection and Conversion

Scintillator materials used in calorimetry are inorganic crystals, organic compounds and noble liquids. Dense inorganic crystals represent one of the best techniques for homogeneous electromagnetic calorimetry. These crystals are insulators with a normally empty conduction band. When energy is deposited in the crystal, an electron can jump into the conduction band and cascade to the valence band via intermediate acceptor levels, part of the energy being emitted as light. The emitted light needs to be in the wavelength range where good photodetectors are available, and the crystal must be transparent to this wavelength range. The lifetime of the light emission depends on the concentration of acceptor levels and on temperature. In general, different decay times are present in the luminescence spectrum of a given crystal (see also Chap. 3).

A list of commonly used scintillators, with some of their characteristic properties, is given in Table 6.2. Crystals for homogeneous calorimetry are usually shaped as bars, typically of ~25 X₀ length and ~1 × 1 ρ_M transverse size. In colliding-beam detectors, the cylindrical geometry leads in general to the use of tapered bars, with the incident radiation impinging on the smaller face. The growth of good-quality ingots, followed by sawing and polishing to the needed size and surface quality, requires specialized tooling available in industry. Careful packaging of the crystal in appropriate material (Tyvek or equivalent) and sometimes lateral masking are needed to minimize the response dependence on position, transversally and longitudinally. The light detector (photomultiplier, photodiode, …) is optically coupled to the back face of the crystal. The overall light yield, including the area and quantum efficiency of the transducer, influences the achievable energy resolution. A light yield of 1 photoelectron per MeV implies that the energy resolution cannot be better than σ(E)/E = 3%/√E(GeV). The number of emitted photons per MeV is in general much larger, for example 4·10⁴ in NaI doped with thallium, one of the best scintillating crystals in terms of light yield. PbWO₄ produces ~150 times less light than NaI, but is far superior in other aspects (density, radiation resistance). New (and expensive) materials, like LYSO (a lutetium compound), are being developed for applications requiring fast response and high light yield.
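The 3%/√E figure quoted above follows directly from Poisson statistics of the photoelectrons; a minimal sketch:

```python
import math

def photostat_term(light_yield_pe_per_MeV):
    """Stochastic resolution term from photoelectron statistics alone:
    sigma(E)/E = 1/sqrt(N_pe) with N_pe = yield * E.  Returns the
    coefficient a in sigma/E = a/sqrt(E[GeV])."""
    n_pe_per_GeV = light_yield_pe_per_MeV * 1000.0
    return 1.0 / math.sqrt(n_pe_per_GeV)

# 1 photoelectron per MeV -> ~3.2%/sqrt(E), the limit quoted in the text
print(f"{photostat_term(1.0):.3f} / sqrt(E[GeV])")
```

With the much larger yields of good crystals this term becomes negligible, and other contributions (light collection non-uniformity, calibration) dominate the resolution.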

Table 6.2 Properties of scintillating crystals applied in particle physics experiments
Fig. 6.30

Working principle of a photomultiplier. The electrode system is mounted in an evacuated glass tube

A photomultiplier is schematically sketched in Fig. 6.30. All elements are located in an evacuated glass envelope. At the photocathode an electron is extracted by the photoelectric effect. A voltage difference accelerates the electron towards the first dynode, out of which several electrons are extracted by secondary emission. This process is repeated over ~10 dynodes up to the anode, at the highest positive potential (~1000 to 2000 V). With a sufficiently large gain at the first dynode, the fluctuation of the number of electrons in the final charge pulse is dominated by the Poisson fluctuation of the number of photoelectrons. Amplification factors of several thousand are typical. A careful design of the high-voltage divider chain is mandatory to avoid non-linear effects. With recently developed “super bi-alkali” photocathodes (Cs-K) the quantum efficiency can exceed 40% at 400 nm wavelength. For short wavelengths the efficiency is determined by the transparency of the entrance window. Quartz, CaF₂ or even LiF windows are necessary when efficiency in the near UV is required.
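The role of the first dynode can be illustrated with the textbook cascade result for Poisson multiplication: the relative variance of the single-photoelectron response is Σ_k 1/(δ₁…δ_k), which is dominated by the 1/δ₁ term when the first-dynode gain δ₁ is large. The dynode gains below are illustrative, not taken from a specific tube.

```python
def single_pe_relative_spread(deltas):
    """Relative rms of the multiplied charge for a single photoelectron,
    assuming Poisson secondary emission at each dynode:
    relative variance = sum_k 1/(delta_1*...*delta_k)."""
    rel_var, prod = 0.0, 1.0
    for d in deltas:
        prod *= d
        rel_var += 1.0 / prod
    return rel_var ** 0.5

chain = [6.0] + [2.2] * 9  # higher-gain first dynode + 9 further stages
gain = 1.0
for d in chain:
    gain *= d
spread = single_pe_relative_spread(chain)
print(f"overall gain ~ {gain:.2e}, single-pe relative rms ~ {spread:.2f}")
```

In this example more than half of the single-photoelectron variance comes from the first dynode alone, which is why a high first-stage gain leaves the photoelectron Poisson statistics as the dominant fluctuation.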

Because of their sensitivity to external magnetic fields, their rather large size and their cost, photomultipliers are nowadays being replaced by devices with less internal gain, followed by a high-gain low-noise amplifier. Besides photo-triodes, the new devices are solid-state based, like photodiodes or Avalanche Photo-Diodes (APDs) [69]. Both offer good quantum efficiency, magnetic field insensitivity, moderate cost, small volume and, for APDs, a significant charge gain. The amplification is however accompanied by an “excess noise factor”, typically a factor 2 for a gain of ~50. This, together with the reduced size (and hence light collection) as compared to photocathodes, can affect the energy resolution. The light detection and electron multiplication take place (see Fig. 6.31) in a thin layer (<40 μm), which lowers the sensitivity of APDs to minimum ionizing particles traversing the detector, as compared to simpler photodiodes.
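The impact of the excess noise factor F on the photostatistics term can be sketched as σ/E = √(F/N_pe); the photoelectron count below is purely illustrative.

```python
import math

def resolution_with_enf(n_pe, excess_noise_factor=1.0):
    """Photostatistics resolution term including an excess noise factor F:
    sigma/E = sqrt(F / N_pe).  F ~ 2 for an APD at a gain of ~50 (text),
    F = 1 for an ideal noiseless multiplier."""
    return math.sqrt(excess_noise_factor / n_pe)

n_pe = 4000  # photoelectrons for some deposit (illustrative number)
r_ideal = resolution_with_enf(n_pe, 1.0)
r_apd = resolution_with_enf(n_pe, 2.0)
print(f"ideal (F=1): {r_ideal:.3%}   APD (F=2): {r_apd:.3%}")
```

An excess noise factor of 2 therefore worsens the photostatistics contribution by √2, equivalent to halving the effective photoelectron yield.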

Fig. 6.31

Schematic diagram showing the structure of an avalanche photo-diode (APD)

The concept of APDs was extended to “Silicon Photomultipliers” by dividing the surface exposed to photons into small pixels, in a number large enough that each of them receives at most one photon.

Operating the device in the Geiger mode, i.e. with a very large gain, and summing the current of a large number of pixels, one obtains effectively the equivalent of an analogue response to the number of incident photons, while each pixel operates in a binary mode.

Since the pioneering work [70], these devices have seen an extremely fast development [71]. A sketch of the layout of a SiPM is shown in Fig. 6.32.

Fig. 6.32

Schematic diagram showing the structure of a Silicon photomultiplier (SiPM)

Crystal calorimeters are the choice technology for precision electromagnetic calorimetry at medium energy machines like B-factories. CsI was used by Babar and Belle, and is used again for Belle II. The L3 experiment at LEP used BGO with success. However, the energy resolution reached for high energy electrons or photons (~50 GeV and above) was limited by the difficulty to calibrate a large system (constant term of the energy resolution, see Eq. (6.23), of about 1% for the L3 BGO system) and not by the intrinsic resolution of the BGO crystals.

CMS and ALICE (for a part of its angular coverage) at the LHC decided to use PbWO₄. The most challenging case is CMS, given the very large size of the EM calorimeter, and the high radiation levels at the high-luminosity collision points of the LHC, with nominally 500 fb⁻¹ of integrated luminosity at 14 TeV. More details are given in Sect. 6.7.3.

In some applications crystals are read out on both ends, providing longitudinal information. However, so far it has not been possible to split the crystals longitudinally into independent segments without degrading the performance, a limitation for particle identification (see Sect. 6.4.3).

Noble liquids are also good, fast scintillators. Table 6.3 gives the properties of liquid argon, krypton and xenon already used in several practical cases for their scintillation properties.

Table 6.3 Properties of noble liquids used in particle physics experiments

In liquid argon about 4·10⁴ photons are emitted per MeV deposited, a number very close to what is quoted for NaI. The light is however emitted in the far-ultraviolet range, which complicates the conversion to electrical signals. Recent work [72] has shown that the scintillation light emitted by helium in the extreme vacuum-ultraviolet range (~80 nm) can be used for particle detection, thanks to wavelength shifters (see below). The mechanism of scintillation in noble liquids involves the formation of excited diatomic molecules around the primary ions, which decay to free atoms by emitting radiation. In order to keep the emitted light associated with a well-defined region of space, thin reflecting boxes can be introduced in the liquid volume. At present, one of the largest detectors using light from noble liquids is the xenon calorimeter of the MEG experiment [73] (see also Sect. 6.7.1). As already mentioned in Sect. 6.2.3, the search for dark matter has triggered the development of several large-size experiments using liquid xenon. These experiments [74] exploit both the scintillation and the ionization signal of the sought-for nuclear recoils. Ionization electrons are preferentially transported to the surface of the liquid bath where, in a high electric field region, they are extracted with high efficiency [74] and accelerated in the gas phase, giving in turn rise to (delayed) light emission. One example is described in Sect. 6.7.2.

Future long baseline neutrino experiments of very large size, like the DUNE [75] project at Fermilab envision liquid argon detectors of several tens of kilotons. DUNE will exploit both the scintillation and the ionization signals. In one of the read-out options, called “single-phase”, the ionization signal is directly collected by a set of wires, each equipped with a readout chain, in order to have access to details of all secondary produced particles. The other option, “dual-phase”, is close to what is described above for dark matter searches.

Liquid scintillators have been used abundantly in neutrino experiments, either in totally active large-volume detectors, like KamLAND and SNO, or as large arrays of tubes filled with doped mineral oil.

The most recent example of the latter is NOvA [76] in which each tube is read out by means of a wavelength shifting fiber connected to a single pixel of an APD. The chapter on neutrino detectors provides further details.

Plastic scintillator plates, such as polymethylmethacrylate (PMMA) doped with organic scintillator, have been used for electromagnetic and, even more extensively, for hadronic sampling calorimetry. The principal difficulty with this technology is the light extraction. Scintillator tiles of typically 10 cm × 10 cm size and 0.5 cm thickness would require light guides of typically 10 cm × 0.5 cm cross-section in order to extract the light while preserving the emission phase space (respecting Liouville’s theorem), a very difficult task in realistic detector layouts.

An elegant solution is the use of wavelength shifters [77, 78], in which, thanks to their isotropic emission, a constant fraction of the light is transported from the scintillating tile to a small rod, or even a plastic fibre, separated from the tile by an air gap. The principle is shown in Fig. 6.33. Many calorimeter facilities at colliders were built following this principle, see also Sect. 6.7.

Fig. 6.33

Wavelength shifter readout of a scintillator

In a further development, detectors capable of accommodating smaller transverse granularities (like 5 cm × 5 cm) were proposed, like the “Shashlik” concept in which readout fibres cross the scintillating tile and the passive converter perpendicularly to their faces [79]. Originally considered in CMS, this scheme was later chosen by the LHCb experiment at the LHC for its electromagnetic calorimeter. A sketch of the arrangement of absorbers, scintillating tiles and fibers is shown in Fig. 6.34.

Fig. 6.34

The “shashlik” concept as realized in the LHCb Electromagnetic calorimeter

Even more ambitious was the “Spaghetti” calorimeter [80, 81], in which each calorimeter cell (typically 1 × 1 ρ_M transverse size and 25 X₀ deep) is built out of scintillating fibres embedded in a lead matrix, oriented parallel to the long side of the block. The electromagnetic calorimeter of the KLOE [82] experiment at the DAFNE electron–positron collider in Frascati was built along these principles, although with a different geometry, and gave excellent results in the energy range of this machine.

6.3.2 Cherenkov Light Collection and Conversion

Although much less intense than scintillation light in good scintillators, Cherenkov radiation represents in some cases an interesting alternative. When a charged particle (electron or positron in the case of an electromagnetic shower) propagates in a transparent medium with a speed βc larger than the speed of light c/n in this medium, an electromagnetic wave forms along a cone of half-angle θ_c = arccos(1/(βn)) with respect to the incident particle direction, with a number N of emitted photons in the visible range (400 to 700 nm) per unit length:

$$ \mathrm{d}N/\mathrm{d}x=490\ {\sin}^2{\theta}_{\mathrm{c}}\ \left[{\mathrm{c}\mathrm{m}}^{-1}\right]. $$
(6.29)
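Equation (6.29) can be evaluated directly; for a relativistic particle in lead glass (index of refraction n ≈ 1.65, a typical value) it yields of order 300 visible photons per cm.

```python
def cherenkov_photons_per_cm(beta, n):
    """Eq. (6.29): visible-range (400-700 nm) Cherenkov photon yield,
    dN/dx = 490 * sin^2(theta_c) per cm, zero below threshold."""
    if beta * n <= 1.0:
        return 0.0                       # below Cherenkov threshold
    cos_tc = 1.0 / (beta * n)
    return 490.0 * (1.0 - cos_tc ** 2)   # sin^2 = 1 - cos^2

# relativistic particle (beta ~ 1) in lead glass, n ~ 1.65 (typical value)
print(f"{cherenkov_photons_per_cm(1.0, 1.65):.0f} photons/cm")
```

Compared with the ~10⁴ photons per MeV of a good scintillator, this small yield explains why photomultiplier readout is mandatory and why the resolution is photostatistics-limited, as discussed next.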

Lead glass, a dense material with a high index of refraction, has been used in several experiments (in particular OPAL [83] at LEP) with very similar geometries (tapered bars) as described above for scintillating crystals. The energy resolution is limited by the number of electrons and positrons in the shower above the Cherenkov threshold, resulting in a stochastic term σ(E)/E ≳ 5–6%/√E, comparable to very good sampling calorimeters. Given the small number of photons, readout with photomultipliers is mandatory. As for crystals, longitudinal segmentation is in general not feasible. In several cases, “preshowers” of a few X₀ depth, instrumented with another higher-granularity readout technique, have been used in front of lead-glass arrays in order to improve particle identification (see Sect. 6.4.3). Another limitation for large collider systems is the reduced response of lead glass to hadronic showers (a large fraction of the hadronic cascade is made of non-relativistic particles), inducing a performance limitation for hadronic calorimetry. However, the preponderance of Cherenkov-light production from electrons and positrons, i.e. the electromagnetic part of the hadronic shower, offers an interesting possibility. A hadronic sampling calorimeter instrumented with two sets of fibres—one set sensitive to Cherenkov light only, the other set consisting of scintillating fibres, sensitive to all charged particles—can measure separately the electromagnetic component of the hadronic shower. This possibility is being studied in the dual-readout “DREAM” project. Test beam results are reported in Ref. [84].

Exploiting only the Cherenkov component, a hadronic calorimeter made of quartz fibers (parallel to the beam axis) embedded in an iron matrix has been chosen for the very forward calorimeter of the CMS experiment (for the pseudorapidity region up to 5). This choice was motivated by the high radiation resistance of quartz fibers, well adapted to this harsh environment [85].

Energy measurement with Cherenkov light produced in water was used with great success in very large detectors for nucleon decay and solar neutrino experiments, like Superkamiokande [86]. For the required detector volume of 50,000 tons water, the Cherenkov light was read out using large photomultipliers. In Superkamiokande, 50% of the outer surface of the detection volume is covered by 50 cm diameter phototubes. Electrons of 10 MeV are reconstructed with an energy resolution of about 15%. Their position in the detector volume is reconstructed with an accuracy of 70 cm and their direction with an accuracy of ~25 degrees. The detector also provides some discrimination between electrons (showering) and muons (single Cherenkov cone).

6.3.3 From Ionization to Electrical Signal in Dense Materials

One major avenue for calorimetry instrumentation is the measurement of the ionization charge produced in dense, active materials. In the presence of an applied electric field the charges move, inducing a current in readout electrodes proportional to the liberated charge and hence to the energy deposited by the showering particle. Electric charges are much easier to transport and to collect compared to light, which is the basic, decisive advantage of this concept.

This technique was introduced in the early 1970s [87] using liquefied argon as the active material. It has matured into one of the most widely used methods of calorimetry instrumentation, in particular for sampling calorimeters. Noble-liquid ionization calorimeters offer a number of attractive advantages, especially for instruments in the difficult environment of colliders. They are characterized by intrinsic stability and excellent uniformity of response (the only amplification is in the electronics chain, which is fairly easy to calibrate), relative ease of high segmentation and reasonable cost.

Materials other than argon are suitable for this method of detection, in particular the heavier noble liquids (Kr, Xe). In liquid helium and liquid neon, electrons are trapped in nano-scale cavities and drift with characteristic speeds about a thousand times slower than electrons in other noble liquids. Solid neon was found to be usable at low rate [88]. Some saturated molecules like tetramethylpentane (TMP), which is a liquid at room temperature, have also been tried. The purity at the ppb level required to avoid electron trapping has limited their use compared to noble liquids, which however require cryogenic operation. The properties of noble liquids for ionization calorimetry are given in Table 6.3. Besides the values of dE/dx and X₀ specific to the material, important parameters are the mean energy needed to create an electron-ion pair, the electron drift speed as a function of the electric field, and the dielectric constant, which affects the capacitance of a readout cell. Since the ions have a much smaller drift velocity than the electrons, a track crossing a gap (and depositing charge uniformly) will give rise to a triangular current (see Fig. 6.35) given by Eq. (6.30), where +Q₀ and −Q₀ are the liberated charges, d the gap width, and v the drift velocity of the electrons. The resulting current is

$$ I(t)= Qv/d $$
(6.30)
Fig. 6.35

Current induced by charges drifting in the sensitive gap of an ionization calorimeter. Left: charges drifting in the gap; right: current from drifting charges (triangle), and after CR-RC² shaping. The dots every 25 ns represent the times at which the signal is sampled (40 MHz sampling)

with \( Q=Q_0(1-vt/d)\). This formula is easily derived by remembering that a point charge q at a distance x from one of the parallel planar electrodes defining the gap of width d induces a charge –q(d − x)/d on this electrode, and –qx/d on the other one.

Depending on the rate of particles hitting a given cell, the readout can be an integrated-charge readout (this charge is equal to Q₀/2 for uniform charge deposition in the gap) or a current readout. In the first case, the response is rather slow (~400 ns for a 2 mm gap in LAr). In the latter (Fig. 6.35) the response can be much faster (~40 ns rise time with suitable CR-RC² electronics filtering), but the signal-to-noise ratio is worse, given that less “equivalent” charge is sampled and the bandwidth of the electronics needs to be larger. At high speed (current readout) the limitation comes from the capacitance and inductance of the elementary readout cell, which must be kept appropriately small.
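The triangular current of Eq. (6.30) and the Q₀/2 integrated charge can be checked numerically; the drift speed used below (~5 mm/μs, a typical order of magnitude for electrons in LAr at ~1 kV/mm) reproduces the ~400 ns drift time quoted for a 2 mm gap.

```python
def ionization_current(t_ns, q0=1.0, gap_mm=2.0, v_mm_per_ns=0.005):
    """Triangular current of Eq. (6.30): I(t) = Q(t)*v/d with
    Q(t) = Q0*(1 - v*t/d), for uniform ionization in the gap
    (arbitrary charge units, time in ns)."""
    t_drift = gap_mm / v_mm_per_ns          # total electron drift time [ns]
    if t_ns < 0.0 or t_ns >= t_drift:
        return 0.0
    return q0 * (1.0 - t_ns / t_drift) * v_mm_per_ns / gap_mm

t_d = 2.0 / 0.005                           # ~400 ns for a 2 mm gap (text)
# integrated charge = area of the triangle = I(0)*t_d/2 = Q0/2
q_int = 0.5 * ionization_current(0.0) * t_d
print(f"drift time {t_d:.0f} ns, integrated charge {q_int:.2f} * Q0")
```

Fast shaping samples only the initial, nearly flat part of this triangle, which is why the current-readout mode trades collected charge for speed.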

For LHC applications the optimization for high rates requires current readout with fast shaping, together with high granularity to limit pile-up of showers from consecutive events. While the electronics noise decreases when the electronics response becomes slower, the pile-up noise generated by low-energy particles from consecutive events increases. The shaping time is optimal when the two contributions are equal (see Fig. 6.36). One of the most ambitious realizations is the electromagnetic calorimeter of the ATLAS experiment at the LHC, which uses an ‘accordion’ geometry [89] to achieve the LHC performance specifications. This geometry provides full azimuthal symmetry, without “cracks” between adjacent modules. The geometry, which includes three samplings in depth, is shown in Fig. 6.37. More details about the ATLAS calorimeter are given in Sect. 6.7.4.
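The optimum of Fig. 6.36 can be illustrated with a toy model. The scalings below (series electronics noise falling as 1/√τ, pile-up noise growing as √τ) and the coefficients are assumptions chosen only to show that the quadratic sum is minimal where the two contributions cross:

```python
import math

# Illustrative coefficients, arbitrary units (assumed, not measured values)
A = 100.0   # electronics (series) noise: sigma_e = A / sqrt(tau)
B = 4.0     # pile-up noise:              sigma_p = B * sqrt(tau)

def total_noise(tau):
    """Quadratic sum of the two noise contributions at shaping time tau."""
    sigma_e = A / math.sqrt(tau)
    sigma_p = B * math.sqrt(tau)
    return math.hypot(sigma_e, sigma_p)

# scan shaping times and find the numerical optimum
taus = [0.1 * t for t in range(1, 2000)]
tau_best = min(taus, key=total_noise)

# analytic optimum: A**2/tau = B**2*tau  ->  tau = A/B, where the terms are equal
print(tau_best, A / B)
```

At the minimum the two terms are equal, which is exactly the criterion quoted in the text.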

Fig. 6.36

Optimization of shaping time as a function of preamplifier noise and pile-up noise

Fig. 6.37

Conceptual view of the ‘accordion’ geometry

The NA48 collaboration at CERN developed a homogeneous noble-liquid ionization calorimeter [90]. It had a cross-section of 2.5 m × 2.5 m and was optimized for the study of neutral decays of high-energy neutral kaons. Liquid krypton was chosen as a compromise between a short radiation length (LXe would be preferable) and acceptable cost (the radiation length of argon is too large to fit a calorimeter able to contain high-energy showers in an acceptable longitudinal space). Readout cells were defined by thin copper-beryllium ribbons stretched along the beam direction. The width of the ribbons (2 cm) and the gaps (double gap of 2 × 1 cm) defined readout cells of 2 cm × 2 cm, smaller than the Molière radius of krypton. In order to smooth the sampling of the shower, the ribbons were given a zigzag shape in depth by passing them through staggered glass-epoxy frames. The preamplifiers, connected to each signal ribbon through a blocking capacitor, were located in the liquid for best performance. This calorimeter operated at a high voltage of 3 kV (0.3 kV/mm electric field) in a stable way during several years, with a performance characterized by a stochastic term of 3.5%/√E, a signal peaking time of 80 ns, a noise per cell of 9 MeV (about 100 cells are needed to reconstruct an electromagnetic shower with high accuracy), a linearity better than one part in a thousand between 10 and 90 GeV, and a uniformity of response of 0.5%. Liquid krypton is also used in the calorimeter of the KEDR detector at the VEPP-4M collider at Novosibirsk [91].

Homogeneous noble-liquid calorimeters with very high granularity readout can offer very interesting imaging and energy-measurement properties. One concept, inspired by gaseous tracking chambers (TPCs), was pioneered by the ICARUS collaboration [92, 93]. A more recent example is MicroBooNE at Fermilab [94]. Detectors of this type with long drift distances (1 m or more) find their application in low-rate experiments, such as neutrino experiments. The DUNE project, already mentioned, combines the readout of scintillation light and ionization.

A potentially attractive alternative to noble liquids is the use of silicon detectors. However, due to the high cost of silicon diode sensors, the silicon calorimeters operated so far have been restricted to places where the lack of space and the limited volume made the use of this technology mandatory. An example is SiCal [95], the luminosity calorimeter of the Aleph experiment at LEP. It consisted of a stack of 12 layers of silicon sensors interleaved with tungsten absorber plates, for a total thickness of ~24 X0 in a longitudinal extent of only 150 mm. High-resistivity n-type Si (7 kΩcm, 300 μm thickness) was used for the 1.3 m² readout area, divided into 12,228 channels. The primary purpose of the detector was an absolute measurement of the luminosity using Bhabha scattering. The precision of the reconstructed shower positions (see Sect. 6.4.1) and the precision of the detector acceptance and alignment were essential for this measurement.

For the High-Luminosity LHC phase (HL-LHC) the CMS collaboration is embarking on an extremely ambitious replacement of the electromagnetic part of its end-cap calorimeters: sampling calorimeters with Si-diode readout are being developed. The total Si readout area will be 600 m², with a total of 6 million readout and 1 million trigger channels. Remarkably, intensive R&D has demonstrated that the Si detectors will withstand the radiation load [96]. This approach will be taken one step further for detector facilities at future colliders, such as an e+e− linear collider with centre-of-mass energy up to several hundred GeV. Electromagnetic and hadronic calorimeters with extreme granularity and up to 100 million channels are being considered [97]. For such devices silicon sensors are one technology of choice; the cost of this option may be an obstacle, to be weighed against the potential performance advantages (see Sect. 6.5). In the forward direction, where the level of electromagnetic radiation from the beams is expected to be high, more radiation-resistant sensors, like diamond, are being considered [98].

6.3.4 Gas Detectors

Charge collection in gases, usually followed by some degree of internal amplification, forms the basis of another important category of ionization sampling calorimetry. This method lends itself naturally to highly segmented construction and has profited from the diversified developments of gaseous position detectors (see Chap. 4). The relatively low costs of gaseous detectors favours their use in large area applications such as calorimeters for neutrino physics.

While gaseous ionization calorimetry offers several of the advantages found in ionization calorimetry with dense active materials, the low density of the gaseous readout planes—even when compensated by internal charge amplification—limits the performance of such devices [29]. The low density has several disadvantages: Landau fluctuations of the energy deposit in the active gaseous layers can be comparable to the mean deposit and contribute to fluctuations at levels similar to sampling fluctuations; low-energy shower electrons may multiple-scatter into the readout planes, where they may travel distances large compared to the gap thickness of the active layer, resulting in path-length fluctuations. These effects are relatively unimportant in dense materials, but may reach the level of Landau fluctuations in gaseous readout. Soft particles in the shower will spiral in strong magnetic fields, further increasing these path-length fluctuations. The absolute level of gas amplification depends on external operating conditions (pressure, temperature, gas composition) and is therefore difficult to control precisely. Variations of the gas amplification also contribute to worsening the resolution.

An illustration is the electromagnetic calorimeter of the Aleph experiment at LEP [99]. The barrel part of the calorimeter consisted of 12 identical modules surrounding the central tracking system (a Time Projection Chamber), immersed in a solenoidal magnetic field of 1.5 T. The modules had 45 lead/wire-chamber layers for a total of 22 X0. The cathodes of the readout chambers were segmented into pads of ~30 × 30 mm, providing energy and position information for each shower. The calorimeter was operated with a xenon-CO2 mixture to increase the density of the active medium, thus reducing path-length fluctuations. The pads of each layer were connected to the module edges, where they were grouped into towers pointing to the vertex. The towers were segmented in depth into three sections of 4, 9 and 9 X0, respectively. The connections of the individual pads to the module edges resulted in a large inductance, which limited the rise time of the readout signals (in the μs range). This was acceptable at LEP given the low event rates. This calorimeter, segmented into 74,000 towers, had an energy resolution of σ(E)/E = 0.18/√E ⊕ 1.9%, with E expressed in GeV (due to the internal amplification, the electronics noise term was negligible).

One of the weak points of this technique is the non-linearity of the response. Test-beam studies showed that the energy Eraw recorded for electromagnetic showers needed to be corrected by:

$$ {E}_{\mathrm{corr}}={E}_{\mathrm{raw}}\left(1+0.00078\ {E}_{\mathrm{raw}}\left(\mathrm{GeV}\right)\right), $$

implying a 7.8% correction at 100 GeV. Such non-linearities affect in particular high-energy jets, in which several showers may be superimposed, thus affecting the result in a way that is difficult to correct.
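The correction can be applied directly; the coefficient below is the one quoted in the text:

```python
def correct_energy(e_raw_gev):
    """Aleph ECAL test-beam non-linearity correction for electromagnetic showers."""
    return e_raw_gev * (1.0 + 0.00078 * e_raw_gev)

# a 100 GeV raw measurement is shifted up by 0.00078 * 100 = 7.8%
print(correct_energy(100.0))   # 107.8
```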

While this technique was still adequate at LEP, gas calorimeters were not considered for the LHC. With a very small cell size allowing a binary readout, they may find some application in hadronic calorimetry for the ILC (see for example [100]). An exception at the LHC concerns the very forward region, in which, due to the high density of energy deposits, gas ionization chambers (i.e. without any amplification) are used for specific purposes, including beam-loss monitoring and luminosity measurements [101].

6.3.5 High Rate Effects and Radiation Damage

High particle rates and associated backgrounds impact both the performance and the useful operating time of calorimeters. Radiation damage needs to be considered for the active readout material and the signal-processing electronics. Particle rates drive the choice of the calorimeter technology and construction.

Calorimeters with gaseous readout are particularly vulnerable to the high radiation environment due to the ageing effects associated with internal gas amplification, as discussed in Chap. 4.

Such radiation damage is essentially absent in noble liquids, making this technology one of the most intrinsically radiation-hard techniques used to date. However, care has to be taken to select adequately radiation-resistant components, including electronics, to limit deterioration of the performance (e.g. due to out-gassing). Particularly vulnerable are plastic insulators used in multilayer electrodes or in signal cables. Among the insulators highly resistant to radiation and suitable for calorimeter construction are polyimide (like Kapton) and PEEK. A fundamental limitation of noble-liquid calorimeters is space-charge effects due to the low drift speed of the positive ions (typically in the range of a few cm/s at a nominal electric field around 1 kV/mm). At high incident rates these ions form locally a charged domain which effectively shields the electrons in the gaps from the externally applied field, reducing the drift velocity and thus the signal. These space-charge effects grow with the square of the detector gap [102]. For this reason the forward calorimeters [103] of the ATLAS experiment feature gaps down to 250 μm.

Scintillators suffer from the formation of colour centres, which absorb part of the emitted light. The qualification of PbWO4 as a candidate for the CMS crystal calorimeter required a world-wide R&D programme to study the radiation damage effects and to develop methods of crystal growth improving the radiation hardness. Several impurities were identified which affect transparency in the useful wavelength range (above 350 nm). The best radiation resistance was obtained for crystals grown in Pb/W stoichiometric conditions, with the addition of a small quantity (~100 ppm) of Nb and Y [104]. These crystals showed a light loss of ~3% after an exposure to ~10 Gy in ~10 h, corresponding to the radiation dose accumulated in calorimeters at LHC nominal luminosity during a typical operating period of 20 h. These colour centres show annealing, with a recovery time of ~10 h (see also Sect. 3.1.1). After some years of data taking at the LHC, with instantaneous luminosities up to twice the nominal value (i.e. 2 × 10³⁴ cm⁻² s⁻¹) and close to 100 fb⁻¹ of data accumulated at 13 TeV in the centre of mass, there is enough experience to judge the crystal behaviour, conveniently followed using laser pulses sent in turn to each crystal. At central rapidities the light loss remains small, due to effective annealing between data-taking periods. Some permanent damage accumulates in the more forward region. This is illustrated in Fig. 6.38 [105].

Fig. 6.38

Relative response of the CMS crystal calorimeter to laser light as a function of time, during the initial 5 years of LHC data taking

Radiation effects on the light transducers (APDs) give an additional contribution to the electronics noise, still rather minor after the integrated luminosity quoted above.

As anticipated, the response of the ATLAS liquid-argon calorimeter remains stable during LHC running. Using the position of the Z0 mass peak reconstructed from electron-positron pairs, a variation of less than 0.05% is observed over the whole 8 TeV data-taking period of “run-I” in 2012. The peak position is also independent of the mean number μ of collisions per bunch crossing, i.e. there are no significant rate effects [106], at least up to μ of order 30.

6.3.6 Calibration and Monitoring of Calorimeter Response

Modern calorimetry operates frequently at the 1% accuracy level and requires therefore appropriate calibration methods. An extraordinary effort went into the development and deployment of adequate calibration techniques for the LHC calorimeters. In general, the following tasks have to be performed:

  • establishing the absolute scale of response of a calorimeter, averaged over an entire data set

  • assessing the uniformity and linearity of response

  • monitoring the response as a function of time, locally and globally, in order to correct for time-dependent effects, rate effects and ageing.

A few examples are discussed below to illustrate each of these tasks.

Energy Scale

  (i)

    Low energy domain: one large-scale example is the Superkamiokande experiment, dedicated to low-energy neutrino interactions. After a careful calibration of the gain of each of the phototubes, and an assessment of the water transparency (absorption length greater than 100 m), the absolute energy calibration was made using two radiation sources for cross-checks:

    • the beam of an electron Linac, operated in situ above the liquid volume, was sent through an evacuated beam pipe to several places in the detector volume, and the corresponding light signals were recorded. The Linac was operated at energies between 5 and 20 MeV; the absolute energy scale of the beam was known to better than 1%;

    • 16N radioactive nuclei were produced in situ from the 16O nuclei of the water volume using a neutron generator. The decays to 16O (beta emission with an endpoint energy of 4.3 MeV, in coincidence with a 6.13 MeV photon) were then recorded during a few lifetimes of 16N (7.13 s). The two methods agreed to better than 0.6% rms.

  (ii)

    Medium energy domain: one example is the Babar experiment at SLAC, which used a CsI crystal electromagnetic calorimeter and employed three calibration sources to cover the full energy range:

    • at low energy, the 6.13 MeV photons of 16N decays were used (see Superkamiokande above). At this energy, the resolution of the calorimeter was found to be (5.0 ± 0.8)%.

    • at high energy (~10 GeV) Bhabha scattering was used. With a luminosity of 3·10³³ cm⁻² s⁻¹ this reaction provided about 200 events per crystal in a 12 h run.

    • finally, the peak positions of known neutral resonances decaying into two photons were used for further checks. Figure 6.39 shows the recorded γγ invariant-mass spectrum. The π0 peak was observed at the nominal mass of 135.1 MeV with a width of 6.9 MeV.

    • Bhabha scattering was also used to calibrate the electromagnetic calorimeters of the four LEP experiments.

Fig. 6.39

Invariant mass of two photons in BB̄ events recorded in Babar. The position of the π0 peak provides the reference for the energy scale

  (iii)

    High-energy domain: at the Tevatron the energy scale of the electromagnetic calorimeters was set using the precisely known mass of the Z0 (MZ = 91,188 ± 2 MeV) decaying into e+e− pairs. The LHC experiments rely heavily on this approach given the high rate of Z0 production: about 10 million reconstructed Z0 → e+e− decays were used by ATLAS and CMS to establish the energy scale of their electromagnetic calorimeters for “run-I” at 7 and 8 TeV [106, 107]. The high-accuracy calibration of the electromagnetic calorimeter is essential for precision measurements (at the level of a few tens of MeV) of the W mass [108] in the eν decay mode, and for the measurement of the mass of the recently discovered Higgs boson, using decays into two photons and into four leptons [109].

Uniformity and Linearity

With large enough statistics, the Z0 mass constraint can be used to rescale in situ the response of an LHC calorimeter sector by sector and improve its uniformity of response. ATLAS uses this method after dividing the calorimeter into about 30 slices in η. The residual non-uniformity is about 0.8% in the barrel region, and somewhat worse (up to about 3% locally) in the end-cap region [106].

If the amount of material in the magnetic spectrometer in front of the calorimeter is low enough, the relation between the energy measured in the calorimeter and the momentum measured in the spectrometer (E/p constraint) can be used to assess both the uniformity and the linearity of response of the calorimeter. A correspondingly high-precision mapping of the magnetic field in the spectrometer is of course needed. This technique was used with success in the NA48 experiment with a large sample of Ke3 decays, demonstrating a linearity better than ±5·10⁻⁴ between 10 and 80 GeV, see Fig. 6.40. At the LHC the amount of material in the tracking volume is too large to exploit this technique fully. Instead, the large sample of J/ψ decays into electron-positron pairs allows the linearity of the electromagnetic calorimeters to be assessed between ~5 GeV (high-pT J/ψ are used in order to have a selective enough trigger) and ~50 GeV [107, 110]. An excellent linearity (±1·10⁻³ between 20 and 180 GeV) was also demonstrated, locally, for ATLAS lead-liquid argon calorimeter modules exposed to a specially equipped beam line at CERN, used as a precision spectrometer (see Sect. 6.7.4).

Fig. 6.40

Linearity of the NA48 homogeneous krypton calorimeter. The term added (45 MeV) corresponds to the average energy loss of electrons in the material preceding the sensitive volume

Monitoring of Short Term Effects

In some cases the calorimeter response is subject to time dependent effects, on a time scale too short to allow for correction with the recorded physics data itself. External monitoring is in this case necessary. An example is the laser monitoring of the CMS crystal calorimeter designed to follow the light absorption and recovery as a function of the instantaneous luminosity, as discussed above, and shown in Fig. 6.38.

In many cases, the detector response depends on operating conditions. As an example, the energy response of the ATLAS liquid argon calorimeter depends on the temperature of the liquid bath, with a coefficient of −2% per degree. Precision thermometers (Pt100 resistors) are used to follow the temperature with a precision better than 50 mK; given the observed temperature stability, no short-term correction was required. In all precision experiments, the gain of the front-end electronics is monitored by injecting precision electrical pulses, allowing subsequent corrections to be made with a precision of 10⁻³ or better.

6.4 Auxiliary Measurements

The analysis of shower properties provides important additional information on the position, angular direction and arrival time of the particles which initiated them, and shower-shape analysis gives insight into the nature of the particle. The efforts lavished by the LHC collaborations on electron and muon identification and spectroscopy are eloquent testimony.

6.4.1 Position and Angular Measurements

Conceptually, two methods can be used to obtain spatial information: transverse and longitudinal granularity of the instrument on a scale smaller than the characteristic shower sizes gives position and direction ‘by design’. Alternatively, if the readout volume is far larger than the shower dimensions, spatial information may be obtained by ‘triangulation’ using signals from several sensors distributed over the outer surface of the calorimeter volume.

The latter approach is used for calorimeters with large sensitive volume read out by photomultipliers distributed over their surface (e.g. Superkamiokande). The position is obtained by measuring the difference of light arrival times at the photomultipliers. With a timing resolution between 1 and 3 ns (depending on the pulse height) a position resolution of 70 cm is obtained for 10 MeV showers inside the sensitive volume.

In calorimeters with a more classical tower structure, the position of the incident particle is obtained by calculating the energy-weighted barycentre of the energy deposition, using a cluster of cells around the local maximum of energy deposition. Because of the finite size of the cells compared to the Molière radius, the barycentre position is biased towards the centre of the cell with the largest energy deposit. This systematic bias can be corrected by fitting empirical functions. After applying this correction, the position accuracy scales as 1/√E (shower fluctuations decrease with increasing energy), convoluted with a constant and a noise term.
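The bias of the energy-weighted barycentre can be illustrated with a one-dimensional toy model; the exponential lateral profile, its 10 mm width and the 20 mm cell size are simplifying assumptions (not actual shower shapes), chosen only to show that the reconstructed position is pulled towards the centre of the hottest cell:

```python
import math

def cell_energy(x0, lo, hi, lam):
    """Integral of an exponential lateral profile exp(-|x - x0|/lam) over [lo, hi]."""
    def F(x):  # continuous antiderivative, pieced together at the impact point x0
        if x < x0:
            return lam * math.exp((x - x0) / lam)
        return 2.0 * lam - lam * math.exp(-(x - x0) / lam)
    return F(hi) - F(lo)

def barycentre(x0, cell_width=20.0, n_cells=5, lam=10.0):
    """Energy-weighted barycentre over a strip of cells centred on x = 0 (mm)."""
    half = n_cells // 2
    num = den = 0.0
    for i in range(-half, half + 1):
        centre = i * cell_width
        e = cell_energy(x0, centre - cell_width / 2, centre + cell_width / 2, lam)
        num += e * centre
        den += e
    return num / den

x_true = 5.0                 # impact 5 mm off the central-cell centre
x_rec = barycentre(x_true)
print(x_rec)                 # biased towards the cell centre, i.e. below 5 mm
```

It is precisely this systematic under-estimate (an "S-shape" as a function of the true impact point) that the empirical correction functions mentioned above remove.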

In the homogeneous NA48 krypton calorimeter (2 × 2 cm cells) a position resolution σx,y = (4.2/√E(GeV) ⊕ 0.6) mm was measured, while the Babar CsI crystal calorimeter (4 × 4 cm crystals) gave slightly better results (3.2 mm/√E(GeV)). This difference is explained by the smaller Molière radius of CsI (3.8 cm, against 5.5 cm for liquid krypton) and the larger signal-to-noise ratio.

Segmented calorimeters, especially sampling calorimeters with ionization readout, allow lateral and longitudinal segmentation. With two or more samplings in depth, the direction of photon showers can then be estimated. As shown in Fig. 6.13, the shower is particularly narrow and already well developed after ~5 X0; it is thus advantageous to sample it with high granularity over this depth. In ATLAS, with a cell size of ~5 mm, the position of electron and photon showers in the first ~5 X0 is determined (above ~30 GeV) with an accuracy of about 300 μm, a critical asset for physics at the LHC. An important example is the discovery of the Higgs boson using the two-photon final state. The ATLAS electromagnetic calorimeter has three longitudinal samplings for measuring the direction of photons with an accuracy of about 50 mrad/√E. This angular resolution makes a negligible contribution to the Higgs mass resolution [111], even if the interaction point cannot be identified among the numerous primary collision vertices at high luminosity. Searches for new long-lived neutral particles decaying into photons (like gravitinos) also benefit from a high-resolution angular measurement.

6.4.2 Timing

The electromagnetic cascade develops on a sub-nanosecond timescale, allowing accurate timing measurements. This measurement allows the bunch crossing associated with a particular event to be identified at colliders. Timing may be used to infer the shower position (see Sect. 6.4.1), or to discriminate between relativistic electromagnetic particles and slow particles, such as antineutrons.

In a segmented calorimeter the timing resolution is limited by fluctuations of the light path reflecting on the edges of the tower, in the case of light readout, or by electrical signal reflections at the ends of the tower electrodes in the case of ionization readout. Electronics noise and shower fluctuations introduce a further limitation, dominant at low and medium energies. While the energy in a tower can be obtained by sampling the signal at its maximum, the optimal time measurement requires additional signal processing. Constant-fraction discriminators or digital treatment of multiple samplings of the signal (also beneficial for energy measurements) are frequently used. The shaping time of the electronics is a critical parameter in optimizing the timing accuracy.

As an example, the homogeneous NA48 krypton calorimeter showed a resolution of σ = 0.5 ns/√E up to ~100 GeV. With the light readout of the “spaghetti” lead-fibre sampling calorimeter of KLOE [82], a spectacular resolution of 0.054 ns/√E ⊕ 0.14 ns was obtained for photons between 50 and 300 MeV, allowing the shower barycentre along the spaghetti bar structure to be located with a precision of ~3 cm.

With a time resolution better than 100 ps, vertex localisation becomes possible with an accuracy of a few cm. At the LHC, the rms spread of collision vertices along the beam axis is about 5 cm, or ~180 ps. At high luminosity, when 50 to 200 collisions per bunch crossing are observed or envisaged (in the case of the HL-LHC), a significantly better resolution is required in order to help in the vertex selection. Upgrade projects at the HL-LHC are aiming at 30 ps, which seems the best possible value with the technology available or under development. One of the most advanced projects is the High Granularity Calorimeter (HGCal) replacement of the crystal system in the end-caps of CMS [96]. In the dense core of the early part of the shower, the signal-to-noise ratio and the intrinsic shower fluctuations are such that a ~20 ps resolution has been obtained with Si diodes. A similar precision could possibly be reached for non-showering particles (mips) by using “low-gain avalanche diodes”, as developed and tested by several groups [112, 113].
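The conversion between timing resolution and longitudinal vertex spread quoted above is simply the speed of light; a minimal sketch, using the numbers given in the text:

```python
C_CM_PER_NS = 29.9792458   # speed of light in cm/ns

def time_to_length_cm(sigma_t_ps):
    """Longitudinal distance corresponding to a timing spread (single measurement)."""
    return sigma_t_ps * 1e-3 * C_CM_PER_NS

print(time_to_length_cm(180))  # ~5.4 cm: the LHC beam-spot rms quoted above
print(time_to_length_cm(30))   # ~0.9 cm: the HL-LHC target resolution
```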

For hadronic showers, the time development of the energetic component of the cascade is of the order of tenths of nanoseconds, whereas thermal neutron capture may extend up to 1 μs. Nevertheless, typical time resolutions are found to be at the level of 1–2 ns/√E. As an example, with multiple digital sampling a time resolution of σ = 1.5 ns/√E is measured in the ATLAS Tile Calorimeter [114]. The different time evolution of electromagnetic and hadronic showers offers interesting possibilities for improved shower treatment, a feature likely to be exploited at future facilities (see Sect. 6.7.6.2).

6.4.3 Electron and Photon Identification

Apart from certain easily identified final states, like Bhabha scattering at e+e− machines, electrons and photons are in general buried inside the copious production of hadrons or jets. This is particularly true at hadron colliders, where the electron/hadron ratio ranges from 10⁻³ to 10⁻⁵. Since electrons and photons are often signatures of interesting physics, their identification at the trigger and analysis level is crucial. The basic criterion for electromagnetic shower identification is the transverse and longitudinal shower shape, restricting em showers to the electromagnetic compartment, as opposed to hadrons and jets depositing energy in the full calorimeter. This condition is easy to implement, already at the trigger level. Comparing shower-shape parameters in the electromagnetic compartment (width, length) to pre-programmed patterns provides the needed additional discrimination. Further discrimination is obtained by treating electrons and photons separately. An electron is tagged by a charged track pointing to the shower barycentre, with a momentum p compatible with the calorimetric energy E. The rejection power of this E/p test is however compromised when the electron starts to shower in the tracking device in front of the calorimeter, distorting the momentum measurement and possibly the calorimetric measurement. The remaining background is dominated by π0s overlapping with a charged pion. A photon is identified through the absence of a track pointing to its barycentre. At this stage the background for photons is often dominated by a π0 decaying into close-by photons. Very fine granularity in the first ~5 X0 is one approach to reject these π0s. As an illustrative figure, simulations made for the ATLAS experiment give a jet rejection factor of about 3000 (for a photon acceptance of 80%), when studying the γ + jet final state as a possible background to the γγ reaction, with photon energies around 50 GeV [115].
For certain physics reactions an ‘isolation criterion’ (no tracks above a certain pT and no calorimeter energy in a cone around the electromagnetic shower) can be applied to sharpen photon or electron identification. This criterion does not apply, e.g., to electrons resulting from heavy-quark decays inside a heavy-quark jet.

The Higgs boson discovery in the di-photon mode was a brilliant demonstration that the necessary jet rejection was achieved by both the ATLAS and CMS experiments. At an invariant mass around the Higgs boson mass of about 125 GeV, the continuous di-photon background consists of about 75% prompt di-photons, 20% photon-jet background and about 5% jet-jet background.

Samples of electron-positron pairs with an invariant mass around the Z0 mass allow a clean measurement of the electron sample purity, as well as of the selection efficiency, using the “tag and probe” method; see Refs. [107, 110] for details.

6.4.4 Muon Identification

The registration of muons in calorimeters contributes to their identification, provides an important means of cross-calibration and in situ monitoring of calorimeter cells, and is used to improve the quality of muon spectroscopy for instruments located behind the calorimeter.

Identification relies on the reconstruction of a penetrating charged track behind the hadron calorimeter, and possibly on the measurement of an energy deposit in the calorimeter cells along the path of the muon. Typical most probable energy deposits in an electromagnetic calorimeter (e.g. the CMS PbWO4 calorimeter or the ATLAS Accordion) are of order 300 MeV, whereas in the hadronic calorimeters several GeV are deposited. Such values are in general large compared to the electronic noise and to energy deposits from particle background. In the ATLAS hadron calorimeter, muons deposit more than ten times the energy of the particle background due to average inelastic collisions, even in the case of event pile-up at the highest collision rates.

Identification of and triggering on muons based on calorimeter information is an essential complement to the main muon trigger using tracking chambers, for physics reactions producing low-pT muons, e.g. tagging c- or b-jets, or detecting J/ψ or ϒ production.

Muons are abundantly produced in pp collisions (see Fig. 6.41). At low pT the rate is dominated by ‘punch-through’ particles, i.e. hadrons which have not interacted in the calorimeter. At high pT prompt muons (in particular from W decay) become dominant [116].

Fig. 6.41

Estimated muon spectra from various sources in the ATLAS Muon Spectrometer

6.5 Jets and Missing Energy

Jet spectroscopy and the related signature of ‘Missing Transverse Energy’ (MET) have contributed to major discoveries (gluon, W boson, top quark, …). At the LHC, MET is a key signature, e.g. for SUSY and/or dark-matter searches. Very high-performance jet spectroscopy is also one of the principal design considerations for future collider detectors. The resolution and linearity of the jet energy reconstruction is the principal performance criterion.

The measured jet energy has to be related to the corresponding parton (quark, gluon) energy in a sequence of complex steps. Initial- and final-state gluon radiation and parton fragmentation affect the observable particle composition and momenta in the jet, limiting the ‘intrinsic’ parton energy resolution to order σ(Eparton)/Eparton ≈ 0.5/√Eparton(GeV) [117]. Experimental factors (different response as a function of particle species and momentum, non-linearities, insensitive detector areas, signal noise, magnetic field) require large corrections. Finally, jets are not uniquely defined objects: different procedures are used to attribute a particle to a given jet. The choice of ‘jet algorithm’ influences the energy attributed to the jet, as do the additional particles in the ‘underlying’ event or particles from other collisions recorded with the jet (‘pile-up’) [117, 118]. Two classes of jet algorithms have been widely used. The cone algorithm draws a cone in η-φ space with radius R = √[(Δη)² + (Δφ)²] around a ‘seed’ (an energy deposit above a certain threshold), calculates the total transverse energy ET = ∑ET(particles) and the ET-weighted position, and iterates around the new cone position until a stable result is obtained. This algorithm is sensitive to soft-radiation effects; its well-defined jet boundary, however, eases corrections due to the underlying event produced in the hadron collision. The kT algorithm clusters particles according to their relative transverse momenta over the η-φ space, controlled by a size parameter D. This algorithm is theoretically attractive, because in principle infrared and collinear safe, but results in irregular jet boundaries and complicates the underlying-event corrections. Recent work [119] has given rise to an improved version, the anti-kT algorithm, which is safe against the infrared and collinear divergences of QCD and has regular boundaries. This algorithm is now the “default” of most LHC analyses using jets.
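The anti-kT distance measures can be made concrete with a deliberately simplified clustering sketch: O(N³), scalar pT-weighted recombination instead of the four-vector E-scheme, and none of the geometric optimizations of real implementations (for analyses, the FastJet package is the standard). The event at the end is an invented toy example:

```python
import math

def antikt_cluster(particles, R=0.4):
    """Greedy anti-kT clustering of (pt, eta, phi) tuples.

    Repeatedly find the smallest anti-kT distance; merge the pair, or promote
    a particle to a final jet if its beam distance 1/pt**2 is smallest."""
    parts = list(particles)
    jets = []
    while parts:
        best = None   # (distance, i, j); j is None for a beam distance
        for i, (pti, etai, phii) in enumerate(parts):
            diB = 1.0 / pti ** 2                      # anti-kT beam distance
            if best is None or diB < best[0]:
                best = (diB, i, None)
            for j in range(i + 1, len(parts)):
                ptj, etaj, phij = parts[j]
                dphi = abs(phii - phij)
                if dphi > math.pi:
                    dphi = 2 * math.pi - dphi
                dr2 = (etai - etaj) ** 2 + dphi ** 2
                dij = min(1.0 / pti ** 2, 1.0 / ptj ** 2) * dr2 / R ** 2
                if dij < best[0]:
                    best = (dij, i, j)
        _, i, j = best
        if j is None:                     # beam distance smallest: a final jet
            jets.append(parts.pop(i))
        else:                             # merge the pair (simplified pT-weighted
            pti, etai, phii = parts[i]    # recombination; naive phi average, OK
            ptj, etaj, phij = parts[j]    # for nearby particles away from wrap-around)
            pt = pti + ptj
            merged = (pt, (pti * etai + ptj * etaj) / pt,
                          (pti * phii + ptj * phij) / pt)
            for k in sorted((i, j), reverse=True):
                parts.pop(k)
            parts.append(merged)
    return jets

# toy event: a hard particle with a soft nearby fragment, plus a second hard particle
event = [(100.0, 0.0, 0.0), (5.0, 0.1, 0.1), (80.0, 1.5, 2.0)]
jets = antikt_cluster(event, R=0.4)
```

Because the pair distance is weighted by 1/pT² of the harder particle, the soft fragment clusters onto the 100 GeV particle first, giving the regular, hard-seeded jets that make anti-kT attractive.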
Remarkably, despite the complexity and magnitude of the experimental corrections, modern analyses (and Monte Carlos) achieve experimental jet resolutions comparable to (sometimes even better than) the resolution measured for single hadrons: σ(E_jet)/E ≈ α/√(∑E_particles(GeV)) ⊕ c, where ∑E_particles represents the energy of the particles associated with the jet, and where α is close to the stochastic and c close to the constant term measured for single hadrons [120, 121].

Within a jet, the electromagnetic part, coming mostly from π0 decays, is better reconstructed than the charged hadrons (mostly π± and K±) or the long-lived neutral hadrons (K0_L, n, Λ, …). While the latter can only be detected in the hadronic calorimeter, modern algorithms aim to “replace” the calorimeter energy of reconstructed charged hadrons by the momentum of the associated charged track, which is measured more precisely. While this individual replacement of particles requires complex algorithms, the procedure has been constantly improved, giving rise to “particle flow” algorithms (see Sect. 6.2.9), which are alternatives to jet reconstruction from calorimeters alone. CMS [122] in general prefers the more performant particle flow reconstruction over purely calorimetric reconstruction. Particle flow is also well suited to algorithms analyzing the substructure within jets, for example to distinguish jets originating from a high-p_T W or Z from quark or gluon jets [113].

The jet energy scale can be experimentally validated by studying specific final states in which the jet is balanced by a well-measured object, such as γ + jet(s) or Z + jet(s). Another powerful constraint is provided by W bosons decaying into two jets; a convenient source of identified W's is the ttbar final state, abundantly produced at the LHC. In the p_T range from 30 GeV to 300 GeV, the linearity of the jet energy scale over the whole angular range is better than 2% in both experiments [123, 124].

The measurement of MET is the only way to infer the production of neutrinos or weakly interacting SUSY-type particles. MET is defined as the negative vector sum of the momenta of all reconstructed objects (leptons, photons, jets) in an event, projected onto the plane transverse to the collision direction. In general, a “soft term” is added, corresponding to tracks or energy deposits not associated with the reconstructed objects. At high luminosity, in order to avoid unwanted contributions from pile-up, only tracks are considered for the soft term, because of their unambiguous association with the corresponding collision vertex. Empirically, a MET resolution of σ(E_missing)/E ≈ 0.7α/√(∑E_T(particles)(GeV)) is observed (at low luminosity) for soft collisions, with α the stochastic term of the single-hadron resolution. Calorimetric systems with an acceptance of at least |η| ~ 5 and very good ‘hermeticity’ are required to achieve this performance.
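The MET definition amounts to a simple transverse vector sum. A minimal sketch (the (p_T, φ) interface and the track-only soft term are illustrative, not any experiment's actual software interface):

```python
import math

def missing_et(objects, soft_tracks=()):
    """Negative vector sum of transverse momenta.

    `objects` and `soft_tracks` are sequences of (pt, phi) pairs;
    the soft term is built from tracks only, as done at high
    pile-up. Illustrative interface only.
    """
    everything = list(objects) + list(soft_tracks)
    px = sum(pt * math.cos(phi) for pt, phi in everything)
    py = sum(pt * math.sin(phi) for pt, phi in everything)
    met = math.hypot(px, py)          # magnitude of the negative vector sum
    phi_met = math.atan2(-py, -px)    # direction of the missing momentum
    return met, phi_met
```

A single undetected particle recoiling against one measured object of p_T = 50 GeV gives MET = 50 GeV pointing opposite to that object, while a perfectly balanced event gives MET ≈ 0.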

For events with high-p_T jets, at high luminosity and after adequate corrections for the contributions of the underlying event and of residual pile-up, the resolution increases only weakly with the number of collisions during the relevant bunch crossing, and is comparable to the level of the single hadronic particle resolution [125, 126].

6.6 Triggering with Calorimeters

The ability of calorimeters to provide rapid (order 100 ns) information on the energy distribution of the collision products is one of the major assets of this technique. In the very rich trigger ‘menu’ of the LHC experiments, all but muon physics is based on calorimetric triggers at the first trigger level (L1). The calorimeter trigger provides a selectivity of ~10⁻³ and reduces the 40 MHz bunch crossing rate accordingly.

A ‘sliding window’ technique is used to search for local energy topologies in the Δη × Δφ transverse energy distribution. The optimum window size depends on the particle type (photons, electrons or jets), on the threshold, on the depth of the calorimeter included in the sum, and possibly on luminosity. More complex topologies requiring isolated energy clusters (e.g. triggering on isolated photons or electrons) are also used. The L1 trigger is implemented with dedicated hardware processors. The trigger decision time or “latency” is fixed, typically a few μs. The information contained in all detectors is “pipelined” during this time, in such a way that no dead time is generated by the L1 trigger.

In subsequent stages, called the “high-level trigger” (HLT), selection criteria and energy thresholds are sharpened with software-based algorithms. The treatment during these phases is asynchronous, and many processors (up to thousands) work in parallel. One of the main challenges for the trigger systems is to allow recording W and Z leptonic decays (i.e. with transverse momentum thresholds below ~30 GeV) for calibration purposes and for electroweak physics, without saturating the bandwidth of the data acquisition systems. As luminosity increases, refinements are necessary to meet this requirement. MET and b-tagging are part of the overall menu of the HLT, in which of the order of one thousand different conditions are examined in parallel.
Triggers on hadronic decay modes of τ leptons, which rely on narrow hadronic jets in the calorimeters, are also implemented in the HLT; see Ref. [127] for an ATLAS example.
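The sliding-window search can be sketched as a scan over a trigger-tower grid. This is a toy version only: the window size, the treatment of overlapping candidates and the isolation criteria of the real L1 systems are considerably more refined.

```python
def sliding_window_max(et, w=2):
    """Toy L1 sliding-window scan over a trigger-tower ET grid.

    `et` is a 2D list indexed as [eta][phi]; the grid wraps in phi
    (the azimuthal coordinate of the cylinder) but not in eta.
    Returns the highest window ET sum and its (eta, phi) index.
    """
    n_eta, n_phi = len(et), len(et[0])
    best, best_pos = float('-inf'), None
    for i in range(n_eta - w + 1):          # no wrap in eta
        for j in range(n_phi):              # wrap in phi
            window_sum = sum(et[i + di][(j + dj) % n_phi]
                             for di in range(w) for dj in range(w))
            if window_sum > best:
                best, best_pos = window_sum, (i, j)
    return best, best_pos
```

Applied to a grid containing one localized energy deposit split over two neighbouring towers, the scan returns the window containing the full deposit, mimicking how the L1 trigger localizes an electromagnetic cluster.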

In LHCb, which addresses heavy flavour physics in the pseudorapidity range between 2 and 5, the transverse momentum thresholds are much lower, typically 3 GeV for both the electron and the hadron trigger. Such low thresholds are made possible by the lower-luminosity operation of the experiment (typically 0.4 × 10³³ cm⁻² s⁻¹) and the high data acquisition rate (up to 1 MHz); see Ref. [128] for details.

6.7 Examples of Calorimeters and Calorimeter Facilities

The development of calorimetric facilities was and continues to be driven by the main directions of particle physics. Not surprisingly, as particle physics had its origin in cosmic ray studies, rather crude hadronic sampling calorimeters were successfully used to measure the energy spectrum of cosmic rays [52]. Electron scattering experiments provided the impetus for the development of homogeneous [129] and sampling [130] electromagnetic calorimeters. A major step in understanding and perfecting hadronic sampling calorimeters was made for the study of hadron scattering experiments, both with protons and neutrons [131]. The basic properties of these instruments were derived and Monte Carlo studies helped to optimize them [132]. The ISR provided the next motivation for a major development effort [35], providing the basis for the calorimeter facilities at Fermilab, HERA and LHC. In parallel, equally innovative calorimeter developments were and are initiated for astro-particle physics.

The recent series of CP-violation experiments in neutral kaon decay pushed the requirements for electromagnetic calorimetry (Sect. 6.3.3). The LEP physics program emphasized charged-particle spectroscopy and identification, with the notable exception of L3, with its electromagnetic BGO crystal calorimeter (Sect. 6.3.1) and its U/gas hadron calorimeter. For the Fermilab Collider program, general-purpose electromagnetic and hadronic calorimeter facilities were developed; facilities with new levels of performance were required for HERA, motivated by the need for precision jet spectroscopy (Sect. 6.7.5).

The LHC physics program needs state-of-the-art electromagnetic and hadronic calorimetry, optimized for photons at the 100 GeV scale and for jets at the TeV scale, posing challenging system questions, answered in novel and unconventional ways (Sects. 6.7.3 and 6.7.6.1). The future collider physics programmes require further performance improvements, particularly concerning jet spectroscopy, exploiting at the same time the specific operation environment (Sect. 6.7.6.2).

6.7.1 The MEG Noble Liquid Homogeneous Calorimeter with Light Readout

The MEG experiment at PSI [73] is dedicated to the search for lepton flavour violation in muon decays, aiming at a sensitivity for μ → eγ decays of 10⁻¹³. This requires outstanding background rejection (for example against the reaction μ → eννγ), and hence a calorimeter with excellent energy resolution for ~50 MeV photons and a sub-ns response to cope with the high rate.

The half-cylinder shaped calorimeter is shown in Fig. 6.42. It contains 800 litres of liquid Xenon, and is read out by 846 PMTs, covering approximately 30% of the outside surface of the detector volume.

Fig. 6.42
figure 42

The MEG homogeneous xenon calorimeter during assembly

The PMTs have K-Cs-Sb photocathodes and silica entrance windows transparent to the peak of light emission (175 nm) of liquid xenon.

The detector was optimized for events with a single photon shower in the volume. An interesting technical feature is the construction of the front wall of the cryostat with a honeycomb technique, for better transparency to incoming photons.

High purity (at the ppb level) of the liquid is necessary to prevent absorption of UV photons by contaminants like oxygen and water. The measured absorption length, more than 3 meters, is much longer than the typical light path from emission to the PMTs. The PMT signals are digitized at 2 GHz with a 12 bit accuracy using custom designed electronics.

The energy scale of the calorimeter is calibrated with 17.6 MeV photons from the ⁷Li(p,γ)⁸Be reaction, obtained by sending protons from a Cockcroft-Walton accelerator onto a Li target close to the calorimeter. In addition, photons from π0 decays, produced by pions hitting a LiF target, are also used: one photon is measured in the Xe calorimeter and the other in an auxiliary NaI crystal matrix.

The relative energy resolution at 50 MeV is σ(E)/E = 1.3%, the position resolution ~6 mm and the timing resolution 64 ps. This excellent performance, made possible by this innovative technique, matched the demanding requirements of the experiment.

An upper limit on the branching ratio of muons decaying to eγ of 4.2 × 10⁻¹³ was published in 2016 [133], based on a total statistics of 7 × 10¹⁴ muons stopped in the target; this is the best limit so far. A plan has been put forward and accepted to pursue the experiment with various improvements and a higher flux of stopping muons. The liquid xenon calorimeter is kept, but the PMTs are replaced by VUV-sensitive SiPMs of 12 × 12 mm² size, in order to improve the photon energy and position resolution. The prospect is to reach a sensitivity of 5 × 10⁻¹⁴ [134].

6.7.2 The Xenon 1T Experiment

Xenon1T is the largest and most recent of a generation of xenon detectors optimized for the detection of very low energy nuclear recoils (below 100 keV), such as could be produced by the scattering of a WIMP on nuclei (xenon in this case). Observation of such recoils, if they were to be produced, requires high accuracy of the energy measurement and very low background. The detector, operated as a dual-phase TPC, is sketched in Fig. 6.43 [135]. The sensitive volume is a vertical cylinder of about 1 m diameter and 1 m height. As described in Sect. 6.2.3, both the primary scintillation signal and the ionization signal are exploited.

Fig. 6.43
figure 43

Sketch of the Xenon-1 T detector

The ionization electrons are first drifted to the liquid surface by an electric field generated by a set of copper rings, at a potential decreasing linearly from a grounded grid just below the surface down to the bottom of the volume. The field intensity is about 12 kV/m. Right above the surface, a somewhat higher field accelerates the extracted electrons such that they excite (producing secondary photons) and ionize the surrounding gas. Both the primary and secondary photons are detected by a set of 248 VUV photomultipliers of 78 mm diameter and 35% quantum efficiency at 175 nm, arranged in the liquid at the bottom of the vessel and in the gas above the multiplication region. The light distribution in the top and bottom arrays gives the position and lateral extension of the emitted signal; the time between the primary and secondary signals gives the vertical coordinate.

All construction materials of the detector were selected for low radioactivity. The experiment is operated in the LNGS laboratory near the Gran Sasso tunnel, shielded from cosmic background, and is furthermore enclosed in several layers of passive and active shielding. The remaining background is dominated by electron recoils from residual γ emitters and nuclear recoils from the residual neutron background; the former are strongly suppressed by a requirement on the ratio of ionization to primary scintillation.

The electron lifetime, which depends critically on the extreme purity of the liquid and affects the magnitude of the ionization signal, is measured with photon-to-electron conversion signals generated in the liquid. A neutron generator is used to calibrate the energy response to recoils. The PMTs and the electronics chain are calibrated with blue light pulses sent through fibers ending in the liquid volume. The dark count rate of the PMTs during the first science run was about 10 to 20 Hz.
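The reconstruction of the vertical coordinate from the delay between the prompt (S1) and delayed (S2) signals can be sketched as follows; the drift-speed value is a typical order of magnitude for liquid xenon at fields of this scale, quoted for illustration only.

```python
def depth_below_surface(t_s1_us, t_s2_us, drift_speed_mm_per_us=1.3):
    """Vertical coordinate of an interaction in a dual-phase TPC.

    The depth below the liquid surface follows from the drift time
    of the ionization electrons, i.e. the delay of the secondary
    light signal (S2) with respect to the prompt scintillation (S1).
    The default drift speed is an illustrative order of magnitude
    for LXe at fields of order 10 kV/m, not a measured constant.
    Times are in microseconds; the result is in mm.
    """
    drift_time_us = t_s2_us - t_s1_us
    return drift_speed_mm_per_us * drift_time_us
```

With these assumed numbers, an S2 signal arriving 100 μs after S1 corresponds to an interaction about 130 mm below the surface; the lateral coordinates come from the light pattern on the top PMT array.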
A first science run of about 30 days demonstrated that Xenon1T is the most sensitive device presently running for WIMP masses above 10 GeV. A science run of two years is planned. An enlarged version of the detector, Xenon-nT, with 8 tons of fiducial volume, is under construction; its sensitivity should allow approaching the “neutrino floor” set by coherent scattering of solar neutrinos on nuclei.

6.7.3 The CMS Electromagnetic Crystal Calorimeter

The largest crystal calorimeter operated so far is the PbWO4 calorimeter of the CMS experiment at the CERN LHC [110], clearly aimed at the Higgs → γγ discovery. The calorimeter consists of a cylindrical barrel part (inner radius ~1.3 m) and two planar end-caps closing the cylinder at about 3 m from the proton-proton collision point (see Fig. 6.44). Each of the 61,200 barrel crystals is a tapered bar covering a Δη × Δφ area of 0.018 × 0.018, with a depth of 23 cm (24.7 X0). In the end-caps, the calorimeter is preceded by a lead/silicon-strip preshower. Basic properties of PbWO4 have been given in Sect. 6.3.1.

Fig. 6.44
figure 44

Layout of the CMS electromagnetic calorimeter, showing the arrangement of crystals, with the preshower in front of the end-caps

The calorimeter is located inside the hadronic calorimeter, which in turn is inside the 3.8 T superconducting solenoid. Barrel crystals are read out by APDs, while the (somewhat bigger) end-cap crystals are read out by phototriodes, chosen for their better radiation resistance.

The front-end electronics processes signals corresponding to energy deposits of up to ~1.5 TeV (3.0 TeV) in the barrel (end-caps). The equivalent noise per crystal is ~30 MeV. This figure is likely to increase after high luminosity running, due to increased leakage current in the APDs.

Despite stringent quality controls during crystal production, the particle response observed in beam tests showed an unavoidable crystal-to-crystal response dispersion of about 7% rms. Two calibration campaigns, with test beams and cosmics, were undertaken to establish the calibration constants for the initial LHC operation. Using the various tools available at the LHC, such as azimuthal uniformity of response and π0, J/Ψ and Z0 invariant mass constraints, all crystals were quickly intercalibrated to a precision of around 1%. A laser pulse system monitors the short-term response variations due to radiation effects.

The CMS crystal calorimeter has successfully fulfilled its essential role for the experiment: for triggering, as the source of identification and precise measurement of electrons and photons, and as input to particle flow. Among the most important results based in particular on the calorimeter data is the already mentioned discovery of the Higgs boson in 2012, revealed in the inclusive di-photon spectrum shown in Fig. 6.45.

Fig. 6.45
figure 45

Inclusive di-photon mass spectrum in CMS, from the Higgs discovery paper

6.7.4 The ATLAS Liquid Argon Electromagnetic Calorimeter

While ATLAS and CMS have almost identical physics programs, with the search for the Higgs boson as one of the main objectives, the two experiments have opted for a series of different detection techniques. The ATLAS electromagnetic calorimeter [103] uses a lead/liquid argon sampling technique, with an ‘accordion’ geometry, and is located outside of the inner solenoid. The liquid argon technique was chosen for its immunity to radiation, its intrinsic stability and linearity of response, and its relative ease of longitudinal and transverse segmentation. Its more modest intrinsic resolution is a limiting factor at medium and low energies.

The calorimeter features three segments in depth, the first one with an extremely fine segmentation in pseudorapidity (0.003) to allow separation between prompt photons and photons from π0 decays up to p_T ~ 70 GeV/c, the range of interest for the Higgs boson search in the γγ decay mode.

The calorimeter is preceded by a presampler, located in the same cryostat, to correct for the loss of energy of electrons and converted photons in the inner detector material, in the solenoid and cryostat front walls (see Table 6.5). The barrel part, consisting of two cylinders, and the two end-cap wheels provide uniform azimuthal coverage despite being built of 16 (8) modules per cylinder (wheel) (Fig. 6.46).

Fig. 6.46
figure 46

Photograph taken during the assembly of the ATLAS electromagnetic barrel calorimeter. The pre-sampler sectors (in gray) are visible in front of the 16 calorimeter modules

Fig. 6.47
figure 47

Inclusive diphoton mass spectrum in ATLAS, from the Higgs discovery paper

The front-end electronics was optimized (Fig. 6.36) for best performance at the nominal LHC luminosity of 10³⁴ cm⁻² s⁻¹. The dynamic range is covered with three channels with gains in the ratio 1/9/81, digitized with 12-bit resolution. In this way, quantization noise remains small compared to the noise level after the preamplifier (10 to 50 MeV depending on the sampling) up to the highest expected energy deposition per cell (~3 TeV). Trigger towers of size Δη × Δφ = 0.1 × 0.1 are built by analogue summing of signals at the front-end level, followed by digitization at 40 MHz with 10-bit ADCs (sensitivity of 1 GeV per count).
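The multi-gain scheme can be illustrated with a toy per-sample gain selection: choose the highest gain that does not saturate the ADC, so that quantization noise stays negligible over the full dynamic range. The selection logic of the actual front-end differs in detail; the names and thresholds here are illustrative.

```python
def select_gain(amplitude, full_scale=4096, gains=(81, 9, 1)):
    """Pick the highest gain that keeps the sample below ADC saturation.

    `amplitude` is the signal expressed in ADC counts at unit gain;
    a 12-bit ADC saturates at 4096 counts. Toy model of a three-gain
    (1/9/81) readout, not the actual ATLAS selection logic.
    """
    for g in gains:                      # try highest gain first
        if amplitude * g < full_scale:
            return g
    return gains[-1]                     # saturating signals: lowest gain
```

Small signals are thus digitized at gain 81, intermediate ones at gain 9, and the largest depositions at gain 1, extending the effective dynamic range by the factor 81 between the extreme gains.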

The uniformity of response within one module and the reproducibility from module to module were checked in a test beam. The overall dispersion of energy measurements in three barrel modules and three end-cap modules was 0.43% and 0.62%, respectively [136]. The local energy resolution was found to be about 1% (rms) at 120 GeV [94], and is well described by σ(E)/E = 10%/√E ⊕ 0.25/E ⊕ 0.003. The energy scale (Sect. 6.3.6) and the long-range uniformity have been assessed in situ using the Z mass constraint. An overall “constant term” of about 0.8% in the barrel, and up to 3% in some pseudorapidity ranges of the end-caps, covers the unavoidable dispersion in materials and in calibration, and the effects of material in front of the calorimeter not fully described in the simulation. As for CMS, the electromagnetic calorimeter of ATLAS successfully fulfilled its task. Among the most important results based in particular on the calorimeter data is the already mentioned discovery of the Higgs boson in 2012; the corresponding inclusive di-photon spectrum is shown in Fig. 6.47. Also worth mentioning is the contribution of the electron channel to the recent measurement of the W mass, 80,370 ± 19 MeV in the muon and electron channels together [137].
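Evaluating the three-term parametrization quoted above at 120 GeV indeed reproduces the ~1% local resolution; a quick numerical check, with the stochastic, noise and constant coefficients taken from the text:

```python
import math

def em_resolution(e_gev, a=0.10, b=0.25, c=0.003):
    """sigma(E)/E = a/sqrt(E) (+) b/E (+) c, combined in quadrature.

    Defaults are the beam-test coefficients quoted in the text for
    the ATLAS barrel modules: stochastic (a), noise (b, in GeV) and
    constant (c) terms; E is in GeV.
    """
    stochastic = a / math.sqrt(e_gev)
    noise = b / e_gev
    return math.sqrt(stochastic**2 + noise**2 + c**2)
```

At E = 120 GeV the stochastic term is ~0.91%, the noise term ~0.21% and the constant term 0.3%, giving ~1.0% in quadrature.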

6.7.5 The ZEUS Calorimeter at HERA

Research at the electron-proton collider HERA required precision jet spectroscopy at the 100 GeV level to study the underlying dynamics of e-quark collisions. Energy and position resolution for jets were at a premium.

Fig. 6.48
figure 48

View of a module of the ZEUS U-scintillator calorimeter. Wavelength-shifter readout is used to read cells of 5 × 20 cm² cross-section in the electromagnetic compartment and of 20 × 20 cm² in the two subsequent hadronic compartments [138]

The H1 Collaboration developed a calorimeter based on the LAr-Pb and LAr-Fe sampling technology. A certain level of ‘off-line’ compensation was achieved because hadron showers were sampled in up to ten longitudinal segments, so that longitudinal shower weighting could be applied [139].

The ZEUS Collaboration at HERA developed an intrinsically compensated calorimeter using the U-scintillator sampling technique [43, 138], modeled after the Axial Field Spectrometer facility [140]. The calorimeter is constructed in modular form (Fig. 6.48), with units approximately 5 m long, 20 cm wide and more than 2 m deep. The ratio of the thickness of the 238U plates (3.3 mm) to that of the scintillator plates (2.6 mm) was tuned to achieve e/π = 1, confirmed by measurements to be e/π = 1.00 ± 0.03. The measured hadronic energy resolution, σ(E)/E (hadrons) = 0.35/√E(GeV), is consistent with a sampling resolution of σ/E (sampling, hadrons) ≈ 0.29/√E(GeV) and an intrinsic resolution of σ/E (intrinsic, hadrons) ≈ 0.20/√E(GeV). The sampling frequency is rather coarse for electrons, resulting in an electron energy resolution of σ/E (electrons) = 0.18/√E(GeV).
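The consistency of the quoted numbers can be checked directly, since the sampling and intrinsic contributions add in quadrature:

```python
import math

# Sampling and intrinsic contributions to the ZEUS hadronic resolution
# (stochastic coefficients, in units of 1/sqrt(E in GeV)), as quoted
# in the text; they combine in quadrature.
sampling, intrinsic = 0.29, 0.20
combined = math.hypot(sampling, intrinsic)   # ~0.35, the measured value
```

Indeed √(0.29² + 0.20²) ≈ 0.35, matching the measured hadronic stochastic term.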

H1 and ZEUS provided detailed measurements of electron-nucleon scattering, from which a new generation of parton distribution functions (PDFs) was derived. These functions have been, and still are, used extensively for LHC physics analyses.

6.7.6 Facilities at the LHC and a Future Collider

The research programmes at the LHC and at possible future colliders impose a new level of performance requirements.

6.7.6.1 Facilities at LHC

The two general-purpose p-p experiments, ATLAS and CMS, have developed rather different approaches for the same physics research, promoted by different groups of physicists with their personal experience, background and taste, and constrained by the realities of funding. In both cases the extraordinary requirements on electromagnetic calorimetry imposed ‘hybrid’ solutions, allowing independent optimization of electromagnetic and hadronic calorimetry.

This ‘independence’ led ATLAS to choose two novel, unconventional detector geometries. The ‘Accordion’ calorimeter (see Sect. 6.7.4) is followed by a hadronic instrument with scintillator-tile/WLS-fibre readout. One of the 64 slices forming a complete and crack-less cylinder is shown in Fig. 6.49. The unconventional geometry, with absorber plates and scintillating tiles oriented along the direction of the incident particle, permits an economic construction and homogeneous sensitivity [141]. This geometry works because the preceding ~1.5 λ Accordion calorimeter provides enough hadronic shower development to permit good sampling in the tile geometry. The arrangement also greatly facilitates longitudinal and transverse segmentation, permitting effective longitudinal weighting of the shower energies. Weighting leads to a resolution of the combined calorimeter system (Accordion and Tile calorimeter) of σ/E ≈ 0.52/√E ⊕ 1.6/E ⊕ 0.03 and a good linearity of response [120]. A jet energy resolution of σ(jet)/E ≈ 0.6/√E(GeV) is estimated, adequate for the LHC.

The ATLAS Tile and Extended Tile calorimeters cover |η| < 1.4. For the forward (‘end-cap’) regions (1.4 < |η| < 3.2) ATLAS had to adopt different solutions to cope with the even more ferocious radiation levels: an Accordion-type electromagnetic calorimeter precedes a Cu/liquid argon hadron calorimeter.
In the very forward region (3 < |η| < 5) yet another novel geometry had to be invented: cylindrical readout elements with narrow LAr gaps (0.25 to 0.35 mm) as sensitive medium are embedded in a tungsten absorber, sampling the geometrically very compact showers at adequate readout speed [120]. Figure 6.50 shows a cut view through the ATLAS calorimeter facility.

Fig. 6.49
figure 49

View of one module of the ATLAS hadronic barrel calorimeter. Sixty-four such modules complete the cylindrical detector. Each of the longitudinally oriented scintillating tiles is instrumented with two wavelength-shifting fibers [141]

Fig. 6.50
figure 50

Longitudinal quarter view of the ATLAS calorimeter facility. The outer radius is at 4.2 m; it extends along the beam direction to ±7 m. Auxiliary instrumentation in the gap between the calorimeters allows energy correction for the non-instrumented zones [120]

CMS calorimetry consists of the novel PbWO4 electromagnetic calorimeter (Sects. 6.3.1 and 6.7.3) followed by a hadron calorimeter of brass plates (70% Cu, 30% Zn; 50 mm thick) interleaved with scintillator tiles. The tiles are optically grouped into towers (0.087 × 0.087 in η-φ space in the barrel calorimeter) and read out by hybrid photodetectors, all located inside the 3.8 T superconducting solenoid. This favourable geometry, however, only allows for a total of ~7 λ, requiring a ‘tail catcher’ formed by scintillator tiles outside the coil, in the first muon absorber layer [142]. Tables 6.4 and 6.5 summarize the principal design parameters of the ATLAS and CMS calorimeter facilities.

Table 6.4 Parameters of the ATLAS and CMS electromagnetic calorimeter facilities
Table 6.5 Parameters of the ATLAS and CMS hadronic calorimeter facilities

6.7.6.2 Developments for Future Collider Calorimetry

The proposal for a future linear e+e− collider (LC) has triggered a worldwide R&D programme for the appropriate detector technologies [143]. One direction of present R&D addresses calorimetry optimized for its physics programme, emphasizing precision electromagnetic calorimetry and very high granularity for ‘Particle Flow Analysis’ (see Sect. 6.2.9).

One promising direction is being pursued by the DREAM Collaboration [144]. DREAM (‘Dual REAdout Method’) is a concept aiming at event-by-event separation of the electromagnetic component, detected through Cherenkov light, from the hadronic component, detected through scintillation light. Timing information might provide an additional handle to disentangle the various processes (e.g. delayed nuclear photon emission). The combined information could in principle allow complete reconstruction of the shower and jet composition. The LC jet benchmark resolution of σ/E ≈ 0.30/√E might not remain a dream [84].

The CALICE (Calorimeters for the Linear Collider Experiment) Collaboration aims at the same performance: it makes Particle Flow Analysis an integral part of the design of the experimental facility, aiming to separately measure the momenta of the charged particles, the photons in the electromagnetic calorimeter and the neutral hadrons (n, K0) in the hadron calorimeter. The calorimeter is placed at a relatively large radius, allowing the jets to open up and the charged and neutral particles to separate in the strong B-field. This strategy requires exceedingly high granularity (more than 10⁸ channels) to measure the individual shower profiles [65].

Besides the studies for a possible LC, a vigorous programme has been initiated to understand the physics potential of, and the consequences for experimentation at, a possible “Future Circular Collider” (FCC). A center-of-mass energy for proton-proton collisions in the 100 TeV regime is envisaged, implying a collider circumference of about 100 km. The physics research determines the peak luminosity of about 3 × 10³⁵ cm⁻² s⁻¹. These key parameters shape the detector design and performance specifications, which are being intensively studied [145].

The electromagnetic and hadronic calorimetry emphasizes very high granularity to cope with particle multiplicity and event pile-up, tight control of systematic effects (small constant term), very good linearity and, unsurprisingly, taming of the ferocious radiation environment. The calorimeters are of the sampling type, because the stochastic term in the calorimeter performance is less of an issue given that the typical energy scales are in the TeV regime. Simulations show that rather conventional, LHC-type calorimeter instrumentation will deliver the desired performance, without excluding novel developments with more “aggressive” technologies. LAr is the technology of choice, except for a possible scintillator option for the central hadron calorimetry. As an indication, the EM calorimeter could be a Pb/LAr device, with cell sizes between 6 × 6 mm² and 20 × 20 mm² and an eightfold longitudinal subdivision. A possible geometry is shown in Fig. 6.51. Hadron calorimetry could use a scintillator/Pb/steel detector (in the central region), which would give e/h ≈ 1.1, resulting in the required good linearity and a decent jet resolution, see Fig. 6.52.

Fig. 6.51
figure 51

Conceptual structure of an em calorimeter, showing the slanted absorber plates, LAr gaps and readout boards

Fig. 6.52
figure 52

Jet resolution for different hadron calorimeter configurations

While these concepts seem plausible, a closer look shows that the technical challenges are formidable… fortunately, the LHC experience provided training, motivation and encouragement.

6.8 Conclusions

During the past 40 years calorimetry has matured into a precision measurement technique, indispensable to modern particle physics experiments. The Higgs boson, cornerstone of our present understanding of matter, owes its discovery to calorimetry.

Understanding and modelling the physics processes at work in calorimetry at the 1% level has been achieved. Based on this understanding and helped by modern signal processing techniques, developments aim at characterizing the individual showers, at optimizing further particle identification and at reaching the intrinsic performance level for jet spectroscopy, needed for the next generation of precision and discovery experiments.