1 Remembering Bruno Touschek

In 1958, at the Physics Department of La Sapienza, I was one of the five or six postgraduate students following a course on Theoretical Physics in the framework of an advanced two-year school called ‘Scuola di Perfezionamento in Fisica Nucleare’. On the first day Bruno Touschek entered the lecture hall smoking his usual cigarette and, in a few sentences and short formulas written on the blackboard, defined fields—as systems with an infinite number of degrees of freedom—and their Lagrangian density. Then he introduced a symmetry of the Lagrangian and, in a few further steps, derived the existence of a conserved quantity, ending with a smile: “This is Noether’s theorem”. Yet he explained neither who Noether was nor when the theorem had been proved [1].

I was struck by his simple and direct way of explaining difficult subjects, and my first impression was confirmed by the rest of the course. Because of his fascinating lectures—of which I still have the notebook—quantum field theory has enthralled me ever since, and I even used it in some papers published outside my main activity as an experimental physicist. Moreover, in the ‘80s—at the postgraduate school of the University of Milan—I taught courses on ‘Particle physics and the Standard Model’ using the first edition of a very clear book written by Ian Aitchison and Anthony Hey.

Five years after my first introduction to field theory, I was working—with a small group of physicists belonging to the Physics Laboratory of the National Health Institute (ISS) in Rome—at the Frascati electron synchrotron on a new line of research in nuclear physics: the study of ‘quasi-free’ electron-proton scattering on nuclei. During a coffee break of the ‘Congressino dell’INFN’, Bruno approached me. In my diary I wrote “Touschek mi chiede se voglio prendere l’incarico di preparare un’esperienza in una delle sezioni dritte di Adone. Rispondo che ci penserò. Tornando a Roma parlo a Giorgio Matthiae di questo.”, i.e. “Touschek asks me whether I want to take on the task of preparing an experiment in one of the straight sections of Adone. I answer that I will think about it. On the way back to Rome I talk to Giorgio Matthiae about this.” He wanted me to perform an experiment at the electron–positron collider Adone, at the time under construction.

In 1968, also because of Bruno’s request, I changed research field, an important step in my professional life. We were preparing—together with Giorgio Matthiae and some junior collaborators—an Adone experiment to study the production and decay of phi-mesons. Since, to compute the radiative corrections to this novel process, I had used a method developed by the French theorist Paul Kessler, I went to Bruno’s office at La Sapienza to show him the results. I asked him “Do you know Kessler?” His instantaneous reply was: “Io conosco solo le sorelle Kessler”, i.e. “I only know the Kessler sisters”, two tall and beautiful German twins, dancers and singers often seen on the main channel of the Italian TV.

In 1977 Bruno was at CERN, where I was then working—having moved in 1973 from Rome to Geneva—and where he was participating in the development of the SPS proton-antiproton collider proposed by Carlo Rubbia. Once, having met him by chance in front of the CERN library, I asked him about his first long visit to CERN. I was surprised to hear that he had become convinced that the future of particle physics lay in the proton-antiproton collisions advocated by Carlo and not in his beloved electron–positron annihilations.

A few months later he was so sick that he had to be admitted to the La Tour Hospital in Meyrin. Like other friends of his, I went to visit him a few times. On July 22, 1978, I published in Corriere della Sera an article titled “Who was the man of number 137?”, recounting that, on my last visit, I did not find him in the usual room. When, having located the room, I excused myself for the delay, he said “…because the real problem is the number of this room”. After a pause he added “This is the problem around which I have hovered throughout my life without success”. Another pause and then “Sai, Ugo, Pauli fu messo in una stanza d’ospedale numero 137 prima di morire”, “You know, Ugo, Pauli was put in hospital room number 137 before dying”. At the end of Sect. 8.7 (p. 106) I discuss the present understanding of the number 137 in the framework of Grand Unified Theories of the strong, electromagnetic and weak interactions.

2 Homage to Bruno

In the well-known figure of the normalized hadronic cross-section R as a function of the energy Ecm = √s, which describes the full electron–positron landscape (Fig. 8.1), I have called ‘Bruno’s domain’ the energy range that goes up to Ecm = 40 GeV and is dominated by the exchange of a virtual photon, with all its radiative corrections. At larger energies, the creation of neutral and charged intermediate bosons plays the major role, so that this energy range can be called the ‘electroweak domain’. The figure shows that, at the end of Bruno’s domain, R equals the fraction 33/9, expected if the five types of produced quarks come in three colours.

Fig. 8.1

Bruno’s domain is dominated by the peaks of the resonant production of quark-antiquark pairs, and the electroweak domain by the Z0 peak. (The compilation of e+e− data is taken from [2])

In Fig. 8.2 the total cross-sections of the main LEP processes are plotted as functions of the centre-of-mass energy. The red curve shows that, around Ecm = 210 GeV, the Standard Model Higgs production cross-section, with a hypothetical mass MH = 115 GeV, is about 100 times smaller than the W+W− cross-section.

Fig. 8.2

Values of the cross-sections measured by L3 [3] and corresponding behaviours predicted by the Standard Model (courtesy of CERN)

In 20% (70%) of the cases, a Z-boson decays into invisible neutrino-antineutrino pairs (into quark-antiquark pairs); charged lepton pairs contribute the remaining 10%.

3 The Four LEP Detectors

In 1982, the LEP Experiments Committee and the Director General Herwig Schopper [4] approved two general-purpose detectors (ALEPH and OPAL) and two specialized detectors (DELPHI and L3). The first three were about 12 m tall, while L3, being 20 m high, was definitely larger. The first spokespersons were Jack Steinberger, Aldo Michelini, Ugo Amaldi and Sam Ting, and the technical coordinators were Pierre Lazeyras, Alasdair Smith, Hans Jürgen Hilke and Alain Hervé.

The superconducting coil of ALEPH produced a 1.5 T field and contained a 2-layer double-sided micro-vertex silicon detector and a large Time Projection Chamber (rose in Fig. 8.3—diameter = 3.6 m), which measured—through the typical long longitudinal drifts—the tracks of charged particles, recording both the positions of 21 track segments and the corresponding energy losses ∆E/∆x for particle identification. The electromagnetic calorimeter (green in Fig. 8.3)—located inside the superconducting coil—was based on lead sheets and wire-chambers.

Fig. 8.3

The four LEP detectors (courtesy of CERN)

OPAL adopted a conservative design very similar to that of JADE [5], a very successful detector built for the DESY PETRA electron–positron collider and operated in the years 1978–1986. The room-temperature coil produced a 0.4 T magnetic field. Inside the coil, a 2-layer single-sided silicon microstrip detector and a Jet Chamber (red in Fig. 8.3) measured the charged particles. The electromagnetic calorimeter was made of 9440 lead-glass Cherenkov counters. As in the other detectors, a hadron calorimeter and many muon chambers (green in Fig. 8.3) covered the full solid angle.

DELPHI was specialised in hadron tagging. It had a lower field than ALEPH (1.2 T) but a larger diameter, so that inside it a Time Projection Chamber (diameter = 2.2 m) was surrounded by two Ring Imaging Cherenkov (RICH) counters (yellow in Fig. 8.3). This novel detector recorded the rings of photons produced through the Cherenkov effect in a liquid and a gas radiator, distinguishing kaons from pions of relatively large kinetic energies. The micro-vertex detector was made of 3 layers of double-sided silicon detectors, and silicon pixel detectors covered the forward angles.

L3 was specialised in the accurate measurement of photon/electron and muon energies. The room-temperature solenoid of L3 had a very large diameter: 15 m. The main detector for measuring the curvature of charged particles was a small-radius, very precise Time Expansion Chamber (radius = 50 cm) that surrounded a two-layer double-sided micro-vertex silicon detector. Outside it was the electromagnetic calorimeter, made of about 12 000 crystals of bismuth germanium oxide. Outside the hadron calorimeter, three layers of very large drift chambers (green in Fig. 8.3) provided accurate measurements of muon momenta.

In the years 1982–1989 LEP was built, under the direction of Emilio Picasso, 100 m below the plain between the Geneva airport and the Jura mountains (Fig. 8.4).

Fig. 8.4

Layout of LEP and locations of the four detectors [6]

The first events were registered in August 1989 and, for eleven years, the four detectors collected data at centre-of-mass energies between 89 and 209 GeV. But the first Z decays were not observed at LEP: four months earlier, at SLAC, about 100 events had been registered by the MARK II detector [7]—later replaced by the SLAC Large Detector (SLD)—mounted on the very innovative collider proposed by Burton Richter only ten years before. In the Stanford Linear Collider (SLC), beams of electrons and positrons of about 50 GeV were accelerated by the 2-mile-long SLAC linac and brought to the collision point by two large 180° magnetic arcs [8]. Eventually, SLD logged 500 000 Z-events, while at LEP I (1989–1995) each of the four CERN detectors registered about 4 × 10⁶ Z decays. In the higher-energy run, called LEP II (1996–2000), the centre-of-mass energy increased step-by-step from 180 to 209 GeV and each detector collected about 10 000 events.

4 The Electroweak Sector of the Standard Model

In 1992 Physics Letters published a paper that was signed by ‘The LEP collaborations: ALEPH, DELPHI, L3 and OPAL’ [9]. The more than one thousand authors were referred to in a simple footnote: ‘Lists of authors can be found in refs. [1,2,3,4]’. The paper was due to a team of experts of the four Collaborations chaired by Jack Steinberger, who originated it and “insisted that the combination was a job for the experimentalists from the four collaborations rather than for the theorists. This led to the establishment of the Electroweak Working Group collaborative effort across the experiments” [10]. This was the first of many LEP Working Groups of the ADLO ‘second-order’ collaboration; some of them are quoted in the next Sections.

The summary table of the 1992 paper had 11 entries. Twenty years later, the LEP Electroweak Working Group produced Fig. 8.5 with 18 entries [3].

Fig. 8.5

Table produced in March 2012 by the LEP Electroweak Working Group [3]

Behind this table there is a second unprecedented feature of the activities developed around LEP: the work of more than a hundred theorists who—over the years—computed the higher-order processes that contribute to the physical quantities measured by the experimentalists. Examples are given in Fig. 8.6.

Fig. 8.6

Virtual Higgs bosons and top quarks affect a the Z-mass and b the Z-decay into b-quarks

This coordinated process started with two CERN Yellow Reports bearing the title ‘Physics at LEP’ that were distributed in 1986 [11], three years before the first collisions. They were edited by John Ellis and Roberto Peccei who, in their Introduction to the first volume, wrote: “Thanks largely to the initiative of its then Chairman, Günter Wolf, the LEP Experiments Committee asked us, the two theorists on the Committee, to organize this new survey. We identified five principal areas of LEP physics, namely: precision studies at the Z peak; toponium; searches for new particles, QCD, gamma-gamma and heavy quark physics; and high-energy running beyond the W+W− threshold. Working Groups (WG) were set up for each one of these areas.” The first contribution on ‘Precision tests of the electroweak theory at the Z’ was written by Guido Altarelli, chair of the corresponding WG.

Throughout the LEP lifetime, Guido Altarelli and John Ellis were the theorists who not only worked on these problems themselves but also urged colleagues to compute new processes and helped the experimentalists to best interpret their data.

Radiative corrections—such as the ones depicted in Fig. 8.6—depend only logarithmically on the Higgs mass but are much more sensitive to the top mass mt; this gave rise to an interesting episode. In March 1994, at the Moriond Meeting, the latest fit to the most precise LEP measurements of the time was presented together with the best value of the top mass: mt = (172 ± 13 ± 18) GeV. A few months later the CDF Collaboration announced the detection at Fermilab of 12 top-quark decays with a measured mass mt = (174 ± 10 ± 13) GeV that was, within the large errors, in agreement with the LEP best fit.

Going back to Fig. 8.5, four entries are contributed by non-LEP experiments: the left–right polarization asymmetry Al (uniquely measured by the SLAC Large Detector), the mass and width of the W-boson (measured by CDF and D0 at Fermilab and at LEP), and the mass of the top-quark (discovered and measured at Fermilab).

The fourth column of the table gives the best-fit values of the 18 quantities when radiative corrections are properly considered. The histogram to the right of the figure shows by how many standard deviations each result differs from its best fit value (the so-called ‘pull’). A glance is enough to state that the fit is very good.

For reasons of space, the meaning of the various quantities and their measurements cannot be treated here. I limit myself to two remarks before discussing in depth a particular subject: b-tagging.

Firstly, among the LEP data the most precise measurements concern the Z mass (±0.0023%), the Z width (±0.09%), the hadronic cross-section σ0had at the Z peak (±0.09%) and the fraction Rb of b-quark events over all hadronic events (±0.3%). These accuracies surpass any prediction made before data taking.

Secondly, the cross-section σ0had is so precisely measured because of the enormous amount of work done, theoretically and experimentally, to measure very accurately the luminosity, i.e. to compute the cross-section of very forward electron–positron (‘Bhabha’) scattering and to construct sophisticated and mechanically accurate electron/positron detectors, which—placed downstream of the collision point—measured very precisely the electron and positron scattering angles, as discussed in [12, 13] for the OPAL and DELPHI detectors.
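In essence, the integrated luminosity is obtained by counting the small-angle Bhabha events recorded within a precisely known angular acceptance and dividing by the theoretical cross-section computed for that same acceptance,

$$ L_{\text{int}} \; = \; \frac{N_{\text{Bhabha}} }{\sigma_{\text{Bhabha}}^{\text{theory}} \left( \theta_{\min } < \theta < \theta_{\max } \right)}, $$

so that both the theoretical error on the Bhabha cross-section and the mechanical precision on the acceptance limits enter directly into the error on every measured cross-section.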

Considering now ‘b-tagging’, this novel technique has been very important in the LEP experimental program because it was used not only to measure—as indicated in Fig. 8.5—three of the eighteen parameters (the fraction of b-quark-pairs Rb, the forward–backward asymmetries Afb0, and the polarization asymmetry parameter Ab) but also to search for the Higgs boson and to measure the running of the mass of the b-quark, subjects that are discussed in the next two Sections.

The four micro-vertex silicon detectors are shown in Fig. 8.7.

Fig. 8.7

a–d The LEP micro-vertex silicon detectors were located around a thin beryllium beam pipe of radius Rpipe = 55 mm [14,15,16,17] (courtesy of CERN). e Characteristics of the LEP and SLD micro-vertex detectors. The table has been compiled by Chiara Mariotti [18]

The main feature of the four LEP micro-vertex detectors was the 20–30 \(\upmu \)m accuracy in the measurement of the track coordinates, while the z-resolution of the SLD micro-vertex detector was only 13 \(\upmu \)m.

Figure 8.8a explains how the transverse mismatch δ of a track, due to the decay of a hadron containing a b-quark, is measured, and Fig. 8.8b compares the experimental and the Monte Carlo event distributions, showing how a cut in S = δ/σδ can increase the purity of the sample while reducing the efficiency. The variations of the efficiencies with the purity of the sample are quantitatively shown in Fig. 8.9a.
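As an illustration of this trade-off, the following minimal sketch computes efficiency and purity for a cut on the significance S applied to a labelled sample; the two track populations and their resolutions are invented toy numbers, not LEP data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy significances S = delta/sigma_delta (illustrative numbers, not LEP data):
# tracks from light quarks follow the resolution function, tracks from b-hadron
# decays are shifted to positive values by the B flight distance.
s_light = rng.normal(0.0, 1.0, 100_000)
s_b = rng.normal(3.0, 2.0, 20_000)

def efficiency_purity(cut):
    """Efficiency = fraction of b tracks kept by S > cut;
    purity = b fraction of the selected sample."""
    n_b = np.count_nonzero(s_b > cut)
    n_light = np.count_nonzero(s_light > cut)
    return n_b / s_b.size, n_b / (n_b + n_light)

for cut in (1.0, 2.0, 3.0):
    eff, pur = efficiency_purity(cut)
    print(f"S > {cut:.0f}: efficiency = {eff:.2f}, purity = {pur:.2f}")
```

Tightening the cut raises the purity and lowers the efficiency, which is the behaviour quantified in Fig. 8.9a.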

Fig. 8.8

a Definition of the mismatch δ. b Comparison between data and a Monte Carlo calculation. The x-axis represents the mismatch δ divided by its standard deviation σδ [19]

Fig. 8.9

a The b-tagging efficiencies decrease with the required purity of the sample. (Courtesy of Chiara Mariotti, CERN) b Summary of the data that give the average Rb value of Fig. 8.5 [19]. The ± 0.0008% LEP error is 20 times smaller than the one estimated before the beginning of data taking

The first layer of the LEP detectors was at about 65 mm from the centre of the vacuum pipe, while this distance was 29 mm for SLD (lower part of Fig. 8.7). This, together with the smaller primary-vertex resolution, is the reason for the larger b-tagging efficiency of SLD (Fig. 8.9a).

As far as LEP II is concerned, Fig. 8.10 reproduces the measurements by the L3 Collaboration [20] of the W+W− production cross-section, which is the sum of the three contributions depicted in the upper part of Fig. 8.10a. This cross-section would not flatten with energy without the ZWW triple gauge coupling, which in the electroweak theory is due to the non-Abelian nature of the SU(2) group [21]. The figure also shows that the Spin Matrix Elements, plotted in Fig. 8.10b as functions of the W polar angle, perfectly agree with the Standard Model predictions.

Fig. 8.10

a W+W− total cross-section. b Spin Matrix Elements versus the W polar angle

5 In Search of the Higgs Boson

From the March 2012 best fit of Fig. 8.5, the LEP Electroweak Working Group obtained for the Higgs mass the result [3] MH = (94 +29 −24) GeV, so that MH was predicted to be smaller than 152 GeV at the 95% confidence level. This is an indirect limit but, of course, even before LEP was built the direct detection of Higgs boson decays was at the top of the list of foreseen searches. Figure 8.11 shows the main decay channels; the detection of these events profits from a large b-tagging efficiency.

Fig. 8.11

The best channel to detect Higgs bosons is the two-jet decay of both H and Z. To this end, double b-tagging is extremely useful, but the efficiency of Fig. 8.9a enters quadratically

In 1986, at the Aachen ECFA Workshop on LEP 200, Sau Lan Wu reported the conclusions of the Higgs Working Group [22]: “At centre-of-mass energy of 200 GeV significant signals are certainly observable up to MH = 80 GeV from the missing energy channel and up to MH = 70 GeV from the 4-jet channel”. Fourteen years later LEP experiments were engaged in searching for a 115 GeV Higgs.

The energy that an electron/positron loses in synchrotron radiation increases as the fourth power of the beam energy so that, given the LEP diameter, at 100 GeV the energy loss per turn was about 3 GeV. The losses were replenished by the RF cavity system that, in the years 1996–1999, was continuously upgraded, as shown in Fig. 8.12a. This was made possible by the leadership of Emilio Picasso [23], who had been Project Leader of the superconducting (SC) cavity group, and by the invention by C. Benvenuti of SC cavities built by coating, with the ‘sputtering’ technique, the inner surfaces of copper cavities with a thin niobium layer [24].
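The quoted numbers follow from the standard synchrotron-radiation formula for electrons; taking for LEP a bending radius of roughly 3.1 km (an approximate value, quoted here only for illustration),

$$ U_{0} \,[{\text{GeV}}] \; \simeq \; 8.85 \times 10^{ - 5} \,\frac{E^{4} \,[{\text{GeV}}^{4} ]}{\rho \,[{\text{m}}]} \; \approx \; 8.85 \times 10^{ - 5} \times \frac{100^{4} }{3100} \; \approx \; 2.9\;{\text{GeV per turn}}, $$

in agreement with the approximately 3 GeV mentioned above.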

Fig. 8.12

a In 1995–99 the LEP total RF voltage was increased by adding SC cavities and increasing their gradient from 6 to 7.5 MV/m [25] (courtesy of CERN). b In the MSSM the mass of the lightest neutral Higgs depends on two parameters: the mass of the axial boson A and tgβ [26]

At CERN, the upgrade program was the subject of many animated discussions because it had been shown, in 1994–95, that the minimal supersymmetric extension of the Standard Model (MSSM) made a prediction that could be tested at LEP II. In Supersymmetric (SUSY) theories there is one ‘superparticle’ (often called ‘sparticle’) for each Standard Model (SM) particle: a fermion for a boson and a boson for a fermion [27]. SUSY is ‘broken’ because the sparticles are heavier than about 100 GeV. Radiative corrections cause divergences of the Higgs mass, but these disappear in SUSY because of the cancellation between the virtual effects of particles and sparticles. However, in order not to spoil this cancellation, the superparticles must have masses below about 1000 GeV, so that one speaks of ‘low energy’ SUSY.

The MSSM predicts the existence of a ‘light’ Higgs boson ‘h’ and a heavier Higgs boson ‘H’, of an axial boson ‘A’ and of two charged Higgs bosons H+ and H−. The mass of the lightest Higgs depends on the mass of the axial boson A and on the parameter tgβ, as shown in Fig. 8.12b. The limit follows from delicate calculations because, at the lowest order, Mh is lighter than MZ, but large radiative corrections, in which the top mass plays a key role, push it above MZ [26, 28,29,30]. For the top mass known at the time, the computed Mh did not exceed 125–130 GeV, so that a 220–225 GeV collision energy would have been sufficient for detecting the processes of Fig. 8.11.

For this reason, in those years Daniel Treille—who at the time was DELPHI spokesperson—and others did everything possible to convince the CERN Directorate to invest about 70 million Swiss Francs in the construction of extra SC cavities and reach at least 220 GeV [31, 32]. But the new LHC accelerator—which was to be assembled inside the LEP tunnel—was at an advanced stage of planning and required significant resources, both financial and in personnel; therefore, the decision was finally made to invest in enough superconducting cavities to reach only 200 GeV in the centre of mass, as described by Kurt Hübner in [33].

In the year 2000 the experiments began to collect data at the maximum energy, knowing that by autumn LEP had to stop. ALEPH observed one, then two, then three events which could be attributed to the decay of a Higgs boson (Fig. 8.13). L3 also observed a candidate in the missing momentum channel (Fig. 8.11), and OPAL and DELPHI joined with 2 and 1 events, compatible with their backgrounds.

Fig. 8.13

An ALEPH candidate and the first 10 candidates ordered by statistical weight [34] (courtesy of CERN)

CERN Director General Luciano Maiani had to make a difficult decision. If he had delayed the end of LEP by a year, the thousands of people working on the LHC project would have lost enthusiasm and CERN would have had to pay penalty charges of about 100 million Swiss Francs to the companies ready to dismantle LEP.

More than 10 years later Luciano Maiani wrote [35]: “It was necessary to kill LEP, the king of CERN, to build a larger giant, the LHC. I did it. There was much stress, which I feel as I write, it was really a transition drenched with great emotion. As well as a stubborn exercise of rationality. […] I could write [to those who wanted to run for another year] with some justifications: ‘The chance of finding ourselves by autumn of next year still with only a 3–3.5 sigma effect is not at all negligible. […] At this point, we would have spent all our financial reserves, time and credibility on a very, very risky bet. I have never cared for poker.’”.

After a one-month prolongation, LEP was switched off on 2 November 2000. A few weeks later, the ALEPH team published a paper that concluded [36]: “The observation is consistent with the production of a Higgs boson with a mass near 114 GeV. More data, or results from other experiments, will be needed to determine whether the observations reported in this letter are the result of a statistical fluctuation or the first sign of direct production of the Higgs boson.” In the following years the LEP Working Group for Higgs Boson Searches critically analysed the events of the four Collaborations and combined the data, concluding that (i) the signal for a Higgs with 114 GeV mass had a significance of 1.7 standard deviations and (ii) the Higgs mass had to be larger than 114.4 GeV (95% CL) [37].

The lower limit 114.4 GeV (95% CL) must be considered together with the 152 GeV (95% CL) upper limit quoted at the beginning of this Section, obtained in March 2012 with the best fit of Fig. 8.5. The (about 95% CL) interval 114.4–152 GeV—which, with a cavalier approximation, can be written as MH = (133 ± 10) GeV—brackets the 125–127 GeV value announced at CERN, four months later, by Fabiola Gianotti and Joe Incandela on behalf of the ATLAS and CMS Collaborations. Ten years later, the best value is MH = (125.1 ± 0.2) GeV, which is at the limit of the MSSM parameter space (Fig. 8.12b) and would have been detected in a very long LEP run, if the electron–positron centre-of-mass energy had reached 220 GeV [31].

The four LEP Collaborations have excluded the existence of many other hypothetical particles, but there is space here to mention only a very topical subject. Dark Matter (DM) candidates with mass smaller than MZ/2 can be excluded if the Z couples to them even with a probability 6–7 orders of magnitude smaller than the coupling to neutrinos [38]. Moreover, LEP data on single-photon events with large missing energy constrain the coupling to electrons of DM in the tens-of-GeV mass range, providing limits complementary and competitive to those from direct searches for DM-nucleon scattering and from indirect astrophysical searches [39].

6 Quantum Chromodynamics

Quantum Chromodynamics (QCD)—the SU(3) colour group theory of quarks and gluons [40]—was well-established before LEP, as written by Guido Altarelli in 1989 [41]: “At present, it is fair to say that the experimental support of QCD is quite solid and quantitative. The forthcoming experiments at pp colliders, at LEP, SLC, and HERA will certainly be very important with their great potential for extending the experimental investigation of the validity of QCD.”

The advances brought by LEP I and LEP II to the measurement of the ‘running’ strong coupling αs, which becomes feebler as the energy scale Q increases, are clearly seen by comparing Fig. 8.14b with Fig. 8.14a: in fifteen years the error was reduced by a factor of four. As discussed in the rest of this Section, with better calculations and further data analyses the error eventually shrank by a factor of six to seven.

Fig. 8.14

Energy dependence of the strong coupling αs a before the start-up of LEP (1989 review paper by Altarelli [41]). b after the stop of LEP (2004 review paper by Bethke [42])

When the energy Q increases, the strong coupling αs(Q) ‘runs’ towards smaller values, so that in hadrons the quarks hit by high-energy mediators are ‘asymptotically free’. The running is due to the colour charge of the gluons and can be explained by considering that, in the quantum description of an isolated (electric or colour) ‘charge’, energy can be borrowed for short times to make evanescent ‘virtual’ quanta of the force field and ‘virtual’ particle-antiparticle pairs. These virtual particles disappear rapidly, but others come up, so that around a charge there is a dynamical medium in equilibrium, with heavier particles closer to the charge. In the (Abelian) U(1) gauge theory, a central negative electric charge polarizes this medium: the positively charged virtual particles (mainly virtual positrons) are attracted, the negatively charged ones (virtual electrons) are repelled, while the uncharged virtual photons are unaffected. Moving away from the centre—i.e., probing the source charge with photons of decreasing energy Q—the overall electric charge decreases because of the screening effect due to an ever thicker layer of virtual medium.

Differently, in the (non-Abelian) SU(3) gauge theory, the medium around a colour charge contains quark-antiquark pairs and gluons, which carry a colour charge and produce a strong anti-screening effect, so that the overall colour charge becomes stronger when probed with gluons of smaller and smaller energies Q.

The local slopes of the lines of Fig. 8.14 can be computed with the equations of the renormalization group (considering also small second-order corrections [43]) by making hypotheses on the masses of all the fermions and bosons that appear and disappear in the virtual medium. More precisely, a mediator of energy Q probes the medium down to distances of order ħc/Q, so that, at each energy, only the virtual particles that have mass smaller than Q influence the slope of the line representing αs(Q).
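At leading order, for instance, the solution of the renormalization-group equation for the SU(3) coupling can be written as

$$ \alpha_{{\text{s}}} \left( Q \right) \; = \; \frac{12\pi }{\left( 33 - 2n_{f} \right)\,\ln \left( Q^{2} /\Lambda^{2} \right)}, $$

where \(n_f\) is the number of quark flavours with mass below Q: every time Q crosses a quark-mass threshold, \(n_f\) changes by one unit and so does the local slope of the line representing αs(Q).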

In 1989, with reference to Fig. 8.14, Guido Altarelli wrote [41]: “The prediction for αs to be measured at LEP is very precise: αs(MZ) = 0.110 ± 0.001. Establishing that this prediction is experimentally true would be a very quantitative and accurate test of QCD, conceptually equivalent but more reasonable than trying to see the running in a given experiment.” This is the approach followed in this Section in discussing the very accurate LEP values of αs(Q) obtained by (i) measuring quantities that describe the event shape, (ii) determining the fractions of 3-jet and 4-jet events and (iii) performing fits to electroweak data, such as the one of Fig. 8.5.

The complex processes involved in hadron production are depicted in Fig. 8.15. The first step is the clean creation—through the exchange of a virtual photon and a Z-boson—of a quark-antiquark pair, which is followed, as a second step, by the radiation of gluons and the creation of other pairs. An enormous theoretical effort has gone into the QCD calculation of this second step. The status before LEP was described in a CERN Yellow Report edited by Altarelli et al. [44]. Then the calculations improved from next-to-leading order in perturbation theory (NLO, O(αs²)) to next-to-next-to-leading order (NNLO, O(αs³)) and to resummation in the next-to-leading-logarithmic approximation (NLLA), arriving—for some processes—at N³LO.

In the chain of processes represented in Fig. 8.15a, the energy scale Q decreases and the strong coupling increases, getting close to 1, so that to describe the third step (‘Hadronization’) perturbative computations are not possible and Monte Carlo models must be used. The main ones, graphically described in Fig. 8.15b, are based on two different approaches: ‘String fragmentation’ and ‘Cluster fragmentation’ [45].

Fig. 8.15

a Hadron production is computed by nesting four subprocesses: creation of a quark pair, QCD higher-order calculations, hadronization and decays. (Figures adapted from Phenomenology of Particle Physics I, V. Chiochia, G. Dissertori, Th. Gehrmann, ETH, Zurich.)

Subsequently, the hadrons decay; this fourth step (‘Decay’) is easily computed by using the available experimental data on the various branching ratios.

Considering the event shape, the DELPHI measurements of 18 different parameters are summarized in Fig. 8.16 [32, 46].

Fig. 8.16

In this analysis, 18 event-shape parameters have been considered. As an example, ‘thrust’ is obtained by finding the unit vector that maximizes the sum of the projected momenta
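For reference, the thrust mentioned in the caption is defined as

$$ T \; = \; \max_{\hat{n}} \; \frac{\sum_{i} \left| \vec{p}_{i} \cdot \hat{n} \right|}{\sum_{i} \left| \vec{p}_{i} \right|}, $$

where the sum runs over all measured particles and the maximizing unit vector \(\hat{n}\) defines the thrust axis; T approaches 1 for pencil-like two-jet events and 1/2 for isotropic events.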

In 2006 the LEP QCD Working Group computed the averages of the strong coupling from event shapes measured at LEP I and LEP II [47]. The theoretical uncertainty dominates because of the missing higher-order contributions:

$$ \alpha_{s} \left( {M_{Z} } \right) \, = \, 0.1202 \pm \, 0.0005\left( {{\text{exp}}} \right) \, \pm \, 0.0042\left( {{\text{theo}}} \right). \cdots \left( {\text{LEP QCD WG}} \right) $$
(8.1)

Jet rates are better suited than event-shape parameters for precise determinations of the strong coupling constant because they have smaller theoretical errors. Figure 8.17 shows the OPAL results on the fractions of 2, 3, 4, … jets. The closeness of the dashed and red curves shows that the hadronization corrections are small.

Fig. 8.17

Measured and computed fractions of n-jets plotted versus the resolution parameter ycut [42]. In first order the fraction R3 is proportional to the strong coupling, which in QCD is inversely proportional to the logarithm of the energy Q divided by the strong scale \(\Lambda \)

In the most used ‘Durham clustering algorithm’, to define ycut one considers, for any two particles, the test variable yij, which is, essentially, the square of their relative transverse momentum normalized to the visible energy. If yij is smaller than ycut, particles i and j are combined into a single object by summing the two four-momenta. The combination procedure is repeated until no objects can be further combined; the remaining objects are defined as ‘jets’. The algorithm is such that it can be applied both to the measured tracks of an event and to the partons of a perturbative calculation.
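As an illustration, a minimal sketch of the clustering procedure just described, using the standard Durham definition \(y_{ij} = 2\min (E_{i}^{2} ,E_{j}^{2} )(1 - \cos \theta_{ij} )/E_{\text{vis}}^{2}\) and ignoring all detector effects (the function and variable names are illustrative), could look as follows:

```python
import numpy as np

def durham_cluster(particles, ycut):
    """Minimal sketch of Durham (kT) clustering.
    particles: list of four-momenta [E, px, py, pz]; returns the surviving jets."""
    jets = [np.asarray(p, dtype=float) for p in particles]
    e_vis = sum(j[0] for j in jets)

    def y(a, b):
        # y_ij = 2 min(Ei^2, Ej^2) (1 - cos theta_ij) / E_vis^2
        pa, pb = a[1:], b[1:]
        cos_th = np.dot(pa, pb) / (np.linalg.norm(pa) * np.linalg.norm(pb))
        return 2.0 * min(a[0], b[0]) ** 2 * (1.0 - cos_th) / e_vis ** 2

    while len(jets) > 1:
        # find the pair with the smallest y_ij
        ymin, i, j = min((y(jets[i], jets[j]), i, j)
                         for i in range(len(jets))
                         for j in range(i + 1, len(jets)))
        if ymin > ycut:              # all remaining objects are resolved jets
            break
        jets[i] = jets[i] + jets[j]  # combine by summing the four-momenta
        del jets[j]
    return jets
```

The n-jet fractions of Fig. 8.17 are then simply the fractions of events for which this procedure returns n jets at the chosen ycut.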

Considering now 4-jet events, I first quote the result obtained by OPAL from a detailed study of both 3-jet and 4-jet events [48]:

$$ \alpha_{{\text{s}}} \left( {M_{{\text{Z}}} } \right) \, = \, 0.1177 \pm \, 0.013\left( {{\text{stat}}} \right) \, \pm \, 0.0036\left( {{\text{sys}}} \right).\, \left( {{\text{OPAL}}} \right) $$
(8.2)

Secondly, in Fig. 8.18 the results of an ALEPH analysis of 4-jet events are plotted versus the logarithm of the resolution parameter ycut [49]. As shown in the figure, an intermediate ycut range was used to fit the experimental data and obtain

$$ \alpha_{{\text{s}}} \left( {M_{{\text{Z}}} } \right) \, = \, 0.1170 \, \pm \, 0.0001\left( {{\text{stat}}} \right) \, \pm \, 0.0013\left( {{\text{sys}}} \right).\, \left( {{\text{ALEPH}}} \right) $$
(8.3)
Fig. 8.18

The fraction R4 is a second-order quantity, proportional to αs², and its measurement gives smaller errors on αs(MZ) than the ones obtained from a measurement of R3

The result has a ± 1.1% overall error.

In the decays of tau-leptons, the hadronic branching fractions and the spectral functions are sensitive to the strong coupling. The final ALEPH analysis by Michel Davier and collaborators, published many years after the end of the last LEP run [50], gave αs(mtau) = 0.332 ± 0.005(exp) ± 0.011(theo). By evolving this coupling to the Z-mass, the absolute error on αs is drastically reduced:

$$ \alpha_{{\text{s}}} \left( {M_{{\text{Z}}} } \right) \, = \, 0.1199 \pm 0.0006\left( {{\text{exp}}} \right) \pm 0.0012\left( {{\text{theo}}} \right) \pm 0.0005\left( {{\text{evol}}} \right)_{.}\, \left({{\text{ALEPH}}} \right). $$
(8.4)

A fourth method to obtain αs(MZ) uses the electroweak precision fits discussed in the previous Section. A recent analysis is described in [51].

In 2019 Siggi Bethke summarized the results of the experimental and theoretical work done on all the LEP data in a paper written in memory of Guido Altarelli [52]:

$$ \begin{aligned} {\text{from event shapes and jets:}}\quad & \alpha_{{\text{s}}} \left( {M_{{\text{Z}}} } \right) = 0.1196 \pm 0.0036\quad \left( {\text{in NNLO}} \right) \\ {\text{from tau decays:}}\quad & \alpha_{{\text{s}}} \left( {M_{{\text{Z}}} } \right) = 0.1192 \pm 0.0018\quad \left( {{\text{in N}}^{3} {\text{LO}}} \right) \\ {\text{from electroweak precision fits:}}\quad & \alpha_{{\text{s}}} \left( {M_{{\text{Z}}} } \right) = 0.1196 \pm 0.0030\quad \left( {{\text{in N}}^{3} {\text{LO}}} \right) \\ \end{aligned} $$
(8.5)

Considering the errors as uncorrelated, these measurements can be combined giving a single number, the result of hundreds of experimental and theoretical papers and of about 13 million LEP hadronic events recorded in the years 1989–2000:

$$ \alpha_{{\text{s}}} \left( {M_{{\text{Z}}} } \right) \, = \, 0.1194 \pm \, 0.0014.\,\left( {{\text{LEP result from Ref}}. \, \left[ {{52}} \right]} \right) $$
(8.6)
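The quoted number is consistent with a standard inverse-variance weighting of the three results of Eq. (8.5),

$$ \bar{\alpha }_{{\text{s}}} \; = \; \frac{\sum_{i} \alpha_{{\text{s}},i} /\sigma_{i}^{2} }{\sum_{i} 1/\sigma_{i}^{2} }, \qquad \sigma \; = \; \left( \sum_{i} 1/\sigma_{i}^{2} \right)^{ - 1/2} , $$

which, with the inputs 0.1196 ± 0.0036, 0.1192 ± 0.0018 and 0.1196 ± 0.0030, gives 0.1194 ± 0.0014, reproducing Eq. (8.6) within rounding.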

Figure 8.19 shows the Review of Particle Physics (RPP) most precise data on αs(Q) from all the reactions measured at all accelerators.

Fig. 8.19

RPP summary (2021) of the available measurements of αs(Q) [53]

The detailed analysis of the LEP data, presented in the 2021 RPP [53], gives practically the same result as Eq. (8.6) but with a slightly larger error

$$ \alpha_{{\text{s}}} \left( {M_{{\text{Z}}} } \right) \, = \, 0.1186 \pm \, 0.0016.\, \left( {\text{LEP result from Rev}}.{\text{ Part}}.{\text{ Phys}}.{ 2021} \right) $$
(8.7)

The conclusion is that the final LEP error on αs(MZ) is six to seven times smaller than the error in 1989, before LEP start-up, which was ±0.01, as shown in Fig. 8.14a.

It is interesting to remark that, from Fig. 8.19, the 2021 world average is

$$ \alpha_{{\text{s}}} \left( {M_{{\text{Z}}} } \right) \, = \, 0.1179 \pm 0.0009.\, \left( {{\text{world average }} - {\text{ Rev}}.{\text{ Part}}.{\text{ Phys}}.{ 2}0{21}} \right). $$
(8.8)

which has an error roughly 40% smaller than that of the LEP results of Eqs. (8.6) and (8.7).

It has to be added that, in recent years, the lattice calculations of αs(MZ) have improved so much that the world average quoted in [54]

$$ \alpha_{{\text{s}}} \left( {M_{{\text{Z}}} } \right) \, = \, 0.11803\,^{ + 0.00047}_{ - 0.00068}\, \left( {\text{world average of lattice calculations}} \right) $$
(8.9)

has an error that is about half the one of the measured world average of Eq. 8.8.

The determination of the uncertainties is very delicate, as discussed in a recent paper [55]. At any rate, it is easy to predict that, in a few years, lattice calculations—which use as input the bare quark masses—will produce a value of αs(MZ) with a much smaller error. At that point, the authors of the RPP might decide to use the output of lattice calculations as the recommended value, setting aside all experimentally measured data. After such a decision, the strong sector of the Standard Model will be on a different footing from the electroweak sector, because the parameters of the U(1)×SU(2) group—including the couplings α1 and α2 of the ‘pure’ electromagnetic interaction U(1) and the ‘pure’ weak interaction SU(2)—will be obtained from measurements of the electric charge, the Fermi constant, and the Z and Higgs masses, while the coupling αs of the SU(3) group will be given by lattice calculations, in which the quark masses will have to be introduced by hand.

To conclude this Section, I observe that the running of the strong coupling was well established before the LEP start-up. Instead, no information existed on another phenomenon predicted by QCD: the “running” of the quark masses and, in particular, of the b-quark mass. The after-LEP situation is shown in Fig. 8.20.

Fig. 8.20

Figure adapted from [56] with the data on the b-quark mass of [56,57,58,59,60]

The values of the b-quark mass (at the scale MZ) have been computed by measuring the fractions of 3-jet and 4-jet events that contain b-quarks. The results obtained at LEP and at SLC, using the b-tagging methods described in the previous Section, are plotted in Fig. 8.20 [56,57,58,59,60]. Also in this case, but with less accuracy, the QCD prediction, represented by the yellow band, is experimentally confirmed.

7 Unification of the Forces and the First Microsecond

Well before LEP, theorists and experimentalists were performing global fits to the available experimental data on the properties of the intermediate bosons, parity violation in nuclei and neutrino-quark, neutrino-electron, electron-quark, muon-quark, and electron–positron collisions [61,62,63,64,65,66,67,68,69,70]. The two most active groups were led by John Ellis [63, 65, 67,68,69] and Paul Langacker [61, 62, 66, 70]. I had the occasion to contribute to these developments because—while working at CERN on neutrino physics with the CHARM experiment—I gave a talk on neutral currents at the Neutrino79 Bergen Conference [71]. There I discussed precision fits with Paul Langacker who, two years later, asked me to join his research group.

Fig. 8.21

a Standard Model extrapolations of the values of the couplings (α1, α2, αs) measured in the ’80s below 100 GeV [62]. b The same graph drawn with data collected in the first year of LEP: the SM couplings do not cross as in a Grand Unified Theory (GUT) [72]

In 1987 the group published a review paper featuring Fig. 8.21a [62], in which α1(Q) and α2(Q) are the ‘pure’ electromagnetic coupling and the ‘pure’ weak coupling of the U(1) and SU(2) gauge groups; they are analogous to the SU(3) strong coupling αs(Q). As discussed at the beginning of Sect. 8.6, the couplings depend on the polarization of the medium of virtual particles that surrounds the central charge: \(\alpha_{\text{s}}^{-1}\) increases, in first order, proportionally to the logarithm of Q (as shown in Fig. 8.17), so that in Fig. 8.21 the line is practically straight, while \(\alpha_{1}^{-1}\) decreases, almost logarithmically, with Q. As said at the beginning of Sect. 8.6, at each energy Q the local slopes are determined by the virtual particles that have mass smaller than Q.

Figure 8.21a shows that in 1987 the error bands were large, and the only statement that could be made was that, at the level of 2–2.5 standard deviations, the forces did not unify. Four years later the LEP data changed the situation (Fig. 8.21b): within the Standard Model, unification was excluded at the level of 7 standard deviations.

I was involved in the production of this figure because, in the Fall of 1990, I was invited to give a talk at the Texas-ESO-CERN Conference on Astrophysics that was to be held in December in Brighton. Since I wanted to bring some new perspective to the already much publicized LEP data, I visited John Ellis, who remarked that the improved quality of the data had to have an influence on the unification of the forces. He knew the problem because he had been working on the paper of Ref. [68] in which, by considering the electroweak parameter \(\sin^{2}\theta_{\text{w}}\), it was concluded that the MSSM reproduces the LEP measured value better than the Standard Model.

The day after, I showed the graph of the 1987 paper to Wim de Boer, leader of the Karlsruhe group in DELPHI, and to his PhD student Hermann Fürstenau, who already had codes at hand. In the following weeks he modified them following my proposal (i) to introduce in the calculation of the slopes of the three lines the superparticles of the MSSM as if they had a single ‘effective’ mass MSUSY and (ii) to compute MSUSY and its error by imposing the crossing at a unification point Q = MGUT. The plot, shown in Fig. 8.22a, vividly showed that the LEP data were consistent with the simplest low-energy Grand Unified SUSY Theory. Presented in a preliminary form at the Brighton Conference, the plot appears in its final form in the proceedings under the title ‘LEP, the Laboratory for Electrostrong Physics, one year later’ [73].
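A minimal numerical sketch of that procedure (one-loop running only, a single effective SUSY threshold, and illustrative input values at MZ that are assumptions of this sketch rather than the numbers used in [72]) could look as follows:

```python
import numpy as np

MZ = 91.19  # GeV
# Standard one-loop beta coefficients (GUT-normalised hypercharge)
B_SM = np.array([41 / 10, -19 / 6, -7.0])   # particle content below M_SUSY
B_MSSM = np.array([33 / 5, 1.0, -3.0])      # particle content above M_SUSY
# Illustrative inverse couplings (alpha_1, alpha_2, alpha_s) at M_Z
AINV_MZ = np.array([59.0, 29.6, 8.45])

def inverse_couplings(q, m_susy):
    """One-loop running of the three inverse couplings up to the scale q,
    switching from SM to MSSM beta coefficients at the single threshold m_susy."""
    t_sm = np.log(min(q, m_susy) / MZ)
    t_mssm = np.log(max(q / m_susy, 1.0))
    return AINV_MZ - B_SM * t_sm / (2 * np.pi) - B_MSSM * t_mssm / (2 * np.pi)

def crossing_quality(m_susy):
    """Smallest spread of the three couplings over candidate unification scales."""
    return min(np.ptp(inverse_couplings(q, m_susy)) for q in np.logspace(14, 18, 400))

# Scan the effective SUSY scale and keep the value for which the lines cross best
susy_scan = np.logspace(2, 5, 300)
best = susy_scan[np.argmin([crossing_quality(m) for m in susy_scan])]
print(f"best-crossing effective M_SUSY ~ {best:.0f} GeV")
```

The actual analysis of [72, 76] also propagated the experimental errors on the couplings, which is what produced the MGUT and MSUSY values with uncertainties quoted below.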

Fig. 8.22

Pages copied from John Barrow’s book ‘Cosmic Imagery’ published in 2008 [74]

Fig. 8.23

Fitted value of MSUSY versus αs(MZ) [76]. The band is due to the statistical errors

At the beginning of 1991 we published the two figures in a CERN preprint and in a Physics Letters paper [72]. The reactions were overwhelming—I think because of (i) the visual power of the three converging lines and (ii) the novelty of the fitted masses MGUT and MSUSY with their errors. These reactions were unexpected because among the experts it was known that the recent LEP data were better fitted by the minimal SUSY model than by the Standard Model [68,69,70]. The particle physics community got excited, and we received a lot of calls and emails. Wim de Boer and I were interviewed by daily newspapers and TVs [75]. Soon after the publication, many theoretical articles appeared in scientific journals improving our analysis, criticizing our simple approach, and better considering, for instance, threshold effects at the unification energy. Years later, in 2008, John Barrow in his ‘Cosmic Imagery’ summarized our paper with the two pages of Fig. 8.22 and wrote: “The converging of the running force strengths […] is a simple symbol of the Universe deep unity in face of superficial diversity, which is what we mean by beauty.”

Going back to 1991, in July at the Geneva EPS Conference Wim de Boer presented a new analysis [76] in which we had improved the previous parametric study of the unification parameters [72] by assuming that all the strongly interacting sparticles have mass Msquark and all the non-strongly interacting ones have mass Mslepton.

The results were \(M_{\text{GUT}} = 10^{15.8 \pm 0.3 \pm 0.1}\,{\text{GeV}}\), \(M_{\text{SUSY}} = 10^{3.4 \pm 0.9 \pm 0.4}\,{\text{GeV}}\) and \(\alpha_{\text{s}} \left( M_{\text{GUT}} \right)^{-1} = 26.3 \pm 1.9 \pm 1.0\). The first errors were due to the experimental uncertainties of the time—on αs(MZ) but also on the electroweak parameter \(\sin^{2}\theta_{\text{w}}\) = 0.233 ± 0.008—and the second errors were the estimated uncertainties due to the SUSY mass spectrum.

One year later the experimental errors on the couplings were further reduced so that also MSUSY and MGUT were slightly better determined, as shown in Fig. 8.24b.

Fig. 8.24

Unification plots computed with the best experimental values of summer 1992

Today, with αs(MZ) from Eq. (8.8) (2021) and the latest \(\sin^{2}\theta_{\text{w}}\) error, Fig. 8.23 gives

$$ M_{{{\text{SUSY}}}} = { 10}^{{{2.7} \pm {0.35} \pm {0.4}}} {\text{GeV}}, $$
(8.10)

so that, by combining the errors quadratically, \(M_{\text{SUSY}} = 10^{2.7 \pm 0.5}\,{\text{GeV}}\), which says that, in the framework of this simple model, the spectrum of the supersymmetric particles has an effective mass MSUSY ≈ 500 GeV—the logarithmic centre of the 95% CL range 50–5000 GeV. Such a statement is weak but nontrivial: MSUSY could have come out orders of magnitude larger than 1000 GeV, which is the upper limit for the cancellation of the divergences in the Higgs mass due to the opposite virtual effects of particles and their supersymmetric partners. Moreover, MGUT is well below the Planck mass and its numerical value does not violate proton-decay bounds.

It is worth noting that plots such as the one of Fig. 8.24b indicate that the MSSM may be valid but, of course, many non-supersymmetric unified models can be constructed [77]. The plot of Fig. 8.24b can also be used to describe the phenomena that happened at the beginning of the Universe by (i) reading the x-axis from left to right, (ii) identifying the energy scale Q with the temperature of the primordial medium, and (iii) recalling the simple thermodynamical relation \(Q_{\text{GeV}} \cong T_{\upmu \text{s}}^{-1/2}\), where Tµs is the cosmic time measured in microseconds (so that, for instance, at a cosmic time of 10⁻¹¹ s, i.e. 10⁻⁵ µs, the typical energy is about 300 GeV, the electroweak scale).

The drawing of Fig. 8.25 is a figure that I have been using for many years [78, 79], and it appears in my Springer book ‘Particle accelerators: from Big Bang physics to hadron therapy’ [80]. The grey areas represent three transition regions: (1) the phenomena that originated the electro-strong breaking are unknown; (2) the phase transition at T ≈ 10⁻¹¹ s was caused by the electroweak symmetry breaking due to the Higgs field; (3) the disappearance of the quark-gluon plasma and the appearance of hadrons happened when the increasing strong coupling got close to 1.

Fig. 8.25

Time evolution of the couplings in the framework of the minimal SUSY model

As shown in Fig. 8.25, at the divergence time the inverse of the electromagnetic coupling \(\alpha^{-1}\)—a linear combination of \(\alpha_{1}^{-1}\) and \(\alpha_{2}^{-1}\)—had the value \(\alpha^{-1}\) ≈ 68; at the cosmic time T = 1 µs it was \(\alpha^{-1}\) ≈ 128 and, in the present very cold Universe, it is 137, about twice as large as at the beginning. This evolution is dictated by the masses of all the particles (whatever their nature) that virtually exist around each charge. In this simple GUT model, all the sparticles have masses smaller than 5 TeV and, in the fast running towards their destination, the three couplings traverse a Great Desert.

Going back to my recollections of Sect. 8.1, in a Grand Unified Theory, even without SUSY, the number \(\alpha^{-1}\) ≈ 137—which occupied the mind of Bruno Touschek so much—is not so important because it is, at least in principle, calculable from the initial coupling \(\alpha_{\text{s}} (M_{\text{GUT}})^{-1}\)—the really fundamental quantity in a Grand Unified Theory, which in Fig. 8.25 is 8\(\pi \) [81]—and from the masses of the particles (whatever their nature) in the enormous range that goes from zero to MGUT. In the future such a calculation may be feasible IF a Great Desert occupies the central part of Fig. 8.25.

In the last years the LHC experiments have excluded a large fraction of the MSSM 5-parameter phase space, so that many theorists are convinced that a low-energy minimal SUSY theory is no longer defensible. However, even if the detailed behaviours of the curves of Figs. 8.24b and 8.25 are not supported by the present experimental situation, many experts think that there are still corners of the enormous available phase space for more elaborate versions of a low-energy supersymmetric theory. For instance, John Ellis and collaborators have studied a phenomenological version of the MSSM with 11 parameters (pMSSM11) [82].

Personally, I believe that, even if the simplest MSSM is not realized in Nature, some form of low-energy supersymmetry, with a Great Desert, is still a viable Grand Unified theory, so that plots such as the ones of Figs. 8.24b and 8.25 will be drawn and used also in the future, for both scientific purposes and science popularization.

8 LEP Highlights and Its Legacy to the LHC Experiments

An enormous amount of coordinated experimental and theoretical work has been invested in the writing of the about 2600 papers published by the four LEP collaborations, of which about 15% have been produced after 2004 [83]. The quality and the amount of the results were such that a big effort has also gone into keeping the data available for future analyses [83]. Moreover, the main protagonists of this endeavour wrote hundreds of review papers on the many experimental results; for space reasons, I have discussed only a small personal selection. In many of these reviews—see, for instance, [31, 32, 84]—it has been underlined that the precisions achieved are by far better than what was foreseen before LEP start-up. This is the opinion expressed by Wilbur Venus [32] a few months after the end of the last run.

What did LEP achieve? The new physics initially anticipated (W, Z) was there. Due to the clean initial situation, hermetic detectors, etc., it was probed with unprecedented precision, typically 2 orders of magnitude better than before LEP started (e.g. MZ was measured to ±2.1 MeV, ΓZ to ±2.3 MeV, the number of neutrinos Nν = 3 to 1 part in 350, Rb to ±0.3%, which is 20 times better than initial hopes, and MW to ±39 MeV [in the final analysis ±33 MeV]); mt was predicted correctly, universality was tested at the ∼1 per mille level in electroweak interactions and to 1% in QCD, the cancellation of WW production amplitudes required by gauge theory was tested at the 1% level, and purely weak loop corrections at the ∼10% level.

LEP also brought deeper knowledge of heavy flavours, deeper understanding of QCD, and showed that GUTs work with SUSY but not without. And the new particle searches were remarkably complete and rigorous, leaving very few corners still unexplored (and squeezing minimal SUSY into a very tight one!). But there were no further surprises. Apparently, nature chose to be at her most boring?

Frank Wilczek at the CERN LEPfest of November 2000 said: ‘The historic achievement of LEP has been to establish with an astonishing degree of rigor and beyond all reasonable doubt what will stand for the foreseeable future - perhaps for all time - as the working Theory of Matter […] and to give us some very definite and specific clues for what lies beyond.’

The reasons for the success of the LEP experiments were many, and each LEP physicist has his own list. I like the one proposed by Jürgen Drees [84]:

“Why was LEP so successful? Many fortunate facts had to come together:

  • A highly dedicated machine group responsible for the excellent performance of LEP,

  • low background in the detectors,

  • good performance of all detectors from the pilot run in August 1989 till the end of data taking,

  • effective division of work between CERN and the outside laboratories,

  • close cooperation between the 4 collaborations and, also, between LEP and SLD (without avoiding competition),

  • close cooperation between experiments and the machine group,

  • and, very important, close cooperation with theory groups.”

The LEP detectors developed novel techniques and methods that worked better than initially foreseen, in particular the micro-vertex detectors discussed in Sect. 8.4. As shown in Fig. 8.26, these techniques have been left as a material legacy to the Large Hadron Collider experiments, which used them but had to introduce substantial improvements because the running conditions, the event rates and the backgrounds are harsher than at LEP.

Fig. 8.26

LEP techniques, methods and hardware components used by the LHC experiments

However, the main legacy of LEP to the LHC experiments is intangible: the Standard Model, which was checked from all points of view, in the finest details and with accuracies unforeseen before the start-up of the largest electron–positron collider ever built—a collider that has its origin in the minuscule ADA ring, built sixty years ago by Bruno Touschek and collaborators in less than one year. The LEP Standard Model legacy was accompanied by sophisticated software codes, describing hadronization processes and hadron decays, which are essential for computing at the LHC the signatures of novel phenomena and their backgrounds.