Abstract
The measurement of the luminosity recorded by the CMS detector installed at LHC interaction point 5, using proton–proton collisions at \(\sqrt{s}=13\,{\text {TeV}} \) in 2015 and 2016, is reported. The absolute luminosity scale is measured for individual bunch crossings using beam-separation scans (the van der Meer method), with a relative precision of 1.3 and 1.0% in 2015 and 2016, respectively. The dominant sources of uncertainty are related to residual differences between the measured beam positions and the ones provided by the operational settings of the LHC magnets, the factorizability of the proton bunch spatial density functions in the coordinates transverse to the beam direction, and the modeling of the effect of electromagnetic interactions among protons in the colliding bunches. When applying the van der Meer calibration to the entire run periods, the integrated luminosities when CMS was fully operational are 2.27 and 36.3 \(\,\text {fb}^{-1}\) in 2015 and 2016, with a relative precision of 1.6 and 1.2%, respectively. These are among the most precise luminosity measurements at bunched-beam hadron colliders.
1 Introduction
Luminosity, \(\mathcal {L}\), is a key parameter at particle colliders. Along with the energy available in the center-of-mass system, it is one of the two main figures of merit that quantify the potential for delivering large data samples and producing novel massive particles. The instantaneous luminosity \(\mathcal {L} (t)\) is the process-independent ratio of the rate R(t) of events produced per unit of time \({\text {d}}t\) to the cross section \(\sigma \) for a given process. The fundamental limitations on precise predictions for these cross sections (e.g., from quantum chromodynamics) motivate the techniques used for luminosity measurements at various types of colliders. The precise determination of the integrated luminosity, \(\int \mathcal {L} (t){\text {d}}t\), has proven particularly challenging at hadron colliders, with an achieved precision typically ranging from 1 to 15% [1]. The “precision frontier” target of 1% [2] does not reflect a fundamental limitation, but rather results from a variety of uncorrelated sources of systematic uncertainty with typical magnitudes of 0.1–0.5%. In this paper, we report the precise determination of the absolute luminosity at the CERN LHC interaction point (IP) 5 with the CMS detector [3], using data from proton–proton (\({\text {p}}{\text {p}}\) ) collisions at \(\sqrt{s}=13\,{\text {TeV}} \) collected in 2015 and 2016.
A central component of the physics program at the LHC consists of measurements that can precisely test the validity of standard model (SM) predictions, e.g., cross sections for the production of electroweak gauge bosons [4, 5] or top quark pairs [6, 7]. A good understanding of the luminosity is critical to minimize the systematic uncertainty in these measurements. The uncertainty in the luminosity measurement is often the dominant systematic uncertainty [5,6,7], motivating an effort to improve its precision.
Stable luminosity information is also crucial to the efforts of the LHC operators to optimize the performance of the accelerator [8, 9]. In this context, it is important to provide luminosity information in real time at a high enough frequency to facilitate rapid optimization. The ability to measure the luminosity of individual bunch crossings (bunch-by-bunch luminosity) is also necessary so that the distribution of the number of collisions per crossing is known to the experiments. This information is important both for preparing simulations and for optimizing trigger thresholds to keep event-recording rates near data acquisition design targets.
An absolute luminosity scale is obtained with good accuracy using the direct method of van der Meer (vdM) scans [10,11,12,13]. In these scans, the transverse separation of the two beams is varied over time and the resulting rate of some physical observables (e.g., number of charged particles passing through a silicon detector or energy deposited in a calorimeter) as a function of separation is used to extract the effective beam size. The absolute luminosity at one point in time can then be calculated from measurable beam parameters – namely, the transverse spatial widths of the overlap of the beams and the number of protons in each beam. To achieve the desired accuracy in the absolute luminosity calibration, the vdM scans are typically performed under carefully tailored conditions and with beam parameters optimized for that purpose [1], in conjunction with processing the input from accelerator instrumentation and multiple detector systems. A relative normalization method is then needed to transfer the absolute luminosity calibration to the complete data-taking period. To this end, for a given subdetector, the cross section \(\sigma _{\mathrm {vis}}\) in the “visible” phase space region, defined by its acceptance, is measured for several observables. The integrated luminosity is obtained from the \(\sigma _{\mathrm {vis}}\)-calibrated counts accumulated for a given period of data taking. Changes in the detector response over time can result in variations in \(\sigma _{\mathrm {vis}}\), which could appear as nonlinearity and/or long-term instability in the measured luminosity.
To address these challenges, CMS employs a multifaceted approach, in which measurements from various individual subsystems are used to produce a final luminosity value with high precision, good linearity, and stability. Several methods and independent detectors are used to provide redundancy and to minimize any bias originating from detector effects.
The LHC orbit is divided into a total of 3564 time windows 25\(\text { ns}\) long (bunch crossing slots), each of which can potentially contain a colliding bunch. However, the total number of filled bunch crossings is limited by design to a maximum of 2808 by the choice of the beam production scheme in the injectors and constraints from the rise times of injection and extraction kicker magnets in the various accelerators involved [14]. Furthermore, the length of the injections in 2015 and 2016 was limited by the maximal tolerable heat load in the arcs due to electron clouds (2015) and safety considerations in the LHC injection system with very luminous beams (2016) [15]. The bunch crossings are numbered with an identification number (BCID) in the range 1–3564. The specific pattern of filled and empty bunch crossings used in a single fill is known as the “filling scheme”; a typical filling scheme is composed of long strings of consecutive bunches, up to 72 bunches long, called a “train”, with the individual trains separated by gaps of varying lengths. Generally, filling schemes also include some number of noncolliding bunch crossings, where one beam is filled but the other remains empty; these can be used to study effects from beam-induced background. The two LHC beams are designated “beam 1” and “beam 2”, where beam 1 (beam 2) circulates in the clockwise (counterclockwise) direction, as viewed from above [14].
For Run 2 of the LHC, the period from 2015 to 2018 featuring \({\text {p}}{\text {p}}\) collisions at \(\sqrt{s}=13\,{\text {TeV}} \), the CMS luminosity systems were significantly upgraded and expanded. We report the results for the first two years [16], in which the operational conditions featured a wide range in the number of colliding bunches \(n_{\mathrm {b}}\) and instantaneous luminosity, reaching maxima of 2232 and 2208 colliding bunches, and \(0.5\times \text {10}^\text {34}\) and \(1.5\times \text {10}^\text {34}\,\text {cm}^{-2}\,\text {s}^{-1} \), in 2015 and 2016, respectively. In the majority of \({\text {p}}{\text {p}}\) LHC fills in Run 2, the bunches are spaced 25\(\text { ns}\) apart. The initial Run 2 data set delivered with a bunch spacing of 50\(\text { ns}\) is negligibly small [17], and hence not included in this paper. Here, “pileup” refers to the total number of \({\text {p}}{\text {p}}\) interactions in a single bunch crossing, and “out-of-time pileup” refers to additional \({\text {p}}{\text {p}}\) collisions in nearby bunches. For a total inelastic \({\text {p}}{\text {p}}\) cross section of 80\(\text { mb}\) [18, 19], the pileup during nominal physics data-taking conditions in 2015 (2016) extended from 5 to 35 (10 to 50) with an expected average (\(\mu \)) of about 14 (27) \({\text {p}}{\text {p}}\) interactions.
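The quoted pileup values follow from the relation \(\mu = \sigma \mathcal {L} _{\mathrm {b}}/\nu _r\) used later in Sect. 3, with \(\nu _r\) the LHC revolution frequency. A minimal sketch (the helper function `pileup` is ours, not part of any CMS software):

```python
# Sketch: average pileup from instantaneous luminosity.
# All numerical inputs are taken from the text.
SIGMA_INEL_MB = 80.0   # total inelastic pp cross section at 13 TeV, mb
NU_R_HZ = 11245.6      # LHC revolution frequency, Hz (Sect. 3)
MB_TO_CM2 = 1e-27      # 1 mb = 1e-27 cm^2

def pileup(lumi_cm2s, n_bunches):
    """Average number of pp interactions per crossing for one colliding bunch."""
    lumi_per_bunch = lumi_cm2s / n_bunches
    return lumi_per_bunch * SIGMA_INEL_MB * MB_TO_CM2 / NU_R_HZ

# 2016 peak: 1.5e34 cm^-2 s^-1 spread over 2208 colliding bunches,
# consistent with the quoted upper end of the pileup range (~50).
print(round(pileup(1.5e34, 2208), 1))
```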
This paper is structured as follows. In Sect. 2 the CMS detector is described with special emphasis on the subdetectors used to derive observables for luminosity estimation, and in Sect. 3 we review the methods to obtain the luminosity information. Section 4 describes the vdM scan calibration method and the associated systematic uncertainty. Sections 5 and 6 outline the corrections applied to the luminosity algorithms and their resulting performance, respectively. Finally, Sect. 7 outlines the sources of corrections and the associated systematic uncertainties, and presents the main results. A summary is given in Sect. 8.
2 The CMS detector
The CMS detector is a multipurpose apparatus designed to study high-\(p_{\mathrm {T}}\) physics processes in \({\text {p}}{\text {p}}\) collisions, as well as a broad range of phenomena in heavy ion collisions. The central element of CMS is a 3.8\(\text { T}\) superconducting solenoid, 13\(\text { m}\) in length and 6\(\text { m}\) in diameter. Within the solenoid volume are – in order of increasing radius from the beam pipe – a silicon pixel and strip tracker of high granularity for measuring charged particles up to pseudorapidity (\(\eta \)) of \(\pm 2.5\); a lead tungstate crystal electromagnetic calorimeter for measurements of the energy of photons, electrons, and the electromagnetic component of hadronic showers (“jets”); and a brass and scintillator hadron calorimeter, each composed of a barrel and two endcap sections, for jet energy measurements. The forward hadron (HF) calorimeter uses steel as an absorber and quartz fibers as the sensitive material. The two halves of the HF are located 11.2\(\text { m}\) from the interaction region, one on each end, and together they provide coverage in the range \(3.0< |\eta | < 5.2\), hence extending the pseudorapidity coverage provided by the barrel and endcap detectors. Outside the magnet, and within the range \(|\eta | < 2.4\), is the muon system [20], which is embedded in the iron flux-return yoke. It is composed of detection planes made using three technologies: drift tubes (DTs) in the barrel, cathode strip chambers (CSCs) in the endcaps, and resistive plate chambers (RPCs) both in the barrel and in the endcaps.
Events of interest for physics are selected using a two-tiered trigger system [21]. The first-level trigger, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select events at a rate of around 100\(\text { kHz}\). The second level, known as the high-level trigger, consists of a farm of processors running a version of the full event reconstruction software optimized for fast processing, and reduces the event rate to around 1\(\text { kHz}\) before data storage.
Several subdetectors, although not part of the main CMS data acquisition (DAQ) system, provide additional inputs (e.g., binary logic signals) to the triggering system. The two beam monitors closest to the IP for each LHC experiment, the Beam Pick-up Timing for eXperiments (BPTX) detectors [22], are reserved for timing measurements. They are located on either side of IP 5 at a distance of approximately 175\(\text { m}\). The BPTX system can be used to provide a set of zero-bias events (i.e., events from nominally colliding bunch crossings but without a requirement for any specific activity in the event) by requiring a coincidence between the two BPTX sides. To suppress noise in triggers with high background, the presence of this coincidence is typically required [21].
The knowledge of the integrated luminosity requires stability over long periods of time, and hence benefits greatly from redundant measurements whose combination can lead to an improved precision. To that end, several upgrades were completed during the first LHC long shutdown (LS1), the transition period between LHC Run 1 (2009–2012) and Run 2. The main luminosity subdetectors (luminometers) in Run 1 were the silicon pixel detector and the HF. The HF back-end electronics, which were upgraded during LS1, consist of two independent readout systems: a primary readout over optical links for physics data taking, and a secondary readout using Ethernet links, explicitly reserved for luminosity data. In addition, two other luminometers were designed, constructed, and commissioned: the Pixel Luminosity Telescope (PLT) [23] and the Fast Beam Conditions Monitor (BCM1F) [24]. Finally, a separate DAQ system was developed that is independent of the central DAQ system [21, 25], so that HF, PLT, and BCM1F data, as well as LHC beam-related data, are collected and stored in a time- rather than event-based manner.
The luminometers, along with the accompanying algorithms used to estimate the instantaneous luminosity in Run 2, are briefly described in the following. Figure 1 shows an overview of the position of these luminometers within CMS. A more detailed description of the rest of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, is reported in Ref. [3].
2.1 Silicon pixel cluster counting
The pixel cluster counting (PCC) method, which uses the mean number of pixel clusters in the silicon pixel detector, exploits the very large number of pixels in the inner part of the CMS tracking system. The number of pixels in 2015–2016 was about \(7\times \text {10}^\text {7}\), which means that the probability of a given pixel being hit by two different charged particles from the same bunch crossing is exceedingly small. The mean number of pixel clusters in simulated zero-bias events is of the order of 100 per \({\text {p}}{\text {p}}\) collision, although the precise mean depends on the fraction of the detector used for a given data set. Assuming each pixel cluster comprises five pixels and using a typical pileup for the 2016 running of \(\mu = 27\), the fraction of pixels hit in a typical bunch crossing is roughly:
\[ \frac{27 \times 100 \times 5}{7\times 10^{7}} \approx 2\times 10^{-4}. \qquad (1) \]
The probability of accidental overlap between pixel clusters is correspondingly small, and, as a consequence, the number of pixel clusters per bunch crossing is linearly dependent on pileup and is therefore an accurate measure of the instantaneous luminosity. Simulated \({\text {p}}{\text {p}}\) collision events that contain only in-time pileup and detector noise are generated using pythia version 8.223 [19] with the CUETP8M1 [18, 26] tune. The simulated events include a full simulation of the CMS detector response based on Geant4 [27]. For the sake of simplicity, the number of pileup interactions present in each simulated event is randomly generated from a Poisson distribution with \(\mu \) up to 50. Figure 2 shows a representative PCC distribution at \(\mu =45\) and the average PCC as a function of \(\mu \). The latter distribution is fitted with a first-order polynomial, assuming no correlations among different values of \(\mu \). Good agreement is seen based on the estimated goodness-of-fit \(\chi ^2\) per degree of freedom (dof) value of about 0.5 [28], indicating linearity under simulated conditions.
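The linearity check described above can be illustrated with a toy version of the study: Poisson pileup, a Poisson number of clusters per interaction (the value of 100 clusters per interaction is taken from the text; everything else is an illustrative stand-in for the full Geant4-based simulation), and a first-order polynomial fit:

```python
import numpy as np

rng = np.random.default_rng(7)
CLUSTERS_PER_INTERACTION = 100.0  # mean pixel clusters per pp interaction (text)

def mean_pcc(mu, n_events=20000):
    """Average pixel-cluster count in toy events with Poisson pileup of mean mu."""
    n_interactions = rng.poisson(mu, size=n_events)           # pileup per event
    clusters = rng.poisson(CLUSTERS_PER_INTERACTION * n_interactions)
    return clusters.mean()

mus = np.arange(1.0, 51.0, 5.0)
avg_pcc = np.array([mean_pcc(mu) for mu in mus])

# First-order polynomial fit, as in the text; a linear response means the
# slope is ~CLUSTERS_PER_INTERACTION and the intercept is compatible with 0.
slope, intercept = np.polyfit(mus, avg_pcc, 1)
print(f"slope = {slope:.1f}, intercept = {intercept:.1f}")
```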
Only the components (modules) of the pixel subdetector that are stable for the entire period of data taking are used for the PCC rate measurements, excluding pixel modules known to be defective or significantly affected by the limited size of the readout buffer [29]. The measured \(\sigma _{\mathrm {vis}}\) for PCC, \(\sigma _{\mathrm {vis}}^{\text {PCC}}\), therefore depends on the data-taking period (i.e., one calibration per year).
2.2 Primary vertex counting
The primary vertex counting (PVC) method uses the vertices that have been reconstructed using the tracks in the CMS detector. For this method, a good primary vertex is defined to be one with 11 or more tracks. This requirement is sufficient to suppress spurious vertices [29], and results in better vertex resolution.
The PVC method is simple and robust, but suffers from mild nonlinearity effects when there are many collisions in a single bunch crossing. There are two competing effects. In one effect, primary vertices from two collisions occurring close to one another in space are merged, leading to an undercounting of vertices. In the other effect, the very large numbers of tracks associated with numerous collisions can produce spurious vertices, leading to overcounting. The precision with which these effects are understood falls short of the \(\approx 1\%\) level needed for luminosity studies. However, during vdM scans these effects are minimal because of the very low pileup, and so PVC is very useful as a validation tool for the vdM analysis in the measurement of beam-dependent parameters.
2.3 Forward hadron calorimeter
The HF luminosity measurement uses the separate readout described above, so the measurement can be performed at the full 40\(\text { MHz}\) bunch crossing rate. The back-end electronics upgrade during LS1 added new electronics using field-programmable gate array (FPGA) technology such that several features of the readout were separately programmable for luminosity histogramming, i.e., identifying and counting the readout channels. Although the whole HF is capable of being read out for luminosity use, only the two outer rings in \(\eta \) are used to ensure uniform occupancy and minimize minor nonlinearities expected from simulation.
The computation of the HF observable is based on the occupancy method (HFOC). In this method, the fraction of channel measurements above an analog-to-digital converter (ADC) threshold is used for each bunch slot in a configurable time window. The ADC threshold is set as low as possible while remaining high enough to reject most noise. Both the ADC threshold and the integration time of the histograms between readouts are configurable, but they were fixed during data taking in 2015 and 2016. The number of valid measurements is also stored, so the fraction of events with hits above threshold can be computed.
2.4 Pixel Luminosity Telescope
The PLT is a dedicated system for measuring luminosity using silicon pixel sensors, installed in Run 2, at the beginning of 2015. There are a total of 48 sensors arranged into 16 “telescopes”, eight at either end of CMS outside the pixel endcap. Each telescope contains three sensor planes which are arranged in a triplet that faces the IP. The sensors measure \(8{\times }8 \text { mm} ^2\), divided into 80 rows and 52 columns, although only the central region of the sensors is used to reduce the contribution from background. The PLT measures the rate of triple coincidences, where a hit is observed in all three planes, typically corresponding to a track from a particle originating at the IP. The overall mean rate for PLT is estimated using the fraction of events where no triple coincidences are observed (as described in Sect. 3) in order to avoid potential systematic effects from overlapping tracks being counted as a single hit.
2.5 Fast beam conditions monitor
The BCM1F measures luminosity and beam-induced background separately. It consists of a total of 24 sensors mounted on the same carriage as the PLT. Single-crystal diamond sensors are used with split-pad metallization. Each sensor has two readout channels to keep the overall occupancy low, given the experimental conditions in Run 2. The BCM1F features a fast readout with 6.25\(\text { ns}\) time resolution. The precise time measurement allows hits from collision products to be separated from beam-induced background hits, since at the BCM1F location, 1.8 m from the center of CMS, the incoming background is separated in time from the outgoing collision products.
2.6 Drift tube muon detector
The luminosity measurement based on the DT muon detector [20] relies on an efficient trigger on a low-background physics object: muons produced in the CMS barrel. Muon track segments from barrel muon DT stations are sent every bunch crossing to track finder hardware, where tracks are built and later used to generate first-level triggers. The number of tracks in time windows of approximately 23\(\text { s}\) is read out and stored in a database. These data are used to estimate luminosity. The rate of muons in the DTs is significantly lower than the rate for most other observables from other luminometers. Thus, there are not enough muon tracks during the vdM scans to provide a precise measurement of \(\sigma _{\mathrm {vis}}\), and so the system must be calibrated to the normalized PCC luminosity measurement. On the other hand, the muon candidate rate has been observed to be linear with luminosity and rather stable over time. The luminosity data of this system are integrated over all bunches.
2.7 Radiation monitoring system for the environment and safety
The Radiation Monitoring System for the Environment and Safety (RAMSES) is a monitoring subsystem of the unified supervisory CERN system [30, 31]. Ten ionization chambers filled with air at atmospheric pressure are installed in the CMS experimental cavern and used as monitors. They are sensitive to ionizing radiation and monitor the ambient dose equivalent rate, generating alarms and interlocks to ensure the safety of personnel. This system is maintained and calibrated by the LHC radiation protection group.
While not designed as a luminometer, the two chambers with the highest rates (designated PMIL55X14 and PMIL55X15) have been used to produce a luminosity measurement with good linearity and stability over time. However, similarly to the DT luminosity measurement, the overall rates are too low for bunch-by-bunch measurements or extracting an absolute calibration during vdM scans. The RAMSES luminosity is thus calibrated to the normalized PCC luminosity measurement and is used as an additional measurement for assessing the luminometer stability with time.
3 Luminosity determination algorithms
Each bunch crossing gives rise to a certain number of \({\text {p}}{\text {p}}\) interactions. In a given luminometer, each interaction results in some number of observables (e.g., hits, tracks, or clusters). If one averages over several unbiased measurements, the mean number of observables is
\[ \langle N_{\mathrm {observables}} \rangle = \langle N_{\mathrm {observables/interaction}} \rangle \, \mu , \qquad (2) \]
where the average number of interactions per bunch crossing is denoted by \(\mu \), in keeping with the Poisson nature of the underlying probability distribution. Typically these observables are averaged over seconds or tens of seconds.
To measure the instantaneous luminosity, we use the fact that \(\mu \) is proportional to the single-bunch crossing instantaneous luminosity \(\mathcal {L} _{\mathrm {b}}\) via:
\[ \mu = \frac{\sigma \, \mathcal {L} _{\mathrm {b}}}{\nu _r}, \qquad (3) \]
where \(\nu _r =11\,245.6\text { Hz} \) is the LHC revolution frequency during collisions, and \(\sigma \) is the total interaction cross section. At the LHC, \(\mathcal {L} _{\mathrm {b}}\) is typically expressed in units of \(\text {Hz}/\upmu \text {b} \equiv 10^{30}\,\text {cm}^{-2}\,\text {s}^{-1} \).
Two algorithms have been developed for extracting the instantaneous luminosity. One method is rate-scaling, where the raw rate of observables is scaled to the luminosity with calibration constants. Rearranging Eqs. (2) and (3), one can estimate the instantaneous luminosity using the average number of observables at a given time:
\[ \mathcal {L} _{\mathrm {b}} = \frac{\mu \, \nu _r}{\sigma } = \frac{\langle N_{\mathrm {observables}} \rangle \, \nu _r}{\sigma _{\mathrm {vis}}}, \quad \text {where } \sigma _{\mathrm {vis}} = \sigma \, \langle N_{\mathrm {observables/interaction}} \rangle . \qquad (4) \]
Luminosity is estimated from PCC, PVC, DT, and RAMSES data using the rate-scaling algorithm.
The second method (zero counting) uses the average fraction of bunch crossings where no observables in a detector are produced. This zero fraction is then used to infer the mean number of observables per bunch crossing. The principal advantage of the zero-counting method is that it is not affected by cases where two or more separate signals overlap in the detector and produce only one reconstructed observable.
If the probability of no observables in a single collision is p, then the probability of no observables in a bunch crossing with k interactions is simply \(p^k\). Averaged over a large number of bunch crossings, with the number of interactions per bunch crossing distributed according to a Poisson distribution of mean \(\mu \), the expected fraction of events with zero observables recorded, \(\langle f_0 \rangle \), can be expressed as:
\[ \langle f_0 \rangle = \sum _{k=0}^{\infty } \frac{\mathrm {e}^{-\mu } \mu ^{k}}{k!} \, p^{k} = \mathrm {e}^{-\mu (1-p)}. \qquad (5) \]
The logarithm of Eq. (5) is proportional to the mean number of \({\text {p}}{\text {p}}\) interactions per bunch crossing, and hence to \(\mathcal {L} _{\mathrm {b}}\) according to Eq. (3):
\[ \mathcal {L} _{\mathrm {b}} = \frac{-\ln \langle f_0 \rangle \, \nu _r}{\sigma _{\mathrm {vis}}}, \quad \text {where } \sigma _{\mathrm {vis}} = \sigma \, (1-p). \qquad (6) \]
The actual value of p does not need to be known beforehand, since it is effectively absorbed in \(\sigma _{\mathrm {vis}}\), although it could be extracted from the measured \(\sigma _{\mathrm {vis}}\) value. The raw inputs from HFOC, PLT, and BCM1F are converted to luminosity using the zero-counting method.
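The zero-counting relation can be checked with a toy simulation (the values of \(\mu \) and p below are arbitrary illustrative choices):

```python
import math
import numpy as np

rng = np.random.default_rng(42)

def measured_mu_vis(mu, p, n_crossings=200000):
    """Recover mu_vis = mu*(1 - p) from the fraction of 'empty' crossings.

    p is the probability that a single interaction leaves no observable in
    the detector; it never needs to be known, being absorbed into sigma_vis.
    """
    k = rng.poisson(mu, size=n_crossings)       # interactions per crossing
    is_empty = rng.random(n_crossings) < p**k   # P(zero observables | k) = p^k
    return -math.log(is_empty.mean())           # -ln<f0> -> mu*(1 - p)

mu_vis = measured_mu_vis(mu=2.0, p=0.3)
print(round(mu_vis, 2))  # close to 2.0 * (1 - 0.3) = 1.4
```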
4 Absolute luminosity calibration
Any luminometer requires an externally determined absolute calibration. Approximate \(\sigma _{\mathrm {vis}}\) values can be obtained using Monte Carlo (MC) simulation, but these ultimately rely on theory, i.e., the inelastic \({\text {p}}{\text {p}}\) cross section, and are not expected to be reliable at the percent level that represents the target accuracy for the CMS luminosity measurement. At the LHC, the precision of theoretical predictions for SM processes is typically limited by the knowledge of the parton distribution functions in the proton. Although methods independent of theoretical assumptions have been proposed at the expense of introducing correlations between low- and high-\(\mu \) data-taking periods [32], a more precise and purely experimental method to determine the luminosity is based on the vdM scan technique, which is used in this paper.
Beam-separation scans are therefore performed to obtain calibrated \(\sigma _{\mathrm {vis}}\) for the luminosity measurement. These were pioneered by Simon van der Meer at the ISR [10], extended by Carlo Rubbia to the case of a collider with bunched beams [11], and have been extensively used by all four major LHC experiments [12, 13]. The key principle of the vdM scan method is to infer the beam-overlap integral from the rates measured at different beam separations – provided the beam displacements are calibrated as absolute distances – as opposed to measuring the bunch density functions directly. The basic formalism is described in the following.
4.1 The van der Meer method
The instantaneous luminosity for a single colliding bunch pair in a colliding-beam accelerator is given by:
\[ \mathcal {L} _{\mathrm {b}} = \frac{\nu _r \, n_1 n_2}{A_{\mathrm {eff}}}, \qquad (7) \]
where \(n_1\) and \(n_2\) are the numbers of particles in each of the two bunches, and \(A_{\mathrm {eff}}\) is the effective area of overlap between the bunches. In general, each of the bunches will be distributed in the plane transverse to the beam direction, in which case \(1/A_{\mathrm {eff}} \) can be replaced by an overlap integral of the bunch densities, i.e.,
\[ \frac{1}{A_{\mathrm {eff}}} = \int \rho _1(x,y) \, \rho _2(x,y) \, {\text {d}}x \, {\text {d}}y, \qquad (8) \]
where x and y represent the horizontal and vertical coordinates in the plane transverse to the beams, and \(\rho _1\) and \(\rho _2\) are the normalized two-dimensional density distributions for the two bunches. Here, we have integrated over time and the longitudinal coordinate z.
If one assumes that the bunch profiles can be factorized into terms depending only on x and y [10, 11], then \(\rho _i\) can be written as the product of one-dimensional density functions of the form \(\rho _i(x,y) = f_i(x) g_i(y)\) (\(i=1,2\)), and \(1/A_{\mathrm {eff}} \) can be written
\[ \frac{1}{A_{\mathrm {eff}}} = \int f_1(x) f_2(x) \, {\text {d}}x \, \int g_1(y) g_2(y) \, {\text {d}}y \equiv \frac{1}{W_{\mathrm {eff}} \, H_{\mathrm {eff}}}, \qquad (9) \]
where \(W_{\mathrm {eff}}\) and \(H_{\mathrm {eff}}\) are the effective width and the effective height of the luminous region. For the ideal case of Gaussian-distributed bunches with the same width in both beams and undergoing head-on collisions, Eq. (8) reduces to:
\[ \frac{1}{A_{\mathrm {eff}}} = \frac{1}{4\pi \sigma _x \sigma _y}, \qquad (10) \]
where \(\sigma _x\) and \(\sigma _y\) are the root-mean-square (RMS) widths of the horizontal and vertical bunch profiles in either beam, respectively. In the case of round beams, \(\sigma _x=\sigma _y \equiv \sigma _{\mathrm {b}} \equiv \sqrt{\smash [b]{\epsilon _{\mathrm {N}} \beta ^{*}/\gamma }}\), where \(\epsilon _{\mathrm {N}}\) is the so-called normalized emittance, \(\gamma \) the relativistic Lorentz factor, and \(\beta ^{*} \) corresponds to the value of the optical function \(\beta \) at the IP [33].
We designate the luminosity when the beams are displaced with respect to each other by an amount w in the x direction, or an amount h in the y direction, as \(\mathcal {L} (w,h)\). As shown in Ref. [10], when a separation scan is performed in the x direction, in which w is varied in a systematic way from \(-\infty \) to \(+\infty \), the effective width can be determined from:
\[ W_{\mathrm {eff}} = \frac{\int \mathcal {L} (w,0) \, {\text {d}}w}{\mathcal {L} (0,0)} = \frac{1}{\int f_1(x) f_2(x) \, {\text {d}}x}, \qquad (11) \]
where common normalization factors have been canceled in the second step. Similarly, if a scan is performed in the y direction, the effective beam-overlap height is given by
\[ H_{\mathrm {eff}} = \frac{\int \mathcal {L} (0,h) \, {\text {d}}h}{\mathcal {L} (0,0)} = \frac{1}{\int g_1(y) g_2(y) \, {\text {d}}y}. \qquad (12) \]
For Gaussian-distributed bunches, the resulting scan curves, \(\mathcal {L} (w,0)\) and \(\mathcal {L} (0,h)\), are also Gaussian with RMS widths of \(\varSigma _x = W_{\mathrm {eff}}/\sqrt{2\pi } = \sqrt{2}\sigma _x\) and \(\varSigma _y = H_{\mathrm {eff}}/\sqrt{2\pi } = \sqrt{2}\sigma _y\), yielding
\[ \mathcal {L} (0,0) = \frac{n_1 n_2 \nu _r}{2\pi \varSigma _x \varSigma _y}. \qquad (13) \]
Equations (11) and (12) are quite general, and do not depend on the assumption of Gaussian-distributed bunches. Indeed, it is frequently the case that simple Gaussians do not provide an adequate description of the scan-curve data. In such cases, we use double-Gaussian functions of the form
\[ \mathcal {L} (w,0) \propto \frac{\epsilon _x}{\sigma _{1x}} \exp \left( -\frac{w^2}{2\sigma _{1x}^2} \right) + \frac{1-\epsilon _x}{\sigma _{2x}} \exp \left( -\frac{w^2}{2\sigma _{2x}^2} \right) , \qquad (14) \]
where \(\epsilon _{x}\) is the fraction of the Gaussian with width \(\sigma _{1x}\). Normally the Gaussian with the smaller width \(\sigma _{1x}\) is considered the core Gaussian, while the Gaussian with the larger width \(\sigma _{2x}\) is used to fit the tails of the scan curve. Similar relations apply for the y coordinate. The effective value of \(\varSigma _i\) (\(i=x,y\)) is then given by
\[ \varSigma _i = \frac{\sigma _{1i} \sigma _{2i}}{\epsilon _i \sigma _{2i} + (1-\epsilon _i) \sigma _{1i}}. \qquad (15) \]
To calibrate a given luminosity algorithm, the absolute luminosity computed from beam parameters via Eq. (13) is used in conjunction with Eq. (3) to obtain
\[ \sigma _{\mathrm {vis}} = \mu _{\mathrm {vis}} \, \frac{2\pi \varSigma _x \varSigma _y}{n_1 n_2}, \qquad (16) \]
where \(\mu _{\mathrm {vis}}\) is the visible interaction rate. In this analysis, \(\mu _{\mathrm {vis}}\) is taken as the arithmetic mean of the peak values from \(\mathcal {L} (w,0)\) and \(\mathcal {L} (0,h)\) in scans that are performed sufficiently close in time to minimize the impact of varying bunch distributions over the course of a fill. Equation (16) therefore provides a direct calibration of the visible cross section for each algorithm in terms of \(\varSigma _x \varSigma _y\) and \(n_1 n_2\).
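A toy x–y scan pair illustrates the chain from scan curves to \(\sigma _{\mathrm {vis}}\) via Eq. (16). The Gaussian curves, bunch intensities, and peak rate below are illustrative numbers matching the vdM conditions described in Sect. 4.2, and the width estimator is a simplified stand-in for the single- or double-Gaussian fits used in the actual analysis:

```python
import numpy as np

def effective_sigma(sep_um, rates):
    """Scan-curve width Sigma = area / (sqrt(2*pi) * peak).

    Equals the RMS width for a Gaussian scan curve; the real analysis
    instead fits parametric shapes to the measured rates.
    """
    step = sep_um[1] - sep_um[0]               # uniform scan steps assumed
    area = np.sum(rates) * step                # simple Riemann-sum integral
    return area / (np.sqrt(2.0 * np.pi) * rates.max())

# Toy Gaussian scan curves: Sigma = sqrt(2) * 100 um for 100 um bunches.
sep = np.linspace(-600.0, 600.0, 25)           # 25 scan steps (Sect. 4.2)
sigma_true = np.sqrt(2.0) * 100.0
mu_peak = 0.6                                  # visible rate at zero separation
rate_x = mu_peak * np.exp(-0.5 * (sep / sigma_true) ** 2)
rate_y = rate_x.copy()                         # same shape for the y scan here

Sigma_x = effective_sigma(sep, rate_x)
Sigma_y = effective_sigma(sep, rate_y)

# Eq. (16): sigma_vis = 2*pi*Sigma_x*Sigma_y*mu_vis / (n1*n2), converted to mb.
n1 = n2 = 8.5e10                               # protons per bunch (Sect. 4.2)
UM2_TO_MB = 1e-8 / 1e-27                       # um^2 -> cm^2 -> mb
sigma_vis_mb = 2.0 * np.pi * Sigma_x * Sigma_y * mu_peak / (n1 * n2) * UM2_TO_MB
print(f"Sigma_x = {Sigma_x:.1f} um, sigma_vis = {sigma_vis_mb:.0f} mb")
```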
In the LHC, bunches typically cross at a small angle \(\phi \) in the horizontal plane at IP 5. This introduces a reduction in the luminosity relative to the case of head-on collisions [1], given by:
\[ F = \frac{1}{\sqrt{1 + \left( \dfrac{\sigma _z}{\sigma _x} \tan \dfrac{\phi }{2} \right) ^2}}, \qquad (17) \]
where \(\sigma _x\) is the width of the luminous region in the crossing plane and \(\sigma _z\) is the width in the longitudinal direction. For typical LHC physics running conditions in 2016, \(\phi \simeq 140~\upmu \)rad, \(\sigma _x \simeq 12\,\upmu \text {m} \), and \(\sigma _z \simeq 8\text { cm} \), and so the reduction from Eq. (17) is around 10% [34]. The vdM scans are typically carried out under special conditions, where \(\phi =0\), as described in the following. The values of \(\sigma _{\mathrm {vis}}\) do not depend on the crossing angle.
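The quoted ~10% reduction can be verified by evaluating Eq. (17) with the 2016 parameters given above (a sketch; the function name and unit handling are ours):

```python
import math

def crossing_angle_factor(phi_rad, sigma_x_um, sigma_z_cm):
    """Luminosity reduction F = 1 / sqrt(1 + ((sigma_z/sigma_x)*tan(phi/2))^2)."""
    ratio = (sigma_z_cm * 1e4) / sigma_x_um    # convert sigma_z from cm to um
    t = ratio * math.tan(phi_rad / 2.0)
    return 1.0 / math.sqrt(1.0 + t * t)

# Typical 2016 physics conditions quoted in the text:
F = crossing_angle_factor(phi_rad=140e-6, sigma_x_um=12.0, sigma_z_cm=8.0)
print(f"F = {F:.3f}")  # about 0.9, i.e., a ~10% luminosity reduction
```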
4.2 Analysis of vdM scan data
While \(\sigma _{\mathrm {vis}}\) does not depend on beam conditions, the LHC delivers beams under special conditions to improve the precision of measurements and to reduce systematic effects. The vdM filling schemes are characterized by a low number of colliding bunch pairs at IP 5 (\(n_{\mathrm {b}} =30\)–50). The bunches are widely separated from each other in the LHC orbit, to reduce the effect of afterglow (as discussed in Sect. 5.1). Special beam optics with \(\beta ^{*} \approx 19\text { m} \) and transverse emittance of \(\epsilon _{\mathrm {N}} \approx 3.0\,\upmu \text {m} \) are implemented to produce a relatively large bunch size of approximately \(\sigma _{\mathrm {b}} = 100\,\upmu \text {m} \). Large bunches reduce the impact of vertex reconstruction resolution in analyses where vertex positions are utilized. A crossing angle of 0 is used for collisions at IP 5 in vdM scans. To minimize the effect of potential nonlinear response in the luminometers, the target pileup is set to \(\mu \approx 0.6\), which is 1–2 orders of magnitude lower than typical physics fills. To achieve that goal, in addition to the large beam size, the beams have relatively low intensities, which typically begin at (8–9)\(\times \text {10}^\text {10}\) protons per filled bunch, resulting in a total intensity of (3.5–4.0)\(\times \text {10}^\text {12}\) per beam for 44 bunches.
The total beam intensities are measured with the DC current transformers (DCCT) [35], and the bunch currents are measured with the fast beam current transformers (FBCT) [36] and cross-checked with the longitudinal density monitors (LDMs) [37, 38] and the beam quality monitors [39]. Because of the low beam intensity and low collision rate, the luminosity remains nearly constant over the course of the fill, in contrast to typical physics fills [9]. The beam orbit is monitored using two systems: the Diode Orbit and Oscillation (DOROS) beam position monitors (BPMs) [40] located near IP 5, and the BPMs located in the LHC arcs adjacent to CMS (referred to as "LHC arc BPMs"). The latter measurements are transformed to beam positions at IP 5 using the LHC optics files that are centrally provided by LHC operators [41]. The orbit is also tracked using the movements of the luminous region at IP 5, based on the vertices reconstructed with the CMS tracker.
The vdM scan program at IP 5 consists of a series of x-y scan pairs. Figure 3 shows the progression of these scans in a calibration fill, with the beam displacement measured by the DOROS BPMs [40, 42].
Typical scan sessions consist of at least three vdM scan pairs, with one scan in each of the transverse coordinates per pair: two pairs at the start of the fill and another at the end. In the absence of systematic effects, all scans are expected to produce compatible results. In each pair, the scans are typically performed first in the x and then in the y direction, although sometimes the pair is performed in the opposite order. In the vdM scans, the two beams are separated by up to 6\(\sigma _{\mathrm {b}}\) and scanned across one another in a sequence of 25 steps of 30\(\text { s}\) each to obtain a statistically significant measurement.
Dedicated length scale calibration (LSC) scans (described in Sect. 4.3.4), which are used to calibrate the distance by which the steering magnets displace the beams, are also performed, typically close in time to the rest of the scans and using the same collision optics configuration. The LSC implemented at IP 5 is of a constant-separation type, in which the two beams are positioned at \(-2.5\) and \(-1.5\sigma _{\mathrm {b}} \) relative to nominal and moved together forward in steps of 1\(\sigma _{\mathrm {b}}\), maintaining the 1\(\sigma _{\mathrm {b}}\) separation between the two beams, until they reach the \(+2.5\sigma _{\mathrm {b}} \) point. Then, their positions are swapped, and they are moved together backward in \(-1 \sigma _{\mathrm {b}} \) steps back to \(-2.5\sigma _{\mathrm {b}} \). The scan is performed once in the x direction and once in the y direction, with a total of 10 steps of 60\(\text { s}\) in each direction. The successive forward and backward displacements provide multiple measurements under slightly different conditions and probe possible compounding effects (e.g., hysteresis) that could limit the precision. The transverse position of the luminous region is needed for this calibration and is measured using reconstructed primary vertices in CMS data.
To test the assumption of transversely factorizable bunch profiles in Eq. (9), four dedicated beam-imaging (BI) scans are performed, one for each beam and each transverse direction. One beam is kept fixed at its head-on position, while the other is moved and scanned in 19 steps from \(-4.5\) to \(+4.5\sigma _{\mathrm {b}} \) along x or y with a duration of 40\(\text { s}\) per step. Primary vertices are reconstructed, and their positions are then analyzed to perform a global fit to derive the transverse bunch density distributions of the beams (as discussed in Sect. 4.4.1). The BI scans are also analyzed as regular beam-separation scans. During both BI and regular vdM scans, the transverse bunch density distributions are also determined by simultaneously fitting the beam-separation dependence ("evolution") of the luminosity and of the luminous region position, orientation, and spatial width, as reflected in the reconstructed primary vertices (as discussed in Sect. 4.4.2).
The LHC conditions at IP 5 for the luminosity calibration fills discussed in this paper for 2015 and 2016 are summarized in Table 1.
Pixel data are collected for PCC and for methods involving collision vertices using a zero-bias trigger, which collects data from five BCIDs with a total rate of approximately 20\(\text { kHz}\). Figure 4 shows vdM scan data from PCC recorded in the fifth scan pair of the session in fill 4954. The fit function corresponds to the double-Gaussian formalism of Eq. (14), and the parameters are estimated by simultaneously fitting the PCC and PVC rate measurements. An additional constant term is included to estimate the background originating from noncollision sources. This function provides a good description of the data in a range that extends over nearly three orders of magnitude in rate (\(\chi ^2/\text {dof}\approx 1\) in Fig. 4). For other luminometers, background rates are either negligible (PLT and PVC) or estimated and subtracted (BCM1F and HFOC) prior to the beam parameter fit. Since the instantaneous luminosity is relatively low, any nonlinear effect has a negligibly small impact on any method. The beam-width parameters (Eq. (15)) measured using different luminometers are in excellent agreement, as shown in Fig. 5, which compares the \(A_{\mathrm {eff}}\) values with the nominal PCC+PVC results.
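The scan widths can also be extracted without assuming a particular fit model, via the standard vdM relation \(\varSigma =\frac{1}{\sqrt{2\pi }}\int R(\varDelta )\,{\text {d}}\varDelta /R(0)\). A sketch on a synthetic double-Gaussian scan curve (all parameters are illustrative, not fill data):

```python
import math

def vdm_sigma(separations, rates):
    """Model-independent vdM width: Sigma = integral(R dDelta) / (sqrt(2*pi) * R(0)).
    Assumes 'separations' is sorted and includes Delta = 0."""
    # trapezoidal integration of the scan curve
    integral = sum(0.5 * (rates[i] + rates[i + 1]) * (separations[i + 1] - separations[i])
                   for i in range(len(rates) - 1))
    return integral / (math.sqrt(2.0 * math.pi) * rates[separations.index(0.0)])

# synthetic double-Gaussian scan curve: widths 0.10 and 0.25 mm,
# narrow-component fraction 0.8, sampled in 25 scan steps
eps, s1, s2 = 0.8, 0.10, 0.25
deltas = [round(-0.6 + 0.05 * i, 2) for i in range(25)]
rates = [eps * math.exp(-d * d / (2 * s1 * s1)) + (1 - eps) * math.exp(-d * d / (2 * s2 * s2))
         for d in deltas]
print(vdm_sigma(deltas, rates))  # close to the analytic value eps*s1 + (1-eps)*s2 = 0.13
```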
Although the accelerator parameters, such as bunch transverse sizes or intensities, vary during the course of a fill, such changes cancel in the calculation of \(\sigma _{\mathrm {vis}}\), which should remain invariant. This is shown in Fig. 6 for the measured \(\sigma _{\mathrm {vis}}^{\text {PCC}}\) as a function of time for vdM scans taken in fills 4266 and 4954. After including all the effects described in Sect. 4.3, \(\sigma _{\mathrm {vis}}^{\text {PCC}} =9.166\pm 0.056\,\text {(stat)} \) and \(8.429\pm 0.029\,\text {(stat)} \text { barns} \) in 2015 and 2016, respectively, where the bunch-by-bunch fit uncertainty in \(\varSigma _x\), \(\varSigma _y\), and \(\mu _{\mathrm {vis}}\) is propagated to the measured \(\sigma _{\mathrm {vis}}^{\text {PCC}}\) per scan. Since these uncertainties are statistical in nature, they contribute to the scan-to-scan combination in an uncorrelated way. The assumption of factorizable proton bunch densities limits the level of accuracy in the luminosity scale inferred from Eq. (13). A common approach is thus adopted at the LHC that includes a dedicated tailoring of the proton bunch injection chain to minimize the emergence of non-Gaussian bunch density distributions [43]. Since the factorizability of the x and y distributions could impact the vdM scan results at different IPs differently, CMS reconstructs the individual proton bunch densities during the BI and vdM scans, as described in Sect. 4.4.
4.3 Corrections to vdM scan data
Several systematic effects can change the measurement of \(\sigma _{\mathrm {vis}}\), and the following sections describe the measurement of these effects, the corrections used, and the resulting systematic uncertainty in \(\sigma _{\mathrm {vis}}\).
Adjustments to the bunch-by-bunch charge measurement are made to correct for spurious charge that is present outside the nominally filled part of the slot (Sect. 4.3.1). Then, we correct for potential sources of bias associated with the beam position monitoring at the scale of \(\upmu \)m. We distinguish between “orbit drifts”, which we model with smooth, linear functions, and residual differences relative to the nominal beam positions, where corrections per scan step are assessed. Since both effects are time dependent, thereby biasing \(\sigma _{\mathrm {vis}}\) incoherently, they are monitored continuously during each scan (Sect. 4.3.2).
Another source of correction originates from the electromagnetic interaction between charged particles in the colliding bunches (beam-beam effects); when the beams are displaced, rather than being head-on, a beam deflection and change in \(\beta ^{*}\) may be induced. The former causes the beams to be more separated than the nominal value from LHC beam position estimates, whereas the latter influences the spatial distributions of proton bunches and thus the observed rate. The resulting corrections to \(\sigma _{\mathrm {vis}}\) are evaluated at IP 5 [44, 45], and depend on the LHC optics, beam parameters, and filling scheme (as discussed in Sect. 4.3.3).
The vdM method requires an accurate knowledge of the beam separation. Possible differences in the absolute scale between the nominal beam separation produced by the steering of the LHC magnets, as used in Eqs. (11) and (12), and the actual separation are determined by using the LSC procedure (Sect. 4.3.4).
4.3.1 Beam current calibration and spurious charge
The LHC beam currents are measured by dedicated devices. The FBCT system is used to measure the current of individual bunches in 25\(\text { ns}\) bunch slots. The DCCT system provides a precise (0.2%) measurement of the total current for each of the two beams; since it is more precise than the FBCT sum, its scale is used to normalize the sum of the FBCT measurements.
Both the DCCT and FBCT measurements are sensitive to additional charges outside the actual colliding bunch. These components must be measured and subtracted. The LHC radio frequency (RF) cavities operate at 400\(\text { MHz}\), so a single 25\(\text { ns}\) wide bunch slot contains ten 2.5\(\text { ns}\) wide “RF buckets”. Only one RF bucket in a given bunch slot is filled with protons, and, in principle, the other nine RF buckets are empty. Similarly, of the total 3564 bunch slots, only a predefined subset is filled, according to the filling scheme. In practice, however, a small amount of spurious charge is present in the nominally empty RF buckets and bunch slots, which should be subtracted from the \(n_1\) and \(n_2\) values in Eq. (13). The amount of “ghost” charge in the nominally empty bunch slots is included in the DCCT but not in the FBCT measurement, since the latter is insensitive to bunch charges below a certain threshold. The out-of-time (satellite) charge occupies RF buckets adjacent to the main bunch. As such, it can experience long-range interactions with the main bunch in the other beam and is visible in the FBCT measurement. The corrected value for \(n^j\) (where j denotes the BCID) is therefore given by:
where \(f^j_{\mathrm {sat}}\) represents the per-bunch correction due to the satellite bunch population and \(f_{\mathrm {ghost}}\) is the correction for the ghost charge.
The spurious charge is measured by the LHC LDM system, which provides a precise longitudinal distribution of the beam charge with a time resolution of 90\(\text { ps}\). The data from the LDMs for fills 4266 and 4954 indicate that both the ghost and satellite charges are small. The latter is estimated to be \(< 0.1\%\) for each of the two beams and is neglected. No particular time dependence for either beam is observed, and the resulting overall spurious-charge correction in \(\sigma _{\mathrm {vis}}\) amounts to \(+\)0.2 and \(+\)0.3% in 2015 and 2016, respectively. This is applied as a correction to the beam currents in Eq. (16).
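The normalization of the FBCT bunch charges to the DCCT total, followed by the satellite and ghost subtraction, can be sketched as below. This is an illustration of the procedure described above, not the exact form of Eq. (16); the helper name and all numerical values are invented:

```python
def corrected_bunch_charges(fbct, dcct_total, f_sat, f_ghost):
    """Normalize per-bunch FBCT charges to the (more precise) DCCT total,
    then subtract the satellite and ghost charge fractions."""
    scale = dcct_total / sum(fbct)          # pin the FBCT sum to the DCCT scale
    return [scale * n * (1.0 - fs) * (1.0 - f_ghost) for n, fs in zip(fbct, f_sat)]

# toy example: 4 bunches with per-mille-level spurious charge
fbct = [8.5e10, 8.7e10, 8.6e10, 8.4e10]   # per-bunch charges (ghost-blind)
dcct_total = 3.45e11                       # total beam charge, includes ghosts
f_sat = [0.001, 0.0008, 0.0012, 0.0009]    # per-bunch satellite fractions
charges = corrected_bunch_charges(fbct, dcct_total, f_sat, f_ghost=0.002)
```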
The ghost charge is also measured using the beam-gas imaging method [12, 46, 47], which compares the beam-gas rates in bunch crossings at IP 8 (the location of the LHCb detector) where only one or neither beam contains protons; the results are consistent with the LDM measurement. A systematic uncertainty of 0.1% is assigned to cover the difference between the two estimates of the ghost contribution to the beam current.
4.3.2 Beam position monitoring
Although the LHC beam orbits are generally stable during a fill, even a small variation (either random or systematic in nature) in the beam positions during scans can significantly affect the resulting calibrations. The beam positions are measured primarily using the DOROS BPM system. The LHC arc BPMs, when possible, are used to confirm the stability of the orbits during the scan.
To measure the orbit drift, we use the beam position measurements in x and y in three 15\(\text { s}\) periods when the beams are nominally colliding head-on: immediately before and after each scan, as well as at the middle point of the scan, where the beams are also head-on. For each scan, a fit using a first-order polynomial is performed from the point before the scan to the middle point, and it is used to derive the correction for the first half of the scan. Similarly, a fit from the middle to the point after the scan is used to correct the second half of the scan. Figure 7 shows the measured positions along with the resulting fits. In general, the orbit drift during the 2015 and 2016 vdM scans is less than about 5\(\,\upmu \text {m}\) for most of the scans. However, in the third scan of both series, the orbit drift was significant enough to shift \(\sigma _{\mathrm {vis}}\) by approximately \(+\)1.0%. The corrections are derived using the average of the two BPM systems, and the largest deviation of the correction from each individual system from the nominal correction is taken as the value of the systematic uncertainty due to orbit drift. This is typically 0.1–0.2 % overall.
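The piecewise-linear drift model can be sketched as follows; the three anchor points mimic the head-on measurements before, at the middle of, and after a scan (the drift values are hypothetical, not BPM data):

```python
def fit_line(ts, ys):
    """Least-squares first-order polynomial fit, returning (slope, intercept)."""
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
             / sum((t - tbar) ** 2 for t in ts))
    return slope, ybar - slope * tbar

def orbit_drift(t, t_pre, y_pre, t_mid, y_mid, t_post, y_post):
    """Piecewise-linear drift: one line from the head-on point before the scan
    to the middle point, another from the middle to the point after the scan."""
    if t <= t_mid:
        m, b = fit_line([t_pre, t_mid], [y_pre, y_mid])
    else:
        m, b = fit_line([t_mid, t_post], [y_mid, y_post])
    return m * t + b

# toy: position drifts 0 -> 2 um in the first half, 2 -> 3 um in the second (times in s)
print(orbit_drift(150, 0, 0.0, 300, 2.0, 600, 3.0))  # ~1.0 um at mid first half
```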
At each scan step, the actual beam separation can also be affected by systematic or random deviations of the beam positions from their nominal settings, which, in turn, impact the observed rate at each scan point. The magnitude of this potential bias is evaluated from consecutive single-beam orbit measurements at IP 5, provided by the DOROS BPMs and with a duration of a few seconds each. They are further corrected for the beam-beam effects (as discussed in Sect. 4.3.3) and the length scale (as described in Sect. 4.3.4), using the position of reconstructed vertices as the calibration target. The impact from beam-beam deflection at the location of the DOROS BPMs (\(z_{\mathrm {DOROS}}=\pm 21.5\text { m} \) away from IP 5) is magnified by a factor of \(1+\tan \left( \pi Q_{x/y}\right) z_{\mathrm {DOROS}}/\beta ^{*} \), where \(Q_{x}\) and \(Q_{y}\) are the betatron tune values in the x and y directions [33]. Because these values differ, the resulting factors are 2.7 in the x direction and 2.8 in the y direction. The measurements from the DOROS BPMs are integrated over all bunches, so the observed beam-beam deflection may be overestimated because of the inclusion of noncolliding, nondeflected bunches. In this analysis, a reduction factor of 0.6, the approximate fraction of the bunches in the vdM fills that collide at IP 5, is therefore applied on top of the geometric factor in both years. The orbit drift, as described above, is also subtracted from the single-beam DOROS measurements before forming the actual beam separation. Finally, an additional length scale correction is made to the DOROS data for each beam and in both transverse directions: the calibration using vertices, both for DOROS and nominal LHC positions, determines only the average length scale for the two beams, and the calibrations of each beam are not necessarily the same for the two sets of data.
Therefore, a final, relative calibration of the DOROS data is made to align each beam in both transverse directions to the scale of the LHC beams. Figure 8 shows the residual difference in beam separation in all y scans in 2015 and 2016 as well as the residuals per beam in a single scan, which shows symmetric behavior. The resulting impact on \(\sigma _{\mathrm {vis}}\) is in the range \(-\)0.6 to \(+\)0.4 and \(-\)0.5 to \(-\)0.2%, with average values of \(-\)0.1 and \(-\)0.3%, in 2015 and 2016, respectively. Corrections are applied for each scan, and the uncertainty comes from the reduction factor in the beam-beam deflection correction at the location of the DOROS BPMs.
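Under the assumption of the standard LHC fractional betatron tunes (\(Q_x\approx 0.31\), \(Q_y\approx 0.32\), which are not quoted in the text), the geometric magnification factors quoted above can be reproduced:

```python
import math

def doros_magnification(q_frac, z_doros=21.5, beta_star=19.0):
    """Beam-beam deflection magnification at the DOROS BPMs:
    1 + tan(pi*Q) * z_DOROS / beta*, with z in m and beta* in m."""
    return 1.0 + math.tan(math.pi * q_frac) * z_doros / beta_star

# assumed standard LHC fractional tunes; beta* = 19 m as in the vdM optics
print(round(doros_magnification(0.31), 1))  # x direction: 2.7
print(round(doros_magnification(0.32), 1))  # y direction: 2.8
```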
4.3.3 Beam-beam effects
We distinguish two types of beam-beam interactions that affect the vdM and BI scan measurements: coherent and incoherent beam-beam effects. The total correction originates from the combination of both effects, which affect the nominal beam separation (coherent) and the detector rate (incoherent) via the change of the beam shapes.
The closed orbits of the bunches in the scans are shifted coherently by the angular kick induced by their electromagnetic repulsion, resulting in an increase in the absolute beam separation. The size of this additional beam-beam deflection depends on the transverse beam size, bunch intensities, collision optics, and separation between the orbits of colliding bunches. It is calculated based on the Bassetti–Erskine formalism for the electric field of elliptically distributed bunches, as discussed in Ref. [48]. The orbit shift depends linearly on the separation for small nominal beam separations, reaches a maximum near 2\(\sigma _{\mathrm {b}}\) (\(\approx 0.2\text { mm} \) in fills 4266 and 4954), and decreases nonlinearly towards zero at larger separations. Figure 9 (left) [44, 45] shows the resulting correction as a function of nominal beam separation, for the conditions during the scans in fill 4954 (Table 1). The beam-beam deflection correction increases the \(\varSigma _x\) and \(\varSigma _y\) values, impacting the \(\sigma _{\mathrm {vis}}\) measurement by about \(+\)2.0 (\(+\)1.6)% in 2015 (2016).
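The qualitative shape of the deflection can be illustrated with the commonly used round-beam approximation (the analysis itself uses the elliptical Bassetti-Erskine field [48]). The beam parameters below are representative vdM values, with \(\gamma \approx 6930\) for 6.5\(\,{\text {TeV}}\) protons:

```python
import math

R_P = 1.535e-18  # classical proton radius, m

def coherent_kick(d, n_protons, gamma, sigma_b):
    """Round-beam approximation to the coherent beam-beam deflection angle (rad)
    at separation d; the combined width of the two Gaussian bunches is
    sqrt(2)*sigma_b, hence the 4*sigma_b**2 in the exponent."""
    if d == 0.0:
        return 0.0
    return (2.0 * n_protons * R_P / (gamma * d)) * (1.0 - math.exp(-d * d / (4.0 * sigma_b ** 2)))

# vdM-like conditions: 8.5e10 protons per bunch, sigma_b = 100 um
seps = [i * 1e-6 for i in range(0, 601, 5)]            # 0 to 6 sigma_b
kicks = [coherent_kick(d, 8.5e10, 6930.0, 100e-6) for d in seps]
peak = seps[kicks.index(max(kicks))]
print(peak)  # maximum near 2.2 sigma_b, i.e. ~220 um of separation
```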
The incoherent effect corresponds to the change of the proton bunch density distribution functions \(\rho (x,y)\) at the IP due to deflection at the per-particle level. It causes a change in the effective \(\beta ^{*}\), and thus results in a change in the measured luminosity. This dynamic evolution of \(\beta ^{*}\) is usually referred to as the "dynamic-\(\beta \)" effect. The correction for the dynamic-\(\beta \) effect is evaluated numerically by using a dedicated particle tracking program that calculates \(A_{\mathrm {eff}}\) under different hypotheses [44, 45]. Considering the dynamic-\(\beta \) effect independently of the beam-beam deflection, we obtain the ratio of the detector rate as shown in Fig. 9 (right). At vdM conditions, the dynamic-\(\beta \) correction is typically larger at higher beam separation, reaching up to about \(-\)2% at large separation values. In contrast to the beam-beam deflection, the dynamic-\(\beta \) correction thus decreases the original \(\varSigma _x\) and \(\varSigma _y\) values. The corresponding impact on the calculated \(\sigma _{\mathrm {vis}}\) is about \(-\)1.7 (\(-\)1.4)% in 2015 (2016).
The total beam-beam correction (i.e., when both the beam-beam deflection and dynamic-\(\beta \) effects are included) results in an increase in the calculated \(\sigma _{\mathrm {vis}}\) of about 0.3 (0.2)% in 2015 (2016) at IP 5. In addition, when considering further head-on collisions at the IP at the opposite side of the ring (IP 1 at ATLAS), the effect is approximated as a single-IP simulation but with shifted betatron tune values. The impact on \(\sigma _{\mathrm {vis}}\) is enhanced by a factor of about two, leading to a total beam-beam correction of \(+\)0.6 (\(+\)0.4)% in 2015 (2016). The uncertainty in this calculation is dominated by the uncertainty in the betatron tune values, which was estimated taking into account the symmetric tune spread as well as the full shift due to head-on collisions at a second interaction point (in ATLAS at IP 1). These considerations translate into an uncertainty of 0.5 % in the corrected \(\sigma _{\mathrm {vis}}\) [44, 45].
4.3.4 Length scale calibration
In the canonical vdM formalism described in Sect. 4.1, it is implicitly assumed that the beam separation is perfectly known. Operationally, the nominal displacement of the beams at the IP is achieved based on a local distortion (bump) of the orbit using a pair of steering dipoles located on either side of the IP [49]. The size of the nominal separation is subject to potential uncertainty associated with the response of the steering dipoles themselves (e.g., magnet hysteresis) or lattice imperfection [41], i.e., higher multipole components in the quadrupoles located within those orbit bumps. For a given IP, there are four possible bumps, for the two possible displacement directions of the two beams.
An accurate calibration for the size of the bumps can be obtained using the CMS tracker. In particular, for small vertex displacements, the uncertainty in the reconstructed vertex position in x or y is \(\approx 20\,\upmu \text {m} \) for zero-bias collisions [29]. During LSC scans, the data for each separation distance contains several hundred thousand reconstructed vertices, yielding a position measurement with submicron precision.
The vdM scans described in Sect. 4.2 are typically done by moving the beams in equal steps in opposite directions. Since the two beams have independent length scales, the full separation correction is obtained from the mean of the length scale corrections per beam. Separate scans, wherein both beams are moved in steps in the same direction, are thus required to obtain the LSC. A more detailed description of the relationship between the calibration constant associated with the "offset" (i.e., the arithmetic mean between the transverse beam positions) and the observed quantities during LSC scans can be found in Ref. [12]. Here, for each scan step, the centroid of the luminous region is measured as the mean from a Gaussian fit to the observed vertex positions. A calibration constant for each transverse direction is extracted with a first-order polynomial fit to the difference between the measured mean position and the nominal offset as a function of the latter. This constant corresponds to the average calibration of the bumps of the two beams. It is then applied as a scale factor to correct the nominal beam displacement.
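The extraction of the calibration constant thus amounts to a first-order polynomial fit of the measured-minus-nominal position versus the nominal offset. A minimal sketch with invented numbers (the real fit uses the Gaussian-fitted luminous-region centroids):

```python
def length_scale_constant(nominal_offsets, measured_centroids):
    """Slope of a first-order polynomial fit of (measured - nominal) vs the
    nominal offset; this is the relative length scale correction."""
    n = len(nominal_offsets)
    diffs = [m - o for m, o in zip(measured_centroids, nominal_offsets)]
    obar, dbar = sum(nominal_offsets) / n, sum(diffs) / n
    return (sum((o - obar) * (d - dbar) for o, d in zip(nominal_offsets, diffs))
            / sum((o - obar) ** 2 for o in nominal_offsets))

# toy LSC scan: the true length scale is 0.5% shorter than nominal (illustrative)
nominal = [-250.0, -150.0, -50.0, 50.0, 150.0, 250.0]   # um
measured = [0.995 * o for o in nominal]
print(length_scale_constant(nominal, measured))  # -0.005, i.e. a -0.5% correction
```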
The nominal offset is also affected by the random and systematic beam position deviations described in Sect. 4.3.2. The beam positions at each step are monitored using DOROS BPMs. We estimate the arithmetic mean of the measured step sizes as a good representation of the nominal settings, after excluding outlier step sizes based on an iterative procedure. The difference of the remaining step sizes from the mean is used to correct the nominal offsets, and their standard deviation is the uncertainty due to beam position deviations. The correction improves the quality of the first-order polynomial fits and the forward-to-backward scan agreement. Consistent results are also found using the LHC arc BPMs to derive the correction for beam position deviations.
The fit results for the x and y LSC scans are shown in Fig. 10 for fill 4945. The difference between the measured displacement of the beam centroid and the nominal displacement of the beams, corrected for the estimated beam position deviation, is plotted as a function of the latter. In all cases, the data are well described by first-order polynomial fits, with calibration constants differing on average from zero by \(-0.3\) and \(-0.8\%\) in the horizontal plane in 2015 and 2016, respectively, and by \(-0.1\) and \(-0.5\%\) in the vertical plane. The combined correction to the visible cross section is \((-0.4 \pm 0.2)\) and \((-1.3 \pm 0.3)\%\) in 2015 and 2016, respectively. The total uncertainty, equal to the uncertainty contributions from the x and y planes added in quadrature, includes the statistical uncertainty in the first-order polynomial fits (\(<0.1\%\)), the variation between the two scan directions and the different scans (0.1%), a tracker alignment uncertainty (\(<0.1\%\)), and the uncertainty from the estimated beam position deviations (0.1–0.2%).
4.4 Transverse factorizability
The use of the vdM scan technique to measure \(A_{\mathrm {eff}}\) relies on the assumption that the proton bunch density functions are factorizable into x- and y-dependent components, as described in Sect. 4.1. If this condition is not met exactly, the measurements of \(A_{\mathrm {eff}}\) and \(\sigma _{\mathrm {vis}}\) will be biased. To correct for this potential bias, the bunch density distributions are measured independently with two methods, which are used in a combined way to evaluate \(A_{\mathrm {eff}}\). In both methods, primary vertices are reconstructed from tracks measured in the CMS silicon tracker.
4.4.1 Beam-imaging method
In the BI method [50, 51], the distributions of reconstructed vertices during BI scans are used to obtain an image of the transverse bunch profiles integrated over the scanning direction. A primary vertex resolution comparable to or smaller than the transverse beam sizes is necessary to extract the beam profiles from the measured distributions. The two-dimensional distribution in x and y of the reconstructed vertices depends on the overlap of the bunch density functions, their transverse separations \(\varDelta x\) and \(\varDelta y\), and the vertex resolution V of the CMS tracker system as:
The combination of the vertex distributions from all steps of the BI scan in the x direction is approximated as:
Here, \((\mathcal {M}_x \rho _2)(y)=\int \rho _2(x,y){\text {d}}x\) denotes that the proton bunch density of the second beam appears marginalized in the direction of the scan. This results from the assumption that the step size is small enough with respect to the width of the bunch densities, so we can replace the sum over discrete scan points with a continuous integral over \(\varDelta x\). This two-dimensional vertex distribution can be exploited to constrain the transverse correlations of the bunch density of the first beam.
Combining four such vertex distributions accumulated during the BI scan set, we reconstruct the two-dimensional proton bunch densities of the two beams from a simultaneous fit. This requires knowledge of the primary vertex resolution, which is modeled with a two-dimensional Gaussian function. Convolving with the primary vertex resolution is then analytically possible for bunch density models built from Gaussian functions.
Models for the proton bunch density are built from Gaussian distributions parameterized with an additional correlation parameter \(\varrho \):
\(g_j(x,y)=\frac{1}{2\pi \sigma _{x,j}\sigma _{y,j}\sqrt{1-\varrho _j^2}}\exp \left[ -\frac{1}{2(1-\varrho _j^2)}\left( \frac{x^2}{\sigma _{x,j}^2}-\frac{2\varrho _j\,xy}{\sigma _{x,j}\sigma _{y,j}}+\frac{y^2}{\sigma _{y,j}^2}\right) \right] ,\)
where j indicates the beam number (\(j=1\) or 2). More complicated models are constructed with sums of these individual correlated Gaussian distributions. Distributions with a wide tail are better described by adding a Gaussian component with a small weight and a large width. Distributions with a flattened central part can be modeled with an additional component with a small negative weight and a narrow width. Typically, both nonzero correlation parameters and different widths are required to describe the nonfactorizability observed in data.
The best description of the BI data collected in 2015 and 2016 for the five bunch crossings used is achieved consistently with a sum of three Gaussian distributions, where the narrow component has a negative weight:
Figure 11 shows the two-dimensional pull distribution, i.e., \((N^{\text {vtx}}_{\mathrm {data}}-N^{\text {vtx}}_{\mathrm {fit}})/\sigma _{\mathrm {data}}\), and the one-dimensional projections for the vertex distributions collected in the BI scan where the first beam is moved vertically for one bunch crossing in fill 4954. In these fits, the effects from the beam-beam deflection and dynamic-\(\beta \) are included in the positions of the reconstructed vertices and as per-vertex weights, respectively, whereas the impact of orbit drift is negligibly small.
The value of \(A_{\mathrm {eff}}\) can then be calculated from an integration of the overlap of the bunch densities directly (i.e., \(A_{\mathrm {eff}} =\iint \rho _1(x,y)\rho _2(x,y){\text {d}}x{\text {d}}y\)). This is compared to the value of \(A_{\mathrm {eff}}\) obtained from an MC simulated vdM scan pair generated with the reconstructed bunch densities as input, and analyzed with the vdM method (i.e., \(A_{\mathrm {eff}} =1/(2\pi \varSigma _x^{\text {MC}}\varSigma _y^{\text {MC}})\)). The difference between the two values yields the bias of the vdM results, and is applied as a correction to \(\sigma _{\mathrm {vis}} \) values. The bias is computed separately for each bunch crossing, and the results are shown in Fig. 12. The values for the estimated bias are averaged, resulting in a correction of \(+\)1.3 (0.9)% in \(\sigma _{\mathrm {vis}}\) for 2015 (2016) because of the assumption of x-y factorization.
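The size of such a bias can be illustrated with a toy model in which each bunch is a single x-y-correlated Gaussian. For two identical beams with correlation \(\varrho \), the vdM estimate \(1/(2\pi \varSigma _x\varSigma _y)\) exceeds the true overlap by a factor \(1/\sqrt{1-\varrho ^2}\), about +1% for \(\varrho =0.14\). A numeric sketch on a coarse grid (all parameters illustrative):

```python
import math

SX = SY = 1.0   # bunch widths (arbitrary units)
RHO = 0.14      # x-y correlation, taken identical for both beams (illustrative)
H = 0.15        # grid spacing
GRID = [i * H for i in range(-40, 41)]

def rho_b(x, y):
    """Single correlated-Gaussian bunch density with unit normalization."""
    z = (x / SX) ** 2 - 2 * RHO * (x / SX) * (y / SY) + (y / SY) ** 2
    return math.exp(-z / (2 * (1 - RHO ** 2))) / (
        2 * math.pi * SX * SY * math.sqrt(1 - RHO ** 2))

def overlap(dx, dy):
    """Numerical overlap integral of the two bunch densities at separation (dx, dy)."""
    return sum(rho_b(x, y) * rho_b(x + dx, y + dy) for x in GRID for y in GRID) * H * H

a_true = overlap(0.0, 0.0)   # head-on overlap = true A_eff

# simulated 1D vdM scans; Sigma via the model-independent vdM relation
deltas = [i * 0.5 for i in range(-12, 13)]
sig_x = sum(overlap(d, 0.0) for d in deltas) * 0.5 / (math.sqrt(2 * math.pi) * a_true)
sig_y = sum(overlap(0.0, d) for d in deltas) * 0.5 / (math.sqrt(2 * math.pi) * a_true)

a_vdm = 1.0 / (2 * math.pi * sig_x * sig_y)
print(a_vdm / a_true)   # analytic expectation: 1/sqrt(1 - RHO^2) ~ 1.01
```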
To estimate the uncertainty in the measured bias, the MC simulation of the vdM scans is repeated multiple times; the RMS of the resulting biases is 0.1% for both years, which is taken as the statistical uncertainty. Additionally, a systematic uncertainty is evaluated with a closure test: simulated models are constructed by randomly drawing parameters of the fit model in Eq. (22), and are used to generate BI scan pseudo-data, which are then fitted with the same model and procedure. For simulated models with fit quality and fitted correction values similar to those observed in data, the bias obtained from the reconstructed bunch densities agrees on average with the true bias of the simulated model. The RMS of the distribution of deviations is 0.5% for both years, which we assign as the systematic uncertainty.
4.4.2 Luminous region evolution
In this method, which was inspired by Ref. [13], the luminosity and luminous region geometry are used to reconstruct the bunch density distributions in three dimensions and as a function of time. Using single-beam parameters, described in the following, bunch profiles are then generated for simulated vdM scans and treated as genuine vdM scan data. Similar to the BI method, the impact of the factorization assumption is extracted by comparing the "measured" luminosity extracted from the one-dimensional vdM simulated bunch profiles with the "true" luminosity from the computed four-dimensional (x, y, z, t) overlap integral of the single-bunch distributions. The luminous region is modeled by a three-dimensional ellipsoid whose parameters (nine in total) are extracted from an unbinned maximum likelihood fit of a three-dimensional Gaussian function to the spatial distribution of the primary vertices [29]. The vertex resolution is determined from data as part of the fitting procedure.
The bunch profiles \(\rho _j(x,y,z)\), parameterized per beam j, are the sum of three individual Gaussian distributions \(g_{j,1\ldots 3}(x,y,z)\) with common mean, but arbitrary width and orientation parameters (referred to as “bunch parameters” in the following):
The overlap integral of Eq. (23) is evaluated at each scan step to predict the true luminosity and the geometry of the luminous region for a given set of bunch parameters. In this calculation, we consider the impact of beam-beam effects, LSC, and orbit drifts. The bunch parameters are then adjusted according to a \(\chi ^2\) minimization procedure to determine the best-fit centroid position, orientation, and widths (corrected for the primary vertex resolution) of the luminous region measured at each step of a BI or vdM scan. An example of a fit to the PCC luminosity and luminous region geometry is illustrated in Fig. 13 for one of the horizontal scans in fill 4954 and a subset of the three-dimensional ellipsoid parameters. One of the four panels shows the variation of the beam width in y during the x-separation scan, which is indicative of nonfactorization. The goodness of fit is better than \(\chi ^2 / \text {dof} = 1.8\) for both years, with some systematic deviations being apparent mainly in the tails of the scan. The fits are repeated with PLT substituted as the luminosity input, and no significant dependence on the choice of luminometer is observed.
This procedure is applied to all (i.e., BI and vdM) scans in fills 4266 and 4954, and the results are summarized in Fig. 14. The \(\sigma _{\mathrm {vis}}\) extracted from the standard vdM analysis with the assumption that factorization is valid is smaller by 0.6–1.1 (0.2)% than that computed from the reconstructed single-bunch parameters in fill 4266 (4954). Similar to the evaluation in the BI method, the uncertainty amounts to 0.6 %. This uncertainty is dominated by the standard deviation in simulation-driven closure tests, and includes the fit uncertainty in data and the contributions from beam-beam effects, length scale, and orbit drift. These observations are thus consistent with the ones obtained in Sect. 4.4.1 in terms of absolute magnitude during the BI scans. The two results are combined to produce the final correction in \(\sigma _{\mathrm {vis}}\) of \(+(0.8 \text {--}1.3 \pm 0.5)\) and \(+(0.6 \pm 0.5)\)% in 2015 and 2016, respectively. The final corrections retain the time evolution derived uniquely from the luminous region evolution method.
5 Rate corrections under physics running conditions
The calibration scans described in the previous sections are performed with a small number of well-separated proton bunches with low bunch intensity. In contrast, during nominal conditions, the collision rate is generally maximized to produce large data sets for physics measurements and searches. This section describes the corrections that are applied to uncalibrated luminometer rates to ensure that the final luminosity values are accurate. These corrections, summarized in Table 2 for 2016, compensate for out-of-time pileup, efficiency, and nonlinearity effects for each individual luminometer.
5.1 Out-of-time pileup corrections
The measurements in most detectors have out-of-time pileup contributions that do not arise from the in-time \({\text {p}}{\text {p}}\) collision within the 25 ns window of the bunch crossing. Ideally, these contributions should be subtracted from all bunch crossings before the total instantaneous luminosity is computed. There are generally two types of effects that are considered: spillover of electronic signals and real additional response from material activation. These are denoted as type 1 (T1) and 2 (T2) afterglow, respectively.
The T1 afterglow generally impacts only the following bunch crossing, because electronic signals tend to decline exponentially, and hence two bunches later (50\(\text { ns}\)) the signal is again below threshold. The T1 contribution in bunch \(n+1\) from bunch n is proportional to \(\mathcal {L} _{\mathrm {b}} (n)\). Thus, the model for the correction is:
\(\mathcal {L} _{\mathrm {b}} ^{\text {corr}}(n+1) = \mathcal {L} _{\mathrm {b}} (n+1) - \alpha _{\mathrm {T1}}\,\mathcal {L} _{\mathrm {b}} (n),\)
where \(\alpha _{\mathrm {T1}}\) is detector dependent and sometimes time dependent; \(\alpha _{\mathrm {T1}}\) ranges from 0.005 for BCM1F to 0.02 for HFOC to as large as 0.09 for PCC.
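A minimal sketch of such a first-neighbor T1 subtraction, assuming per-bunch rates are stored in an array indexed by bunch-crossing slot (the \(\alpha _{\mathrm {T1}}\) value used below is illustrative):

```python
import numpy as np

N_SLOTS = 3564  # bunch-crossing slots per LHC orbit

def correct_t1(raw, alpha_t1):
    """First-neighbor T1 afterglow subtraction: the spillover into slot n+1
    is alpha_t1 times the luminosity in slot n, so it is removed iteratively
    (using the already-corrected value of slot n as the estimate of the
    true luminosity)."""
    corr = np.array(raw, dtype=float)
    for n in range(N_SLOTS):
        corr[(n + 1) % N_SLOTS] -= alpha_t1 * corr[n]
    return corr
```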
In contrast, T2 afterglow tends to impact all bunch crossings, because the half-life of the activated material can be longer than several bunch crossings. The response can be modeled with a single- or double-exponential distribution. The impact of T2 afterglow varies by filling scheme and by detector. In fills where \(n_{\mathrm {b}}\) is low and where the bunches are well separated, the T2 corrections are very small and often completely negligible, as is the case by design in the vdM calibration fills. When LHC fills contain several hundred bunches, the corrections start to contribute at the percent level in most bunches. With maximally full filling schemes, the corrections can be up to about 4 (15)% for PCC (HFOC).
Although there are clearly two distinct components, a combined (T1 and T2) model can be constructed that gives the response for a specific bunch crossing, accounting for contributions from all other 3563 bunch crossing slots. This model is referred to as the single-bunch response (SBR). The SBR for HFOC luminosity is taken directly from data in a reference fill with \(n_{\mathrm {b}} =2\) for approximately the first half of the bunch crossings, and the bunches in the second half are smoothly extrapolated using an exponential model. The SBR is normalized to \(\mathcal {L} _{\mathrm {b}} (n)\) and it is then subtracted from all other bunch slots. This procedure is repeated for all bunch crossings.
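The SBR subtraction can be sketched as follows, under the simplifying assumption that the SBR is a short array of per-slot fractional responses (a single T1 bin plus an exponential T2 tail; the values are hypothetical, and the normalization and extrapolation details are as described above):

```python
import numpy as np

N_SLOTS = 3564  # bunch-crossing slots per LHC orbit

def subtract_afterglow(raw, sbr):
    """Subtract out-of-time pileup given a single-bunch response (SBR):
    sbr[dt] is the fractional out-of-time response dt slots after a
    collision (sbr[0], the in-time response, is unused).  Each slot's
    estimated luminosity, scaled by the SBR, is removed from the later
    slots, cyclically over the orbit."""
    corr = np.array(raw, dtype=float)
    for n in range(N_SLOTS):
        for dt in range(1, len(sbr)):
            corr[(n + dt) % N_SLOTS] -= corr[n] * sbr[dt]
    return corr

# Hypothetical SBR: one T1 bin plus a short exponential T2 tail.
sbr = np.zeros(60)
sbr[1] = 0.01
sbr[2:] = 0.002 * np.exp(-np.arange(2, 60) / 20.0)
```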
After the corrections from the SBR are applied, empty bunch slots, where there are no collisions, should have a rate of zero. For PCC, the SBR is determined by optimizing \(\alpha _{\mathrm {T1}}\), which is time dependent and measured in intervals of about 20\(\text { min}\), and the parameters of the exponential used for T2 corrections, such that there is minimal residual rate in the noncolliding bunch slots. Figure 15 shows per-bunch data in a fill from 2016 before and after the afterglow corrections for PCC are applied.
The empty bunch slots are also used to estimate the residual afterglow after the full set of corrections is applied. The corrected rate in the first empty bunch slot after a colliding bunch slot is used to estimate the residual T1 response. Likewise, the 2nd to 30th empty bunch slots are used to estimate the residual T2 effect. This procedure is performed for the entire 2015 and 2016 data sets for PCC and HFOC luminosity measurements. A window covering all residuals over the course of each data set is used as the systematic uncertainty in the final corrections. The resulting uncertainty for PCC in the two corrections is \(0.3 \oplus 0.1 \) (\(0.3 \oplus 0.3 \))% in 2015 (2016).
These types of per-bunch luminosity corrections are applied for PCC, HFOC, and BCM1F, whereas PLT is almost completely background free and no such correction is needed. Since the DT and RAMSES measurements integrate over all bunch crossings, out-of-time pileup corrections can only be applied on average to the integrated rates. For DT these amount to 0–1%, while no corrections are applied to RAMSES.
A second type of T1 afterglow affects the HFOC luminosity. This is the case where the afterglow from a preceding bunch and the signal from the current bunch are both under the threshold to be counted as a hit, but their sum exceeds the threshold. This effect is referred to as the “bunch train effect”, because it affects only active bunches preceded by other active bunches (that is, bunches within a train, as opposed to “leading” bunches at the beginning of a train). The method previously described for estimating T1 afterglow does not include this contribution. This effect is measured in a dedicated study comparing the double ratio of the leading bunch in a train relative to the second bunch for HFOC divided by the same ratio for PCC. A single correction model with magnitude 1–5%, linearly increasing with instantaneous luminosity, is determined using most of the valid data from 2016.
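A sketch of the double-ratio diagnostic and the resulting linear correction, with hypothetical inputs; the sign convention (HFOC reading high on train bunches) and the slope value are assumptions of this sketch:

```python
def train_double_ratio(hfoc_lead, hfoc_second, pcc_lead, pcc_second):
    """Double ratio isolating the HFOC bunch-train effect: PCC, which has
    no such effect, normalizes away genuine lead-vs-train differences."""
    return (hfoc_lead / hfoc_second) / (pcc_lead / pcc_second)

def correct_train_bunch(hfoc_train, sbil, slope):
    """Remove a train-bunch excess assumed to grow linearly with the
    single-bunch instantaneous luminosity (sbil)."""
    return hfoc_train / (1.0 + slope * sbil)
```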
5.2 Efficiency corrections
Radiation damage can affect the detector response by reducing efficiency, increasing noise, or both. Noise is typically a small effect for most luminometers, but reduced response in detectors due to radiation damage can have significant (percent-level) effects, and so corrections are required. Corrections are measured against a stable benchmark relative to the performance at or near the vdM scans, and are applied to \(\sigma _{\mathrm {vis}}\). Shifts of 0–10% in detector response in the PLT in 2016 are corrected using RAMSES as a benchmark, whereas the impact of radiation damage on the HF efficiency is corrected using a parameterization derived from a model of aging. An HF efficiency correction of 1% is derived by measuring the average energy deposits in the HF in events characterized by the presence of \({\text {Z}}\) bosons that decay to two muons with large transverse momentum.
A further efficiency correction is necessary for the PCC measurement. The pixel detector has a static internal memory buffer for data storage before the trigger decision is taken. When the buffer is filled, the oldest data are overwritten and lost. This effect is proportional to the total instantaneous luminosity, and it can be estimated by studying the frequency of missing pixel clusters in otherwise well-reconstructed tracks [29]. In 2016, the effect was 1.0% at 1.4\(\times \text {10}^\text {34}\) \(\,\text {cm}^{-2}\,\text {s}^{-1}\). A correction proportional to the total instantaneous luminosity is applied, and the total impact on the integrated luminosity is 0.2%. Since the total luminosity in 2015 is substantially lower, no correction is applied. The PCC also has very small noise corrections.
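Under the assumption of a strictly linear loss, the quoted working point (1.0% at \(1.4\times 10^{34}\,\text {cm}^{-2}\,\text {s}^{-1}\)) fixes the correction completely; a minimal sketch:

```python
# Working point quoted in the text: 1.0% cluster loss at 1.4e34 cm^-2 s^-1.
LOSS_AT_REF = 0.010
L_REF = 1.4e34  # cm^-2 s^-1

def corrected_pcc(raw_pcc, inst_lumi):
    """Undo a readout-buffer loss assumed proportional to the total
    instantaneous luminosity: divide by the surviving fraction."""
    loss = LOSS_AT_REF * inst_lumi / L_REF
    return raw_pcc / (1.0 - loss)
```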
5.3 Nonlinear response
In the absence of out-of-time pileup, the PCC luminosity is expected to be linear, according to simulations, so no corrections are applied. Moreover, the ratios of PCC to both DT and RAMSES luminosity measurements are highly compatible as a function of the instantaneous luminosity without any corrections. The HFOC response in 2015 and 2016, on the other hand, exhibits significant nonlinearity compared to the other luminometers. The main source of nonlinearity is the uncalibrated ADC-to-charge conversion applied at the time of data taking. Data from fill 5416, which exhibit a wide range of instantaneous luminosity, are used to model the correction for HFOC with a fourth-order polynomial. This smooth function extrapolates to the \(\sigma _{\mathrm {vis}}\) calibration at low pileup within uncertainty. This single model is used to correct the nonlinear behavior of HFOC (0–10% higher response when compared to PCC) throughout 2016.
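A sketch of such a polynomial nonlinearity correction, using synthetic stand-in data rather than fill 5416 (the response shape and noise level below are assumed purely for illustration):

```python
import numpy as np

# Synthetic stand-in for fill-5416-style data: per-interval instantaneous
# luminosity (x, arbitrary units) and the HFOC/reference luminosity ratio (y),
# generated from an assumed smooth response shape plus noise.
rng = np.random.default_rng(7)
x = rng.uniform(0.5, 8.0, 500)
true_shape = 1.0 + 0.012 * x - 0.0004 * x**2
y = true_shape * (1.0 + rng.normal(0.0, 1e-3, x.size))

# Fourth-order polynomial model of the nonlinear response.
coeffs = np.polyfit(x, y, 4)

def corrected_hfoc(raw, lumi):
    """Divide out the fitted nonlinear response."""
    return raw / np.polyval(coeffs, lumi)
```

In the analysis described above, the smooth function must also extrapolate to the low-pileup \(\sigma _{\mathrm {vis}}\) calibration, which constrains its behavior near zero luminosity.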
As described in Sect. 3, nonlinearity corrections are also needed for PLT. The corrections are modeled with a first-order polynomial. The parameters are time dependent, because of changes in the PLT operating conditions during the course of 2015 and 2016. These corrections, amounting to −0.2 to \(+\)1.4%/(Hz/\(\upmu \)b), are derived by comparing with RAMSES data in five different periods.
6 Detector stability and linearity
After the rate corrections are applied (as discussed in Sect. 5), comparisons between different luminometers are performed to assess remaining systematic effects on the luminosity measurement. Since PCC is expected to suffer the least from nonlinearities once out-of-time pileup effects are corrected, as described in Sect. 2.1, it is the preferred luminometer for these data sets in the following estimates of stability and potential nonlinearity.
6.1 Upper bounds on stability
One measurement of potential instability in the PCC luminosity comes from intrinsic monitoring (i.e., comparing rates from different sections/parts of the subdetector over time). In a perfectly stable system, the fractional rates among different subcomponents would exhibit no variation with time. Figure 16 shows the result of applying out-of-time pileup corrections (as discussed in Sect. 5.1) separately to each pixel layer or disk. The sum of the per-region corrections matches the total, nominal correction made for all PCC regions to better than 0.1%. After the corrections are applied, the relative rates are quite stable over the course of 2015 and 2016. This is also shown in Fig. 16, where the relative PCC rates over time are simultaneously fit to a first-order polynomial.
However, this method cannot detect global shifts in \(\sigma _{\mathrm {vis}}\), and so it is crucial to make comparisons with completely independent systems. With multiple independent systems available for comparison, luminometers displaying brief periods of instability can be clearly identified. The cross-detector comparison is repeated for the entire data set for each year to detect periods where a single luminometer experiences transient effects (e.g., data quality issues, some detector components off, anomalous signals, etc.). Figure 17 shows the ratio of the luminosity measurements for different pairs of detectors throughout 2016, highlighting (in red) periods where the ratios significantly deviate from unity and so the associated data are invalidated.
After the exclusion of invalidated data, which amount to \(\lesssim \)5% for each luminometer, the remaining input from different luminometers is used to set an upper bound on the uncertainty arising from instability in the luminosity measurement. PCC measurements are valid for 98.3 (94.3)% of the data set in 2015 (2016). The rest of the luminosity is provided by the next most stable luminometer, which is RAMSES (HFOC) for 2015 (2016). The primary luminosity, which is PCC or luminosity from the next most stable detector when PCC is unavailable, is compared with the next-best available luminometer (secondary). In Fig. 18, the latter is selected using the lowest standard deviation in the ratio relative to PCC over fixed time intervals of approximately 20\(\text { min}\) each. The position of the mean shows the agreement between the luminometers on the integrated luminosity. The width reflects stability effects, as well as residual statistical uncertainty in the luminosity measurement in each interval. From the distribution over the course of each year, the width is an upper limit on the uncertainty due to time dependencies in the luminometers. For 2015 (2016) a systematic uncertainty due to detector stability of 0.6 (0.5)% is derived.
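The selection of the secondary luminometer can be sketched as follows, assuming per-interval luminosity series (one entry per ~20 min interval) for the primary and for each candidate:

```python
import numpy as np

def pick_secondary(primary, candidates):
    """Choose the candidate luminometer whose per-interval ratio to the
    primary has the smallest relative standard deviation."""
    best_name, best_spread = None, np.inf
    for name, series in candidates.items():
        ratio = np.asarray(series, dtype=float) / np.asarray(primary, dtype=float)
        spread = np.std(ratio / np.mean(ratio))  # relative spread of the ratio
        if spread < best_spread:
            best_name, best_spread = name, spread
    return best_name, best_spread
```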
6.2 Time dependence of linearity
We make use of two methods for assessing the detector linearity. The primary method compares the ratio of the instantaneous luminosities from two luminometers per fill as a function of the instantaneous luminosity, which is estimated from the numerator. A first-order polynomial fit is performed and the slope is extracted. The slopes per fill are then studied as a function of time. No significant deviation over time is observed in the DT/PCC or RAMSES/PCC ratios in 2015, nor in the HFOC/PCC, DT/PCC, or RAMSES/PCC ratios in 2016.
To estimate the uncertainty, the fitted slopes are weighted according to the per-fill integrated luminosity. The mean values deviate slightly from zero, and the largest deviation is taken as the systematic uncertainty in the linearity of the PCC luminosity. Figure 19 shows the summary of these slopes for 2015 and 2016 at \(\sqrt{s}=13\,{\text {TeV}} \) both for the whole year, and for subsets of each data set with equal luminosity. The largest average slope is 0.26 (0.08)%/(\(\hbox {Hz}/\upmu \hbox {b}\)) in 2015 (2016), which translates into a 0.5 (0.3)% uncertainty in the integrated luminosity of the 2015 (2016) data set, where the average \(\mathcal {L} _{\mathrm {b}}\) is approximately 2.0 (3.3) \(\hbox {Hz}/\upmu \hbox {b}\).
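The translation from an average slope to an integrated-luminosity uncertainty is a simple product; for instance, the quoted 2015 numbers, 0.26%/(Hz/\(\upmu \)b) at an average \(\mathcal {L} _{\mathrm {b}}\) of about 2.0 Hz/\(\upmu \)b, give roughly 0.5%. A sketch with luminosity-weighted slopes:

```python
import numpy as np

def linearity_uncertainty(slopes, lumi_weights, avg_sbil):
    """Luminosity-weighted mean of per-fill slopes (in %/(Hz/ub)),
    translated into a relative uncertainty (in %) at the average
    single-bunch instantaneous luminosity (in Hz/ub)."""
    mean_slope = np.average(slopes, weights=lumi_weights)
    return abs(mean_slope) * avg_sbil

# 2015 numbers from the text: 0.26 %/(Hz/ub) at <L_b> ~ 2.0 Hz/ub -> ~0.5%.
```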
The alternative method makes use of the entire data set throughout the year, and extracts a single relative slope with a first-order polynomial fit. To remove effects from variations in the absolute luminosity scale over time, the per-fill ratios are shifted such that their extrapolation at zero luminosity is unity. The results are consistent with the primary method described above.
7 Total luminosity correction and uncertainty
For each data set, final rate corrections and final calibrations are applied to data in small time windows of \(2^{18}\) LHC orbits, approximately 23 s. All the measurements are summed to derive a total integrated luminosity measurement. The contributions to the systematic uncertainty in the integrated luminosity are divided into two general categories:
-
“normalization” uncertainty in the absolute luminosity scale, \(\sigma _{\mathrm {vis}}\), determined from the vdM scan procedure
-
“integration” uncertainty associated with \(\sigma _{\mathrm {vis}}\) variations over time (stability) and pileup (linearity and out-of-time pileup corrections).
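The \(2^{18}\)-orbit window length quoted at the start of this section can be cross-checked from the LHC revolution frequency (about 11245 Hz, a nominal LHC value assumed here, not stated in the text):

```python
# 2**18 LHC orbits at the nominal revolution frequency of about 11245 Hz
# (an assumed LHC design value) correspond to roughly 23 s.
F_REV_HZ = 11245.0
window_s = 2**18 / F_REV_HZ
```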
The magnitudes of the corrections applied to the absolute normalization from the vdM calibration are listed in Table 3, and Table 4 summarizes the sources of uncertainty. The dominant sources of normalization uncertainty are associated with the beam position monitoring (as discussed in Sect. 4.3.2), transverse factorizability (as explained in Sect. 4.4), and beam-beam effects (as described in Sect. 4.3.3).
The dominant sources of integration uncertainty arise from the linearity and stability of the primary relative to secondary luminosity measurements over the course of each year (as discussed in Sect. 6). In addition, the subleading systematic uncertainty due to out-of-time pileup corrections is considered for the PCC method since it is primarily used for the luminosity estimate.
Several sources of normalization uncertainty are considered to be correlated for the years studied because the scan procedures and analysis methodology are identical between the two vdM calibrations. The sources of the normalization uncertainty that are not correlated between the two vdM programs, and are partly statistical in nature, are the orbit drift, along with the scan-to-scan and bunch-to-bunch variations in the measured \(\sigma _{\mathrm {vis}}\). The latter are collectively referred to as “other variations in \(\sigma _{\mathrm {vis}}\) ” in Table 4.
Among the sources of integration uncertainty, the afterglow corrections are treated identically in the two data sets, and so this source of systematic uncertainty is correlated. The estimate of the uncertainty due to linearity is considered to be correlated, since it is derived from the PCC linearity in both years. On the other hand, the stability assessment is based on cross-detector comparisons. Although PCC is the primary luminometer in each data set, the secondary luminometer is different for each year. Since the source of instability cannot be assessed and contains time-dependent features, the uncertainty is not correlated.
The tool used for providing luminosity values to physics analyses applies the corrections to the raw luminosity values using the average per-bunch luminosity, rather than the individual bunch-by-bunch values. This potentially introduces an error in the case where these corrections include a nonlinear term and the bunch-by-bunch luminosity varies significantly among bunches. We evaluated the effect of this approximation on 2016 data, and found that the overall impact on the integrated luminosity was <0.1%.
Finally, the quantity measured by the luminometers is the luminosity delivered to CMS; however, the quantity of interest to most physics analyses is the luminosity corresponding to the data actually recorded by the CMS DAQ system. These are related by the deadtime, as obtained from the trigger and clock system of CMS [21]. In 2015 this measurement was affected by an algorithm issue in the trigger system and has an uncertainty of 0.5%, but this problem was resolved before data taking began in 2016, so in 2016 the impact is negligible (<0.1%) and uncorrelated with 2015.
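The delivered-to-recorded relation is a simple livetime scaling; a minimal sketch (the 3% deadtime in the test is purely illustrative):

```python
def recorded_luminosity(delivered, deadtime_fraction):
    """Recorded luminosity is the delivered luminosity scaled by the
    livetime, i.e. L_rec = L_del * (1 - deadtime)."""
    return delivered * (1.0 - deadtime_fraction)
```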
When applying the vdM calibration to the entire periods, the total integrated luminosity is 2.27 \(\,\text {fb}^{-1}\) with a relative precision of 1.6% in 2015, and 36.3 \(\,\text {fb}^{-1}\) with a relative precision of 1.2% in 2016. The combined \(2015+2016\) measurement has a precision of 1.2%, the same as for 2016 alone, since the 2016 data set is significantly larger and the 2015 precision is similar.
8 Summary
The luminosity calibration using beam-separation (van der Meer, vdM) scans has been presented for data from proton–proton collisions recorded by the CMS experiment in 2015 and 2016 when all subdetectors were fully operational. The main sources of systematic uncertainty are related to residual differences between the measured beam positions and the ones provided by the operational settings of the LHC magnets, the factorizability of the transverse spatial distributions of proton bunches, and the modeling of effects on the proton distributions due to electromagnetic interactions among protons in the colliding bunches. When applying the vdM calibration to the entire data-taking period, the relative stability and linearity of luminosity subdetectors (luminometers) are included in the uncertainty in the integrated luminosity measurement as well.
The resulting relative precision in the calibration from the vdM scans is 1.3 (1.0)% in 2015 (2016) at \(\sqrt{s}=13\,{\text {TeV}} \); the integration uncertainty due to luminometer-specific effects contributes 1.0 (0.7)%, resulting in a total uncertainty of 1.6 (1.2)%; when applying the vdM calibration to the entire periods, the total integrated luminosity is 2.27 (36.3)\(\,\text {fb}^{-1}\).
The final precision is among the best achieved at bunched-beam hadron colliders. Advanced techniques are used to estimate and correct for the bias associated with the beam position monitoring at the scale of \(\upmu \hbox {m}\), the factorizability of the transverse beam distribution, and beam-beam effects. In addition, detailed luminometer rate corrections and the inclusion of novel measurements (such as the data from the Radiation Monitoring System for the Environment and Safety) lead to precise estimates of the stability and linearity over time.
In the coming years, a similarly precise calibration of the real-time luminosity delivered by the LHC will become increasingly important for standard operations. Under those conditions, the impact of out-of-time pileup effects is expected to be larger, but in principle they can be mitigated using techniques described in this paper.
Data Availability Statement
This manuscript has no associated data or the data will not be deposited. [Authors’ comment: Release and preservation of data used by the CMS Collaboration as the basis for publications is guided by the CMS policy as written in its document “CMS data preservation, re-use and open access policy” (https://cms-docdb.cern.ch/cgi-bin/PublicDocDB/RetrieveFile?docid=6032&filename=CMSDataPolicyV1.2.pdf&version=2).]
References
P. Grafström, W. Kozanecki, Luminosity determination at proton colliders. Prog. Part. Nucl. Phys. 81, 97 (2015). https://doi.org/10.1016/j.ppnp.2014.11.002
ATLAS and CMS Collaborations, Addendum to the report on the physics at the HL-LHC, and perspectives for the HE-LHC: collection of notes from ATLAS and CMS. CERN (2019). https://doi.org/10.23731/CYRM-2019-007.Addendum. arXiv:1902.10229
CMS Collaboration, The CMS experiment at the CERN LHC. JINST 3, S08004 (2008). https://doi.org/10.1088/1748-0221/3/08/S08004
CMS Collaboration, Measurement of the differential cross sections for the associated production of a W boson and jets in proton–proton collisions at \(\sqrt{s}=13\) TeV. Phys. Rev. D 96, 072005 (2017). https://doi.org/10.1103/PhysRevD.96.072005. arXiv:1707.05979
CMS Collaboration, Measurement of the differential Drell–Yan cross section in proton–proton collisions at \( \sqrt{s} = 13\) TeV. JHEP 12, 059 (2019). https://doi.org/10.1007/JHEP12(2019)059. arXiv:1812.10529
CMS Collaboration, Measurement of the \({\rm t}\bar{{\rm t}}\) production cross section using events with one lepton and at least one jet in pp collisions at \(\sqrt{s}=13\) TeV. JHEP 09, 051 (2017). https://doi.org/10.1007/JHEP09(2017)051. arXiv:1701.06228
CMS Collaboration, Measurement of the \({\rm t}\bar{{\rm t}}\) production cross section, the top quark mass, and the strong coupling constant using dilepton events in pp collisions at \(\sqrt{s}=13\) TeV. Eur. Phys. J. C 79, 368 (2019). https://doi.org/10.1140/epjc/s10052-019-6863-8. arXiv:1812.10505
M. Hostettler et al., Luminosity scans for beam diagnostics. Phys. Rev. Accel. Beams 21, 102801 (2018). https://doi.org/10.1103/PhysRevAccelBeams.21.102801. arXiv:1804.10099
F. Antoniou et al., Can we predict luminosity?, in Proceedings of the 7th Evian Workshop on LHC Beam Operation, p. 125 (2016). http://cds.cern.ch/record/2293678
S. van der Meer, Calibration of the effective beam height in the ISR. ISR Report CERN-ISR-PO-68-31 (1968). https://cds.cern.ch/record/296752
C. Rubbia, Measurement of the luminosity of \({\rm p}\bar{{\rm p}}\) collider with a (generalized) van der Meer method. CERN \({\rm p}\bar{{\rm p}}\) Note 38 (1977). https://cds.cern.ch/record/1025746
LHCb Collaboration, Precision luminosity measurements at LHCb. JINST 9, P12005 (2014). https://doi.org/10.1088/1748-0221/9/12/P12005. arXiv:1410.0149
ATLAS Collaboration, Luminosity determination in pp collisions at \(\sqrt{s}= 8\) TeV using the ATLAS detector at the LHC. Eur. Phys. J. C 76, 653 (2016). https://doi.org/10.1140/epjc/s10052-016-4466-1. arXiv:1608.03953
L. Evans, P. Bryant, LHC machine. JINST 3, S08001 (2008). https://doi.org/10.1088/1748-0221/3/08/S08001
B. Salvachua, Overview of proton–proton physics during Run 2, in Proceedings of the 9th Evian Workshop on LHC Beam Operation (2019), p. 7. http://cds.cern.ch/record/2750272
J. Boyd, C. Schwick, Experiments—experience and future, in Proceedings of the 7th Evian Workshop on LHC Beam Operation (2017), p. 229. https://cds.cern.ch/record/2293538
CMS Collaboration, Measurement of the double-differential inclusive jet cross section in proton–proton collisions at \(\sqrt{s}=13\) TeV. Eur. Phys. J. C 76, 451 (2016). https://doi.org/10.1140/epjc/s10052-016-4286-3. arXiv:1605.04436
P. Skands, S. Carrazza, J. Rojo, Tuning PYTHIA 8.1: the Monash 2013 tune. Eur. Phys. J. C 74, 3024 (2014). https://doi.org/10.1140/epjc/s10052-014-3024-y. arXiv:1404.5630
T. Sjöstrand et al., An introduction to PYTHIA 8.2. Comput. Phys. Commun. 191, 159 (2015). https://doi.org/10.1016/j.cpc.2015.01.024. arXiv:1410.3012
CMS Collaboration, Performance of the CMS muon detector and muon reconstruction with proton–proton collisions at \(\sqrt{s}=13\) TeV. JINST 13, P06015 (2018). https://doi.org/10.1088/1748-0221/13/06/p06015. arXiv:1804.04528
CMS Collaboration, The CMS trigger system. JINST 12, P01020 (2017). https://doi.org/10.1088/1748-0221/12/01/P01020. arXiv:1609.02366
A.J. Bell, Beam and radiation monitoring for CMS, in Proceedings, 2008 IEEE Nuclear Science Symposium (2008), p. 2322. https://doi.org/10.1109/NSSMIC.2008.4774822
P. Lujan, Performance of the Pixel Luminosity Telescope for luminosity measurement at CMS during Run 2. PoS 314, 504 (2017). https://doi.org/10.22323/1.314.0504
M. Hempel, Development of a novel diamond based detector for machine induced background and luminosity measurements. PhD thesis, Brandenburg University of Technology Cottbus-Senftenberg, 2017. DESY-THESIS-2017-030. https://doi.org/10.3204/PUBDB-2017-06875
CMS Collaboration, CMS technical design report for the level-1 trigger upgrade. CMS Technical Design Report CERN-LHCC-2013-011, CMS-TDR-012 (2013). https://cds.cern.ch/record/1556311
CMS Collaboration, Event generator tunes obtained from underlying event and multiparton scattering measurements. Eur. Phys. J. C 76, 155 (2016). https://doi.org/10.1140/epjc/s10052-016-3988-x. arXiv:1512.00815
Geant4 Collaboration, Geant4—a simulation toolkit. Nucl. Instrum. Methods A 506, 250 (2003). https://doi.org/10.1016/S0168-9002(03)01368-8
W. Verkerke, D.P. Kirkby, The RooFit toolkit for data modeling, in Proceedings of the 13th International Conference for Computing in High-Energy and Nuclear Physics (CHEP03) (2003), p. MOLT007. arXiv:physics/0306116. http://inspirehep.net/literature/621398
CMS Collaboration, Description and performance of track and primary-vertex reconstruction with the CMS tracker. JINST 9, P10009 (2014). https://doi.org/10.1088/1748-0221/9/10/P10009. arXiv:1405.6569
G. Segura Millan, D. Perrin, L. Scibile, RAMSES: the LHC radiation monitoring system for the environment and safety, in Proceedings of the 10th International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS 2005) (2005). TH3B.1-3O. https://accelconf.web.cern.ch/ica05/proceedings/pdf/O4_011.pdf
A. Ledeul et al., CERN supervision, control, and data acquisition system for radiation and environmental protection, in Proceedings of the 12th International Workshop on Emerging Technologies and Scientific Facilities Controls (PCaPAC’18) (2019), p. 248 (FRCC3). https://doi.org/10.18429/JACoW-PCaPAC2018-FRCC3
J. Salfeld-Nebgen, D. Marlow, Data-driven precision luminosity measurements with Z bosons at the LHC and HL-LHC. JINST 13, P12016 (2018). https://doi.org/10.1088/1748-0221/13/12/P12016. arXiv:1806.02184
O.S. Brüning et al., LHC design report vol. 1: the LHC main ring. CERN (2004). https://doi.org/10.5170/CERN-2004-003-V-1
M. Hostettler, LHC luminosity performance. PhD thesis, University of Bern. CERN-THESIS-2018-051 (2018). https://cds.cern.ch/record/2319396
C. Barschel et al., Results of the LHC DCCT calibration studies. Technical Report CERN-ATS-Note-2012-026 PERF (2012). https://cds.cern.ch/record/1425904
D. Belohrad et al., The LHC fast BCT system: a comparison of design parameters with initial performance. Technical Report CERN-BE-2010-010 (2010). https://cds.cern.ch/record/1267400
A. Jeff et al., Longitudinal density monitor for the LHC. Phys. Rev. ST Accel. Beams 15, 032803 (2012). https://doi.org/10.1103/PhysRevSTAB.15.032803
A. Jeff, A longitudinal density monitor for the LHC. PhD thesis, University of Liverpool. CERN-THESIS-2012-240 (2012). https://cds.cern.ch/record/1513180
G. Papotti, T. Bohl, F. Follin, U. Wehrle, Longitudinal beam measurements at the LHC: the LHC Beam Quality Monitor, in Proceedings, 2nd International Particle Accelerator Conference (IPAC 2011) (2011), p. 1852 (TUPZ022). https://cds.cern.ch/record/1399087
M. Gasior, J. Olexa, R. Steinhagen, BPM electronics based on compensated diode detectors—results from development systems, in Proceedings of the 2012 Beam Instrumentation Workshop (BIW’12). (2012), p. 44 (MOPG010). https://cds.cern.ch/record/1476070
T. Persson et al., LHC optics corrections in Run 2, in Proceedings, 9th Evian Workshop on LHC Beam Operation. (2019), p. 59. https://cds.cern.ch/record/2750277
M. Gasior, G. Baud, J. Olexa, G. Valentino, First operational experience with the LHC Diode ORbit and OScillation (DOROS) system, in Proceedings of the 5th International Beam Instrumentation Conference (IBIC 2016) (2017), p. 43 (MOPG07). https://doi.org/10.18429/JACoW-IBIC2016-MOPG07
H. Bartosik, G. Rumolo, Production of the single bunch for van der Meer scans in the LHC injector chain. CERN Accelerator Note CERN-ACC-NOTE-2013-0008 (2013). https://cds.cern.ch/record/1590405
V. Balagura, Van der Meer scan luminosity measurement and beam–beam correction. Eur. Phys. J. C 81, 26 (2021). https://doi.org/10.1140/epjc/s10052-021-08837-y. arXiv:2012.07752
A. Babaev et al., Impact of beam-beam effects on absolute luminosity calibrations at the CERN Large Hadron Collider. Phys. Rev. ST Accel. Beams (2021) (to be submitted)
C. Barschel, Precision luminosity measurement at LHCb with beam–gas imaging. PhD thesis, RWTH Aachen University (2014). CERN-THESIS-2013-301. https://cds.cern.ch/record/1693671
G. Coombs, M. Ferro-Luzzi, R. Matev, Beam–gas imaging measurements at LHCb, in Proceedings, 7th International Beam Instrumentation Conference (IBIC 2018) (2019), p. 459 (WEPB13). https://doi.org/10.18429/JACoW-IBIC2018-WEPB13
W. Kozanecki, T. Pieloni, J. Wenninger, Observation of beam–beam deflections with LHC orbit data. CERN Accelerator Note CERN-ACC-NOTE-2013-0006 (2013). https://cds.cern.ch/record/1581723
S.M. White, Determination of the absolute luminosity at the LHC. PhD thesis, Université Paris-Sud 11. CERN-THESIS-2010-139, LAL-10-154 (2010). https://cdsweb.cern.ch/record/1308187
M. Klute, C. Medlock, J. Salfeld-Nebgen, Beam imaging and luminosity calibration. JINST 12, P03018 (2017). https://doi.org/10.1088/1748-0221/12/03/P03018. arXiv:1603.03566
J. Knolle, Measuring luminosity and the \({\rm t}{\bar{\rm t}}{\rm Z}\) production cross section with the CMS experiment. PhD thesis, Universität Hamburg, 2020. CERN-THESIS-2020-185, DESY-THESIS-2020-020. https://doi.org/10.3204/PUBDB-2020-03187
Acknowledgements
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC machine. We thank all contributors to the LHC Luminosity Calibration and Monitoring Working Group and, in particular, its co-coordinator Witold Kozanecki for the discussions on beam instrumentation and beam physics. We also thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid and other centers for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC, the CMS detector, and the supporting computing infrastructure provided by the following funding agencies: the Austrian Federal Ministry of Education, Science and Research and the Austrian Science Fund; the Belgian Fonds de la Recherche Scientifique, and Fonds voor Wetenschappelijk Onderzoek; the Brazilian Funding Agencies (CNPq, CAPES, FAPERJ, FAPERGS, and FAPESP); the Bulgarian Ministry of Education and Science; CERN; the Chinese Academy of Sciences, Ministry of Science and Technology, and National Natural Science Foundation of China; the Ministerio de Ciencia Tecnología e Innovación (MINCIENCIAS), Colombia; the Croatian Ministry of Science, Education and Sport, and the Croatian Science Foundation; the Research and Innovation Foundation, Cyprus; the Secretariat for Higher Education, Science, Technology and Innovation, Ecuador; the Ministry of Education and Research, Estonian Research Council via PRG780, PRG803 and PRG445 and European Regional Development Fund, Estonia; the Academy of Finland, Finnish Ministry of Education and Culture, and Helsinki Institute of Physics; the Institut National de Physique Nucléaire et de 
Physique des Particules/CNRS, and Commissariat à l’Énergie Atomique et aux Énergies Alternatives/CEA, France; the Bundesministerium für Bildung und Forschung, the Deutsche Forschungsgemeinschaft (DFG), under Germany’s Excellence Strategy – EXC 2121 “Quantum Universe” – 390833306, and under project number 400140256 - GRK2497, and Helmholtz-Gemeinschaft Deutscher Forschungszentren, Germany; the General Secretariat for Research and Technology, Greece; the National Research, Development and Innovation Fund, Hungary; the Department of Atomic Energy and the Department of Science and Technology, India; the Institute for Studies in Theoretical Physics and Mathematics, Iran; the Science Foundation, Ireland; the Istituto Nazionale di Fisica Nucleare, Italy; the Ministry of Science, ICT and Future Planning, and National Research Foundation (NRF), Republic of Korea; the Ministry of Education and Science of the Republic of Latvia; the Lithuanian Academy of Sciences; the Ministry of Education, and University of Malaya (Malaysia); the Ministry of Science of Montenegro; the Mexican Funding Agencies (BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI); the Ministry of Business, Innovation and Employment, New Zealand; the Pakistan Atomic Energy Commission; the Ministry of Science and Higher Education and the National Science Center, Poland; the Fundação para a Ciência e a Tecnologia, Portugal; JINR, Dubna; the Ministry of Education and Science of the Russian Federation, the Federal Agency of Atomic Energy of the Russian Federation, Russian Academy of Sciences, the Russian Foundation for Basic Research, and the National Research Center “Kurchatov Institute”; the Ministry of Education, Science and Technological Development of Serbia; the Secretaría de Estado de Investigación, Desarrollo e Innovación, Programa Consolider-Ingenio 2010, Plan Estatal de Investigación Científica y Técnica y de Innovación 2017–2020, research project IDI-2018-000174 del Principado de Asturias, and Fondo 
Europeo de Desarrollo Regional, Spain; the Ministry of Science, Technology and Research, Sri Lanka; the Swiss Funding Agencies (ETH Board, ETH Zurich, PSI, SNF, UniZH, Canton Zurich, and SER); the Ministry of Science and Technology, Taipei; the Thailand Center of Excellence in Physics, the Institute for the Promotion of Teaching Science and Technology of Thailand, Special Task Force for Activating Research and the National Science and Technology Development Agency of Thailand; the Scientific and Technical Research Council of Turkey, and Turkish Atomic Energy Authority; the National Academy of Sciences of Ukraine; the Science and Technology Facilities Council, UK; the US Department of Energy, and the US National Science Foundation. Individuals have received support from the Marie-Curie program and the European Research Council and Horizon 2020 Grant, contract nos. 675440, 724704, 752730, 765710, and 824093 (European Union); the Leventis Foundation; the Alfred P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation à la Recherche dans l’Industrie et dans l’Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium) under the “Excellence of Science – EOS” – be.h project n. 30820817; the Beijing Municipal Science and Technology Commission, no. 
Z191100007219010; the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Lendület (“Momentum”) Program and the János Bolyai Research Scholarship of the Hungarian Academy of Sciences, the New National Excellence Program ÚNKP, the NKFIA research grants 123842, 123959, 124845, 124850, 125105, 128713, 128786, and 129058 (Hungary); the Council of Scientific and Industrial Research, India; the National Science Center (Poland), contracts Opus 2014/15/B/ST2/03998 and 2015/19/B/ST2/02861; the National Priorities Research Program by Qatar National Research Fund; the Ministry of Science and Higher Education, project no. 0723-2020-0041 (Russia); the Programa de Excelencia María de Maeztu, and the Programa Severo Ochoa del Principado de Asturias; the Thalis and Aristeia programs cofinanced by EU-ESF, and the Greek NSRF; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University, and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); the Kavli Foundation; the Nvidia Corporation; the SuperMicro Corporation; the Welch Foundation, contract C-1845; and the Weston Havens Foundation (USA).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Funded by SCOAP3
Cite this article
Sirunyan, A.M., Tumasyan, A., Adam, W. et al. Precision luminosity measurement in proton–proton collisions at \(\sqrt{s} = 13\,\hbox {TeV}\) in 2015 and 2016 at CMS. Eur. Phys. J. C 81, 800 (2021). https://doi.org/10.1140/epjc/s10052-021-09538-2