1 Introduction

The ongoing LHC runs will define research activities in collider phenomenology for the coming years. Currently, the LHC is in its Run-3 phase, with an anticipated upgrade to the high-luminosity (HL) phase in 2029. The data from Run 2 (2016–2018) correspond to an integrated luminosity of about 140\(\,{\textrm{fb}}^{-1}\), which will increase to an expected 300\(\,{\textrm{fb}}^{-1}\) by the end of Run 3 for each of the multi-purpose experiments ATLAS and CMS. The HL-LHC will accelerate data acquisition further, with both experiments accruing over 3000\(\,{\textrm{fb}}^{-1}\) of data by the end of the LHC’s lifetime. While it is impressive to increase the existing dataset tenfold in a relatively short amount of time, it is generally believed that the sensitivity \(\mathcal {S}\) in searches for resonances and for the underlying dynamics of a theory that extends the standard model (SM) scales as the square root of the integrated luminosity \(\sqrt{\mathcal {L}} \), i.e.

$$\begin{aligned} \mathcal {S}\simeq \frac{S}{\sqrt{B}} \simeq \sqrt{\mathcal {L}} \frac{\sigma _S}{ \sqrt{\sigma _B}}, \end{aligned}$$
(1.1)

assuming LHC upgrades do not change the centre-of-mass energy \(\sqrt{s}\), so that the cross sections of signal, \(\sigma _S\), and background, \(\sigma _B\), stay unchanged. Similarly, for SM measurements with a negligible number of background events B compared to signal events S, the statistical uncertainty \(\delta \) is obtained from the variance of a Poisson distribution and hence scales as \(\delta \sim \sqrt{S} \sim \sqrt{\mathcal {L}} \). Despite a considerable increase in the LHC’s luminosity, this implies only a modest gain in sensitivity for new-physics searches and for measurements of SM parameters. More difficult to predict is the evolution of systematic uncertainties, which originate from different sources and constitute the leading uncertainties in some cases. On the basis of such a simple rescaling of the number of signal and background events with the integrated luminosity, several analyses have assessed the LHC physics potential for the HL-LHC [1]. In these analyses, the systematic uncertainties have been estimated under different scenarios, ranging from keeping them unchanged with respect to existing analyses to reducing them either by a factor of two or by \(\sqrt{\mathcal {L}}\) scaling. The HL-LHC projections contribute to a relatively gloomy outlook for the collider physics programme of the coming decades. Some of these projections and their updates continue to inform the future strategy of particle physics [2, 3].
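
For concreteness, the short Python sketch below evaluates Eq. (1.1) and the statistics-limited relative uncertainty for illustrative cross sections; the numerical values are placeholders and not taken from any specific analysis.

```python
import math

def significance(lumi_fb, sigma_s_fb, sigma_b_fb):
    """Naive S/sqrt(B) significance of Eq. (1.1) for a counting experiment."""
    s = sigma_s_fb * lumi_fb   # expected signal events
    b = sigma_b_fb * lumi_fb   # expected background events
    return s / math.sqrt(b)

# Illustrative cross sections (placeholders, not from a specific analysis):
sigma_s, sigma_b = 10.0, 1.0e4   # fb

for lumi in (140.0, 300.0, 3000.0):   # roughly Run 2, Run 3, HL-LHC [fb^-1]
    print(lumi, significance(lumi, sigma_s, sigma_b))

# The ratio between 3000 and 300 fb^-1 is sqrt(10) ~ 3.2, i.e. a tenfold
# increase in data yields only a ~3x gain if nothing else changes.  Likewise,
# a statistics-limited measurement has delta ~ sqrt(S), so its relative
# uncertainty delta/S = 1/sqrt(S) also shrinks only like 1/sqrt(L).
```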

However, we argue that the anticipated \(\sqrt{\mathcal {L}}\) scaling is too conservative for many relevant searches for effects beyond the SM (BSM) and for SM measurements. While it is correct that the cross sections for signal and background remain essentially unchanged during the HL-LHC runs, new observables will become accessible, leading to a much more significant gain in sensitivity than widely projected [4,5,6]. Concretely, entirely new phase space regions, e.g. final states at high transverse momentum or large invariant mass, will be populated by an appreciable number of events, opening them up for tailored search and measurement strategies. Novel reconstruction techniques designed for these exclusive phase space regions will thus significantly boost the exploitation of kinematic differences between signal and background, enhancing sensitivity in new physics searches. In addition, in any search the experimentally measured data are tensioned against a model assumption, often a high-dimensional extension of the SM; accessing such new observables will help to overconstrain the parameter space of these models, thereby increasing the sensitivity far beyond the estimated \(\sqrt{\mathcal {L}}\) scaling. As the amount of data increases, new reconstruction techniques and calibrations become available, not only reducing the systematic uncertainties estimated in previous analyses but also making new measurement strategies possible. Furthermore, once datasets become large enough to construct multi-dimensional measurement regions, the measured data can be used to constrain model parameters, thus reducing modelling uncertainties.

We showcase these observations explicitly in four example studies for representative scenarios and processes. To begin, in Sect. 2, we focus on top quark physics. Firstly, in Sect. 2.1, we consider the production of four top quarks. The rich final state allows the analysis to be expanded to more decay channels as more data become available. Extending the analysis to channels that were not considered in earlier iterations becomes possible thanks to potent background-reduction algorithms, complemented by sideband regions whose improved statistical precision tightly constrains the background yields. This showcases the prowess of novel machine learning (ML) reconstruction techniques and of model constraints from sideband regions. Secondly, we turn from measuring the cross section of four top quark production to measuring one of the most relevant fundamental parameters for electroweak physics, the top quark mass. In Sect. 2.2, we use the example of fully hadronic top quark decays merged into a single jet to demonstrate how novel reconstruction algorithms and improved calibration methods greatly enhance the potential of this measurement. The measurement exploits the benefit of exclusive high-transverse-momentum (\(p_\textrm{T}\)) phase space regions over inclusive measurements to extract additional information. We then turn towards electroweak physics in Sect. 3. Rare processes in the SM only become accessible at large integrated luminosities. We provide a phenomenological analysis of \(tW\!Z\) and \(t\overline{t}Z\) production, demonstrating how additional background reduction techniques can greatly enhance the potential of measuring the \(tW\!Z\) process over the known \(t\overline{t}Z\) background. In addition, we show how a differential measurement can enhance the sensitivity to new physics interactions compared to an inclusive measurement. Finally, we show that \(\sqrt{\mathcal {L}}\) scaling is too conservative for Higgs phenomenology, too. In a fourth example, in Sect. 4, we turn our attention to the leading Higgs boson production processes at the LHC. A comprehensive set of differential measurements can lift blind directions in the ample parameter space of new physics, encoded as Wilson coefficients of an effective field theory, evidently breaking the \(\sqrt{\mathcal {L}}\) scaling.

This representative set of examples highlights that \(\sqrt{\mathcal {L}}\) scaling provides too pessimistic an outlook for the HL-LHC and the coming decades of collider physics. Instead, the focus on exclusive phase space regions, dedicated reconstruction techniques, simultaneous access to multi-dimensional measurement regions, and advanced background reduction algorithms will substantially deepen our understanding of the underlying dynamics of particle physics.

2 Top physics

The HL-LHC is an ideal machine to study the properties of the top quark. With a total production cross section of about 850 and 950\(\,\textrm{pb}\) at \(\sqrt{s}=13\) and 14\(\,\textrm{TeV}\), respectively [7, 8], ATLAS and CMS will each have produced between 2.5 and 3.0 billion top-quark pair events by the end of the HL-LHC’s runtime. Because of the top quark’s short lifetime, its mass close to the electroweak scale, and its characteristic decay into three fermions, which enables the separation of top quark events from large QCD backgrounds, top quark studies at the HL-LHC will yield important insights into the top quark’s role in the mechanism of electroweak-symmetry breaking. Exhausting the HL-LHC’s potential in determining all top quark properties is thus of fundamental importance for its success.
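
As a simple cross-check of the quoted event count, multiplying the cross section by the HL-LHC integrated luminosity (with 1 pb = 1000 fb) reproduces the number stated above; this is back-of-the-envelope arithmetic only, with no acceptance or efficiency applied.

```python
# Expected number of top-quark pairs per experiment at the HL-LHC: N = sigma * L.
sigma_ttbar_pb = 950.0          # total ttbar cross section at 14 TeV [pb]
lumi_fb = 3000.0                # HL-LHC integrated luminosity [fb^-1]

n_ttbar = sigma_ttbar_pb * 1000.0 * lumi_fb   # 1 pb = 1000 fb
print(f"{n_ttbar:.2e} top-quark pairs")       # ~2.9e9, i.e. close to 3 billion
# Using the 13 TeV cross section of ~850 pb instead gives ~2.6e9,
# consistent with the 2.5-3.0 billion range quoted above.
```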

Fig. 1

Evolution of the expected significance in four top quark analyses by the CMS experiment. We compare the 2018 result based on an integrated luminosity of 35.9\(\,\textrm{fb}^{-1}\) [9] (red filled circle) and a projection from 2018 [10] (blue area) with the expected significance from the 2023 observation based on an integrated luminosity of 138\(\,\textrm{fb}^{-1}\) [11] (red cross). The CMS projection based on statistical uncertainties only is also shown (black solid line). The projection starts at an integrated luminosity of 78\(\,\textrm{fb}^{-1}\); the dashed grey lines extrapolate it with a \(\sqrt{\mathcal {L}}\) dependence down to 36\(\,\textrm{fb}^{-1}\). For illustration, the function \(0.25 \sqrt{\mathcal {L}/\textrm{fb}}\) is also shown (red dotted line), which describes well the expected significance obtained from statistical uncertainties only

2.1 Multi-top production

Four top quark final states (\(t\overline{t}t\overline{t}\)) are known to be sensitive to a plethora of resonant and non-resonant BSM interactions [12,13,14,15,16,17]. The production of \(t\overline{t}t\overline{t}\) is therefore expected to provide high sensitivity to BSM physics when the SM prediction is accurately known. The \(t\overline{t}t\overline{t}\) production has a very small cross section of \(13.4^{+1.0}_{-1.8}\,\textrm{fb} \) [18] at 13\(\,\textrm{TeV}\), with overwhelming SM backgrounds, making the search for \(t\overline{t}t\overline{t}\) experimentally very challenging. However, searches for \(t\overline{t}t\overline{t}\) final states also demonstrate the ability of analyses to break the \(\sqrt{\mathcal {L}}\) scaling already. As more data become available, increasingly exclusive selections can be used to combat the contributing backgrounds without compromising the robustness of predictions. In Fig. 1, we compare the 2018 CMS expected sensitivity [9] and its extrapolation to the HL-LHC [10] with the expected sensitivity from the 2023 observation of \(t\overline{t}t\overline{t}\) production [11]. The 2018 analysis uses same-sign dilepton and multilepton final states and reports an expected sensitivity to \(t\overline{t}t\overline{t}\) production of one standard deviation (\(\sigma \)) above the SM backgrounds, using data corresponding to 35.9\(\,{\textrm{fb}}^{-1}\) of integrated luminosity [9], shown as a filled red circle. When extrapolating this result to the HL-LHC, the cross section of \(t\overline{t}t\overline{t}\) production increases by a factor of about 1.3 when raising \(\sqrt{s}\) from 13 to 14\(\,\textrm{TeV}\). This increase, together with an expected increase of the integrated luminosity to 78\(\,{\textrm{fb}}^{-1}\), led to a predicted sensitivity of about \(2\sigma \) above the SM background in the projection, obtained using \(\sqrt{\mathcal {L}}\) scaling [10]. The projection of the expected \(t\overline{t}t\overline{t}\) significance for the HL-LHC is performed for statistical uncertainties only (solid line) and for three different assumptions on the systematic uncertainties (blue band). When including systematic uncertainties, the most optimistic scenario leads to an expected significance of \(4.1\sigma \) with 3\(\,{\textrm{ab}}^{-1}\), where the improvement from 300\(\,{\textrm{fb}}^{-1}\) to the HL-LHC is only about one standard deviation. Only in the unrealistic case of statistical uncertainties only does the projection result in an observation with a significance above \(5\sigma \). We argue that this pessimistic scenario is misleading because it does not consider methodical and technical improvements that can increase the sensitivity far beyond a simple reduction of systematic uncertainties in an existing analysis. This is demonstrated by a recent result by the CMS Collaboration, observing \(t\overline{t}t\overline{t}\) production with an expected significance of \(4.9\sigma \) using 138\(\,{\textrm{fb}}^{-1}\) of 13\(\,\textrm{TeV}\) data, shown by the red cross in Fig. 1, and an observed significance of \(5.6\sigma \) [11], much better than anticipated from the sensitivity estimates. The analysis from 2023 updates the previous analyses in this channel [9, 19] with significantly improved lepton identification, especially at low transverse momenta.
This improvement is achieved by using Boosted Decision Trees (BDTs) to discriminate leptons produced in decays of charm and bottom hadrons from those produced in decays of \(W\) bosons. In addition, BDTs are employed to discriminate between \(t\overline{t}t\overline{t}\) production and the large SM backgrounds. The analysis leverages a set of critical variables, including jet multiplicity, jet properties, and the number of jets identified as originating from \(b\) quarks (\(b\) jets), supplemented by associated kinematic variables. These multivariate analysis techniques have proven pivotal in isolating and studying the rare \(t\overline{t}t\overline{t}\) production, achieving a significance much better than the \(\sqrt{\mathcal {L}}\)-predicted significance of \(2.7\sigma \) at 138\(\,{\textrm{fb}}^{-1}\), even though this prediction is based on statistical uncertainties only and on a \(t\overline{t}t\overline{t}\) cross section 1.3 times as large.
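
The naive \(\sqrt{\mathcal {L}}\) extrapolation described above can be reproduced in a few lines; the sketch below uses only the numbers quoted in the text and in the caption of Fig. 1, and assumes a pure counting experiment with unchanged backgrounds.

```python
import math

def sqrt_l_projection(sig0, lumi0, lumi, xsec_ratio=1.0):
    """Naive sqrt(L) extrapolation of an expected significance, assuming it
    scales linearly with the signal cross section and with sqrt(L)."""
    return sig0 * xsec_ratio * math.sqrt(lumi / lumi0)

# 2018 CMS starting point: ~1 sigma expected at 35.9 fb^-1 (13 TeV);
# the tttt cross section grows by a factor ~1.3 when going to 14 TeV.
print(sqrt_l_projection(1.0, 35.9, 78.0, xsec_ratio=1.3))  # ~1.9, the "about 2 sigma" projection

# Illustrative stat-only curve from the caption of Fig. 1, at the Run-2 luminosity:
print(0.25 * math.sqrt(138.0))  # ~2.9, in the same ballpark as the 2.7 sigma
                                # sqrt(L) prediction quoted above and well below
                                # the 4.9 sigma expected significance achieved in 2023
```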

A very similar analysis can be made for the search for \(t\overline{t}t\overline{t}\) production with the ATLAS experiment. An ATLAS search for \(t\overline{t}t\overline{t}\) final states with 36.1\(\,{\textrm{fb}}^{-1}\) of 13\(\,\textrm{TeV}\) data resulted in an expected upper limit on the production cross section of 29\(\,\textrm{fb}\) at 95% confidence level [20]. The \(\sqrt{\mathcal {L}}\) scaling of this result predicted a significance of about \(5\sigma \) with 300\(\,{\textrm{fb}}^{-1}\) of integrated luminosity at \(\sqrt{s}=14\,\textrm{TeV} \) [21]. The most recent ATLAS result in this channel achieves an expected significance of \(4.3\sigma \) already with 140\(\,{\textrm{fb}}^{-1}\) of 13\(\,\textrm{TeV}\) data [22], overcoming the \(\sqrt{\mathcal {L}}\)-scaling expectation. This publication reported the first observation of \(t\overline{t}t\overline{t}\) production, with an observed significance of \(6.1\sigma \).

In addition to the same-sign dilepton and multilepton final states, other channels can be considered in the search for \(t\overline{t}t\overline{t}\) production. A combination of all-hadronic, single-lepton and opposite-sign dilepton events improves the expected sensitivity from the \(2.7\sigma \) obtained in Ref. [19] to \(3.2\sigma \) [23]. The example of \(t\overline{t}t\overline{t}\) production is instructive for many other processes at the LHC, where the availability of more data opens up channels that are not accessible in the first stage of data analysis and allows for a refined statistical treatment that improves the sensitivity far beyond early expectations.

2.2 Exclusive top quark mass measurements

Measuring the top quark mass \(m_{t}\) to high accuracy is a challenge at the LHC. While top quark-pair final states can be separated fairly straightforwardly from QCD and electroweak backgrounds, measuring \(m_{t}\) hinges on controlling the jet-energy resolution, pileup effects from overlapping \(pp\)-collision events, and the jet-energy scale specific to jets containing bottom quarks. Ongoing efforts to determine the top quark mass rely on fully leptonic [24,25,26], semi-leptonic [27, 28] and fully hadronic [29] top-quark decays. Traditionally, precision measurements of \(m_{t}\) use \(p_\textrm{T}\) thresholds for the lepton, the missing transverse momentum and the jets that are as low as possible, to minimise statistical uncertainties by exploiting the cross-section-enhanced production of top quarks with small transverse momentum. However, boosted top quark final states [30,31,32] provide several advantages in ameliorating the experimental challenges of reconstructing hadronically decaying top quarks [33], even to a degree where these final states can improve on the sensitivity of the cleaner final states with leptonic top decays [34, 35]. Concretely, one trades the smaller cross section of a boosted final state against the larger branching fraction of a fully hadronic top decay compared to a leptonic top decay, while benefitting from the reconstruction advantage of finding all hadronic decay products in a relatively small, confined detector region, i.e. inside a large jet. A measurement of \(m_{t}\) in this boosted topology is affected by systematic uncertainties complementary to those of measurements in low-\(p_\textrm{T}\) final states. In addition, it allows probing \(m_{t}\) at energy scales much higher than previously reached and can help to resolve ambiguities in relating \(m_{t}\) measurements to its expression in well-defined theoretical schemes [36,37,38,39].
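
To make the branching-fraction side of this trade-off explicit, the following sketch compares approximate hadronic and leptonic top decay rates; the W branching fractions are approximate standard values, and the boosted-selection fraction f_boost is a purely hypothetical placeholder used only to illustrate the counting.

```python
# Branching-fraction part of the trade-off discussed above (approximate values).
br_w_had = 0.675                 # W -> qq'
br_w_lep = 0.108                 # W -> e nu or mu nu (each flavour)

br_top_had = br_w_had            # hadronic top decay, t -> b qq'
br_top_lep = 2 * br_w_lep        # leptonic top decay into an electron or muon

print("hadronic / leptonic top BR:", br_top_had / br_top_lep)   # ~3

# This factor ~3 partly compensates the penalty of restricting the analysis to
# boosted topologies; f_boost is a hypothetical fraction of ttbar events with
# a hadronic top at pT > 400 GeV, not a measured number.
f_boost = 0.03
print("boosted hadronic yield factor:", br_top_had * f_boost)
print("resolved leptonic yield factor:", br_top_lep)
```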

Fig. 2

Projected evolution of the total (blue solid line) and statistical (red dashed line) uncertainties when measuring \(m_{t}\) from the jet mass in hadronic decays of boosted top quarks by the CMS Collaboration. The projected uncertainties are obtained using \(\sqrt{\mathcal {L}}\) scaling of the 8\(\,\textrm{TeV}\) measurement from 2017 [40], for an effective integrated luminosity of 3.4\(\,\textrm{fb}^{-1}\) corresponding to an equivalent measurement at 13\(\,\textrm{TeV}\) (see main text for details). The projected uncertainties are compared to CMS measurements from 2020 [41] and 2023 [42], using 13\(\,\textrm{TeV}\) data corresponding to integrated luminosities of 35.9 and 138\(\,\textrm{fb}^{-1}\), respectively

The first determination of \(m_{t}\) from the measured cross section as a function of the jet mass was performed by the CMS Collaboration in 2017 with 8\(\,\textrm{TeV}\) \(pp\) data corresponding to an integrated luminosity of 19.7\(\,\textrm{fb}^{-1}\) [40]. The hadronic top quark decays were reconstructed using a single large-radius jet with \(p_\textrm{T} >400\,\textrm{GeV} \). The differential top quark pair production cross section was then unfolded to the particle level as a function of the jet mass and used to extract \(m_{t}\). We use this measurement to predict the evolution of statistical and systematic uncertainties with \(\sqrt{\mathcal {L}}\) scaling. The measurement was performed at \(\sqrt{s}=8\,\textrm{TeV} \), but we compare it to measurements based on 13\(\,\textrm{TeV}\) data. To obtain comparable sensitivities, we scale the integrated luminosity of the 8\(\,\textrm{TeV}\) measurement such that the predicted number of events is the same as for a measurement at 13\(\,\textrm{TeV}\). This is achieved by taking the cross-section ratio between 8 and 13\(\,\textrm{TeV}\) for the phase space of this measurement, resulting in an effective integrated luminosity of 3.4\(\,\textrm{fb}^{-1}\) at \(\sqrt{s}=13\,\textrm{TeV} \). In other words, we expect a measurement at 13\(\,\textrm{TeV}\) with an integrated luminosity of 3.4\(\,\textrm{fb}^{-1}\) to have the same statistical and systematic uncertainties as the 8\(\,\textrm{TeV}\) measurement. For the prediction of the sensitivity, we scale not only the statistical but also the systematic uncertainty with \(\sqrt{\mathcal {L}}\), which is arguably too optimistic, and leads to a decrease of the total uncertainty proportional to \(1/\sqrt{\mathcal {L}}\) as a function of the integrated luminosity, as shown by the blue region in Fig. 2. The next measurement of \(m_{t}\) using the jet mass was published in 2020, based on 13\(\,\textrm{TeV}\) data with an integrated luminosity of 35.9\(\,{\textrm{fb}}^{-1}\) [41], and is shown as a second bar in Fig. 2. Its statistical uncertainty of 0.4\(\,\textrm{GeV}\) in \(m_{t}\) is much smaller than the projected 1.8\(\,\textrm{GeV}\) obtained from \(\sqrt{\mathcal {L}}\) scaling. This has been achieved by an improved jet reconstruction using the XCone algorithm [43] with a two-step clustering [44], which narrowed the lineshape of the jet mass distribution and improved the experimental resolution, leading to a much larger event count in the peak region and therefore a reduced statistical uncertainty. Even the total uncertainty is smaller than the optimistic \(\sqrt{\mathcal {L}}\) projection, because of a more precise calibration of the jet mass and a largely reduced susceptibility to pileup. The most recent measurement by the CMS Collaboration was published in 2023, using the full Run-2 dataset corresponding to an integrated luminosity of 138\(\,\textrm{fb}^{-1}\) [42].
For this measurement, the jet mass scale was calibrated using the hadronic W boson decay within the large-radius jet, and uncertainties in the modelling of final-state radiation were reduced with the help of an auxiliary measurement of angular correlations in the jet substructure. This approach led to a significant increase in precision, culminating in \(m_{t} = 173.06 \pm 0.84\,\textrm{GeV} \) [42]. We note that this precision is much better than the optimistic \(\sqrt{\mathcal {L}}\) projection of the total uncertainty obtained from the 8\(\,\textrm{TeV}\) measurement, which gives 1.4\(\,\textrm{GeV}\). It is even better than the \(\sqrt{\mathcal {L}}\) scaling of the statistical uncertainty alone, which results in a projected uncertainty of 0.94\(\,\textrm{GeV}\). This example prominently highlights the improvements in precision that are possible through advanced data analysis strategies, information from auxiliary measurements, and refined experimental calibration methods.
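
The \(\sqrt{\mathcal {L}}\) projections discussed in this section follow the simple scaling sketched below; the reference values are the ones quoted in the text, and the agreement is only up to rounding.

```python
import math

def sqrt_l_scaled(unc_ref, lumi_ref, lumi):
    """Project an uncertainty from a reference luminosity using sqrt(L) scaling."""
    return unc_ref * math.sqrt(lumi_ref / lumi)

# The 19.7 fb^-1 of 8 TeV data correspond to an effective 3.4 fb^-1 at 13 TeV,
# which is the reference point of the projection in Fig. 2.  Starting
# equivalently from the quoted 1.8 GeV statistical projection at 35.9 fb^-1:
print(sqrt_l_scaled(1.8, 35.9, 138.0))   # ~0.92 GeV, matching the quoted
                                         # 0.94 GeV stat-only projection up to rounding

# Both projections are clearly beaten by the actual results: a 0.4 GeV
# statistical uncertainty at 35.9 fb^-1 and a 0.84 GeV total uncertainty at 138 fb^-1.
```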

3 Electroweak gauge boson interactions

3.1 Rare electroweak processes: A \(tW\!Z \) case study

Processes involving top quarks (t) and electroweak (EW) gauge bosons (W, Z, \(\gamma \)) exhibit high sensitivity to BSM effects [45] and allow probing of the top-EW interaction, which is only poorly constrained by experimental data [46, 47]. The study of the process \(pp \rightarrow tW\!Z \) is crucial for probing the weak couplings of the top quark. This process is particularly sensitive to unitarity-violating behaviour due to modified electroweak interactions, setting it apart from other electroweak top production processes. Unlike the dominant QCD-induced production modes such as \(t\overline{t}Z\), which primarily experience rate rescalings from effective operators, \(tW\!Z\) offers a unique perspective due to its sensitivity to the weak interactions of top and bottom quarks and to the self-interactions of \(\text {SU}(2)\) gauge bosons. This sensitivity is not as pronounced in related processes such as \(tW\!j \) and \(tZ\!j \), making \(tW\!Z \) a more distinct and effective probe. Additionally, \(tW\!Z \) is not influenced by top quark four-fermion operators at tree level, further enhancing its significance as a complementary tool for exploring new interactions in the top quark sector [48].

The first evidence of the standard model production of a top quark in association with a W and a Z boson in multi-lepton final states was reported by CMS [49], using data from 2016 to 2018, with an integrated luminosity of \(138\,{\textrm{fb}}^{-1} \). The measured cross section was found to be \(354 \pm 54 \mathrm {(stat)} \pm 95 \mathrm {(syst)}\) \(\,\textrm{fb}\), with an observed significance of 3.4 standard deviations, compared to an expected significance of 1.4 standard deviations.

The sensitivity of \(tW\!Z\) production, a rare electroweak process that probes a range of relevant top-quark interactions, to SMEFT operators at the HL-LHC has been studied in Ref. [50]. Couplings probed in the top sector are particularly relevant in scenarios of partial top compositeness [51] or general vector-like quark extensions [52]. To highlight the potential of this rare process as detailed kinematic information becomes increasingly accessible, we consider the \(\mathcal {O}_{tZ}\) and \(\mathcal {O}_{\phi Q}^{(3)}\) operators as defined in the SMEFT@NLO [53] model, with

$$\begin{aligned} \mathcal {O}_{tZ} = -\sin \theta _W\mathcal {O}_{tB} + \cos \theta _W\mathcal {O}_{tW} \,. \end{aligned}$$
(3.1)

The operators on the right-hand side refer to the Warsaw-basis convention [54] and \(\theta _W\) is the weak mixing angle. Although not an exhaustive list, these couplings provide relevant and representative deformations from the expected \(tW\!Z \) SM outcome, and the \(tW\!Z \) process is particularly sensitive to these types of operators. The similarity of the final state to that of the \(t\overline{t}Z \) process complicates the experimental identification of \(tW\!Z \) significantly. For this reason, we study the effect of the \(\mathcal {O}_{tZ}\) and \(\mathcal {O}_{\phi Q}^{(3)}\) operators for both processes simultaneously.

To simulate the two processes we use MadGraph5_aMC@NLO [55], with the EFT effects taken into account using the SMEFT@NLO model. We generate \(pp \rightarrow tW\!Z \) and \(pp \rightarrow t\overline{t}Z \) at \(\sqrt{s} = 13\,\textrm{TeV} \) under the SM assumption, where the simulation of \(tW\!Z\) makes use of the diagram removal technique [56, 57]. The events are then reweighted to different BSM points by simulating the effects of the previously mentioned EFT operators for different values of the Wilson coefficients, using the reweighting method [58]. The events are generated at next-to-leading order in QCD, and the parton shower is modelled using PYTHIA8 [59]. The new physics scale is chosen to be \(\Lambda =1\,\textrm{TeV} \), and we include quadratic terms \(\sim \Lambda ^{-4}\) in the analysis to highlight the power of high-energy search regions [48].
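
The reweighted samples determine, bin by bin, a quadratic dependence of the expected yields on the Wilson coefficients. A minimal sketch of this parametrisation is given below; the coefficient values and function names are illustrative placeholders only, not results of the simulation.

```python
import numpy as np

def eft_yield(c, n_sm, n_lin, n_quad):
    """Quadratic EFT parametrisation of an expected yield:
    N(c) = N_SM + sum_i c_i N_i^lin + sum_ij c_i c_j N_ij^quad,
    with linear terms ~1/Lambda^2 and quadratic terms ~1/Lambda^4."""
    c = np.asarray(c, dtype=float)
    return n_sm + n_lin @ c + c @ n_quad @ c

# Placeholder coefficients for (c_tZ, c_phiQ3) in one pT,Z bin (illustrative only):
n_sm   = 120.0                          # SM expectation
n_lin  = np.array([4.0, -6.0])          # interference terms
n_quad = np.array([[2.5, 0.3],
                   [0.3, 1.8]])         # quadratic terms

print(eft_yield([0.0, 0.0], n_sm, n_lin, n_quad))   # SM point
print(eft_yield([1.0, -0.5], n_sm, n_lin, n_quad))  # an arbitrary BSM benchmark
```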

Fig. 3

Constraints from a fit to \(t\overline{t}Z\) and \(tW\!Z\) simulated data, as described in the text, on the SMEFT operators \(\mathcal {O}_{tZ}\) and \(\mathcal {O}_{\phi {}Q}^{(3)}\), for \(\Lambda =1\,\textrm{TeV} \). The blue-shaded area shows constraints from inclusive measurements, while the red-shaded area refers to constraints including differential \(p_{\text {T},Z}\) measurements. The outermost blue area bounded by the dashed line corresponds to expected constraints from the CMS Run 2 measurement of \(tW\!Z\) production [49]

We simulate the \(t\overline{t}Z\) +\(tW\!Z\) measurement by including realistic efficiencies and acceptances from the CMS experiment. Backgrounds from diboson and \(t\overline{t} {}+X\) production, as well as backgrounds from misidentified leptons (non-prompt backgrounds), are estimated from the recent CMS analysis [49]. Our analysis considers three- and four-lepton final states. For each final state, the events are separated into a \(tW\!Z\) signal region (SR) and a \(t\overline{t}Z\) control region (CR). The event yields are obtained using the CMS reconstruction and identification efficiencies, most importantly those for electrons [60] and muons [61]. The signal acceptances in the SR and CR, as well as systematic uncertainties, are estimated from Ref. [49]. With this setup, we reproduce the CMS results in terms of signal yields and signal strengths for \(t\overline{t}Z\) and \(tW\!Z\) within about 10%.

The interpretation of these results in the context of the SMEFT is shown in Fig. 3. The dashed line corresponds to the expected limits at 95% confidence level on \(c_{tZ}\) and \(c_{\phi Q}^{(3)}\) from the CMS analysis with an integrated luminosity of \(138\,{\textrm{fb}}^{-1} \). Four regions enter this analysis: the SRs and CRs of the three- and four-lepton final states. Subsequent blue shadings correspond to increasing luminosities, using the 138\(\,{\textrm{fb}}^{-1}\) results as the baseline. Besides the higher event counts, the systematic uncertainties are also reduced following an approximate \(\sqrt{\mathcal {L}}\) scaling. Given the nature of the included coupling modifications, the sensitivity of the inclusive selection plateaus when uncertainties at low momentum transfers dwarf the relative deviation from the SM. This process is therefore a prime example of the \(\sqrt{\mathcal {L}} \) scaling being badly broken by the nature of the physics that characterises the BSM scenario: already at 300\(\,{\textrm{fb}}^{-1}\) we expect to have enough data to populate a differential fit of these Wilson coefficients, thereby avoiding the sensitivity loss of the inclusive selection. The differential analysis is performed in the distribution of the Z boson \(p_\textrm{T}\), with a bin size much larger than the experimental resolution such that resolution effects can be neglected. Increasing the luminosity then gains further statistical and likely systematic control, improving the separation between \(t\overline{t}Z\) and \(tW\!Z\) and thus enabling much tighter constraints than the \(\sqrt{\mathcal {L}} \) scaling of the inclusive selection could provide (red shading in Fig. 3). It is worth noting that the constraints on \(c_{tZ}\) are comparably weak in marginalised fits so far, see e.g. Ref. [62]. A differential measurement of the combined \(t\overline{t}Z\) +\(tW\!Z\) processes can alleviate this and is likely to improve the sensitivity to this operator dramatically.
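
The mechanism behind the plateau and its removal by a differential measurement can be illustrated with a toy \(\chi^2\) comparison; all yields, uncertainties and EFT coefficients below are illustrative placeholders rather than values from the CMS analysis or from Ref. [50], and only a linear, energy-growing deformation is kept for simplicity.

```python
import numpy as np

def chi2(c, sm, lin, rel_unc):
    """Toy chi-square for a linear EFT deformation of binned yields,
    with uncorrelated relative uncertainties per bin."""
    pred = sm * (1.0 + c * lin)          # energy-growing relative deviation
    return np.sum(((pred - sm) / (rel_unc * sm)) ** 2)

# Illustrative pT,Z bins: yields fall steeply while the relative EFT effect grows.
sm      = np.array([900.0, 300.0, 80.0, 15.0])   # expected SM events per bin
lin     = np.array([0.02, 0.05, 0.15, 0.50])     # relative deviation per unit c
rel_unc = np.array([0.05, 0.06, 0.12, 0.30])     # total relative uncertainty per bin

c_test = 1.0
print("differential chi2:", chi2(c_test, sm, lin, rel_unc))   # ~5

# Inclusive analysis: a single bin, with the deviation diluted by the low-pT bulk.
sm_inc  = sm.sum()
lin_inc = (sm * lin).sum() / sm_inc              # yield-weighted average effect
print("inclusive chi2:", chi2(c_test, np.array([sm_inc]),
                              np.array([lin_inc]), np.array([0.05])))   # ~0.7
```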

4 Higgs physics

4.1 Higgs property measurements

Effective field theory has become the consensus framework for reporting and parametrising sensitivities in the electroweak sector, particularly concerning the Higgs boson. Deviations from the Higgs boson’s SM phenomenology are parametrised by effective operators that encode the imprint of BSM physics at higher mass scales. The large number of operators [54] results in an equally large number of unconstrained Wilson coefficients. Because some of the operators exhibit correlated effects on measurable quantities, a parametrisation in the Wilson coefficients results in phenomenologically insensitive directions in the high-dimensional parameter space when constraints are derived from data. A famous example of this is gluon-fusion Higgs production, which exhibits a blind direction \(c_{\Phi G} = - {\alpha _s \over 12 \pi y_t} c_{t\Phi }\): because the production of the Higgs boson is loop-induced, the effective interaction between the Higgs field and the gluon cannot be distinguished from the effective interaction between the Higgs field and the top quark. This blind direction of inclusive measurements is broken when the Compton wavelength of the top quark is experimentally resolved, e.g. through \(H\) +jet and \(t\overline{t} H \) production. The differential information available from these processes directly breaks the \(\sqrt{\mathcal {L}} \) scaling, which enhances sensitivity but never resolves the degeneracy present in the inclusive rate [63, 64]. As shown in Ref. [65] for example, this discrimination is stable against expected theoretical uncertainties.
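
Evaluating the slope of this blind direction numerically, assuming \(\alpha_s \approx 0.118\) and \(y_t \approx 1\) as approximate input values, makes explicit how strongly the two Wilson coefficients are correlated in the inclusive rate.

```python
import math

# Slope of the blind direction c_PhiG = -(alpha_s / (12*pi*y_t)) * c_tPhi
# quoted above, for approximate values alpha_s ~ 0.118 and y_t ~ 1:
alpha_s, y_t = 0.118, 1.0
slope = alpha_s / (12.0 * math.pi * y_t)
print(slope)   # ~3e-3: a per-mille-level shift in c_PhiG compensates an O(1)
               # shift in c_tPhi in the inclusive gg -> H rate, so inclusive
               # measurements alone cannot separate the two operators
```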

Fig. 4

Constraints from a global fit, as described in the text, on the SILH-basis operators \(\mathcal {O}_W\) and \(\mathcal {O}_{HW}\). The blue-shaded area shows constraints from inclusive measurements, while the red-shaded area refers to constraints including differential \(p_{\text {T},H}\) measurements. Evaluating these constraints at various integrated luminosities, ranging from 100\(\,{\textrm{fb}}^{-1}\) to 3000\(\,{\textrm{fb}}^{-1}\), highlights that differential measurements are needed to resolve the blind direction left by inclusive measurements, thereby improving along the direction \(c_W \simeq -c_{HW}\) over any anticipated \(\sqrt{\mathcal {L}} \) scaling

A similar yet slightly more dramatic degeneracy is resolved when non-trivial momentum dependencies become experimentally accessible. This is particularly highlighted by the interplay of the \(O_W\) and \(O_{HW}\) operators in the SILH convention [66], which parametrise non-SM momentum dependencies of the Higgs and gauge bosons [67, 68].

We follow the analysis in Ref. [69] and perform a global fit, including all significant Higgs boson production and decay modes. To calculate the Higgs boson yields, we rely on the narrow-width approximation

$$\begin{aligned} \sigma (pp \rightarrow (H \rightarrow YY)+X) = \sigma (pp \rightarrow H+X)\,\textrm{BR}(H\rightarrow YY), \end{aligned}$$
(4.1)

where X represents any associated reconstructed objects, i.e. jets, top quarks or gauge bosons. To deform the Standard Model, we include the eight operators \(\bar{c}_H\), \(\bar{c}_{u,3}\), \(\bar{c}_{d,3}\), \(\bar{c}_W\), \(\bar{c}_{HW}\), \(\bar{c}_{HB}\), \(\bar{c}_\gamma \) and \(\bar{c}_g\). We restrict the analysis to genuine dimension-six effects that arise from the interference of the dimension-six amplitude with the Standard Model, i.e.

$$\begin{aligned} |\mathcal {M}|^2 = |\mathcal {M}_{\textrm{SM}}|^2 + 2~\textrm{Re} \left\{ \mathcal {M}_\textrm{SM} \mathcal {M}^*_{d=6} \right\} + \mathcal {O}(1/\Lambda ^4). \end{aligned}$$
(4.2)

Including Higgs production modes in which the Higgs boson is produced in association with jets, top quarks or gauge bosons results in a finite transverse momentum distribution. Thus, the fit can explore the sensitivity of Higgs boson measurements in exclusive phase space regions. We bin the Higgs boson’s transverse momentum in each production channel in five 100 GeV bins, from 0 to 500 GeV. To tension the theoretical predictions with experimental data, we rely on measurements obtained by ATLAS and CMS and their respective systematic and theoretical uncertainty projections. Concretely, we include unfolded \(p_{\text {T},H}\) distributions for the production processes \(pp \rightarrow H\), \(pp \rightarrow H+j\), \(pp \rightarrow H+2j\), \(pp \rightarrow t\overline{t}H\) and \(pp \rightarrow VH\). The decay modes included in the fit are \(H \rightarrow \bar{b}b\), \(H \rightarrow \gamma \gamma \), \(H \rightarrow \tau ^+ \tau ^-\), \(H \rightarrow 4\,l\), \(H \rightarrow 2\,l 2\nu \), \(H \rightarrow Z\gamma \) and \(H \rightarrow \mu ^+ \mu ^-\). For a bin to be included in the fit, we require

$$\begin{aligned} N_{\textrm{events}} = \epsilon _p \epsilon _d \sigma (H+X) \textrm{BR}(H\rightarrow YY) \mathcal {L} \gtrsim 5~, \end{aligned}$$
(4.3)

where \(\epsilon _p\) and \(\epsilon _d\) refer to the reconstruction efficiencies specific to the respective production and decay modes. A detailed description of efficiencies, acceptances, and systematic uncertainties is given in Ref. [69]. Depending on the integrated luminosity \(\mathcal {L}\), a larger or smaller number of independent measurements is therefore included in this global fit: for 300\(\,{\textrm{fb}}^{-1}\) we find 88 measurements and for 3000\(\,{\textrm{fb}}^{-1}\) 123 measurements satisfying Eq. (4.3). This already showcases the growing amount of information from exclusive phase space regions as the integrated luminosity increases.
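
A minimal sketch of the bin-selection criterion of Eq. (4.3) is given below; all efficiencies, cross sections and branching ratios are hypothetical placeholders (not taken from Ref. [69]), chosen only to illustrate how a bin can enter the fit at 3000\(\,{\textrm{fb}}^{-1}\) but not yet at 300\(\,{\textrm{fb}}^{-1}\).

```python
def n_events(eff_prod, eff_dec, xsec_fb, br, lumi_fb):
    """Expected events in one (production, decay, pT-bin) combination, as in Eq. (4.3)."""
    return eff_prod * eff_dec * xsec_fb * br * lumi_fb

def included(eff_prod, eff_dec, xsec_fb, br, lumi_fb, threshold=5.0):
    """A bin enters the global fit once it contains about 5 or more expected events."""
    return n_events(eff_prod, eff_dec, xsec_fb, br, lumi_fb) >= threshold

# Hypothetical example: one high-pT,H bin of a Higgs production process with
# H -> gamma gamma (all numbers purely illustrative):
eff_p, eff_d = 0.4, 0.6
xsec_bin_fb  = 12.0          # cross section in this pT,H bin [fb]
br_gamgam    = 2.3e-3

for lumi in (300.0, 3000.0):
    print(lumi, n_events(eff_p, eff_d, xsec_bin_fb, br_gamgam, lumi),
          included(eff_p, eff_d, xsec_bin_fb, br_gamgam, lumi))
# ~2 expected events at 300 fb^-1 (excluded), ~20 at 3000 fb^-1 (included),
# which is how the number of fitted measurements grows from 88 to 123.
```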

A comprehensive analysis of all Wilson coefficients can be found in Refs. [65, 69]. In Fig. 4, we visualise the constraints obtained for \(O_W\) and \(O_{HW}\) while marginalising over all other operators. The blue contour shows the inclusive measurements, i.e. only considering overall rates without the differential binning in \(p_{\text {T},H}\). The shading refers to different integrated luminosities. The constraints considering the differential distribution of the Higgs boson are shown in red. As evidenced in Fig. 4, the inclusive measurements can never resolve the blind direction \(c_W \simeq -c_{HW}\). Differential measurements that access the more exclusive phase space regions significantly enhance the sensitivity along these directions, and the associated discrimination power, beyond what \(\sqrt{\mathcal {L}} \) scaling forecasts.

5 Summary

Extrapolations of sensitivity estimates are crucial in shaping particle physics roadmaps and phenomenological programmes. Overly pessimistic expectations, albeit constituting a conservative point of reference, can therefore be detrimental to experimental as well as theoretical progress. Projections based on existing analyses give sensitivity improvements that scale with the square root of the integrated luminosity (\(\sqrt{\mathcal {L}} \)) [1]. While these projections are conceptually correct, the failure to include important aspects of future analyses can lead to a severe underestimation of the achievable sensitivity. In this work, we challenge the longstanding assumption that the sensitivity of particle physics experiments, particularly at the High-Luminosity Large Hadron Collider (HL-LHC), scales with \(\sqrt{\mathcal {L}} \). Our analysis of representative examples demonstrates that this \(\sqrt{\mathcal {L}} \) scaling is exceedingly conservative, especially in the context of the HL-LHC’s advanced capabilities in utilising exclusive final states and advanced reconstruction methods.

Concretely, we focused on three key areas: top quark physics, rare electroweak processes, and Higgs property measurements. These examples reveal that more differential measurements, the study of rare processes that have so far not been experimentally accessible, increasingly refined search strategies, and advanced analysis techniques substantially enhance the experimental sensitivity, surpassing the traditional \(\sqrt{\mathcal {L}} \)-scaling predictions. In the realm of top quark physics, our findings indicate that the sensitivity, not only in searches for new physics in large invariant-mass final states but also in measurements of fundamental standard model parameters, can increase significantly, providing deeper insights into the top quark’s properties and interactions. Our investigations into rare electroweak processes have evidenced the potential for discoveries beyond the standard model. The enhanced sensitivity in these processes could lead to observations of new phenomena, offering a window into physics beyond our current theoretical framework. Furthermore, for Higgs property measurements, the application of new analysis methodologies has shown potential for more precise determinations of Higgs boson characteristics, a cornerstone for understanding the standard model of particle physics and beyond.

Although these are only examples chosen to illustrate the impact of improved and adapted data analysis strategies, they demonstrate, for a wide range of applications, that the success of the HL-LHC, and therefore of the entire high-energy physics programme, may well hinge on the use of novel reconstruction techniques, e.g. jet substructure observables, machine-learning and matrix-element methods, and on the focus on signal-rich exclusive phase space regions. This poses well-known challenges for the experimental community in accessing such information, for example through advances in tracking, calibration and particle reconstruction. Furthermore, theoretical progress is crucial for the reduction of modelling and theory uncertainties, which already limit the sensitivity of high-precision measurements today. A wide effort from the experimental and theoretical particle physics communities is therefore dedicated to developing novel tools, calculations, and advanced data analysis methods, which already break the \(\sqrt{\mathcal {L}}\) scaling in present analyses.

Therefore, these findings suggest a change in how sensitivity is estimated for future collider experiments, broadening such studies with so-far unexplored final states, more differential measurements, and modern analysis techniques as more data become available. Our research indicates that the HL-LHC could be significantly more potent in probing the fundamental aspects of particle physics than previously anticipated. The findings thus dispel the myth of \(\sqrt{\mathcal {L}} \) scaling and call for a reevaluation of experimental strategies and data analysis techniques, encouraging the scientific community to look beyond conventional assumptions and to explore the full potential of the HL-LHC.