Leveraging on-shell interference to search for FCNCs of the top quark and the Z boson

Flavour-changing neutral currents (FCNCs) involving the top quark are highly suppressed within the Standard Model (SM). Hence, any signal in current or planned future collider experiments would constitute a clear manifestation of physics beyond the SM. We propose a novel, interference-based strategy to search for top-quark FCNCs involving the Z boson that has the potential to complement traditional search strategies due to a more favourable luminosity scaling. The strategy leverages on-shell interference between the FCNC and SM decays of the top quark into hadronic final states. We estimate the feasibility of the most promising case of anomalous tZc couplings using Monte Carlo simulations and a simplified detector simulation. We consider the main background processes and discriminate the signal from the background with a deep neural network that is parametrised in the value of the anomalous tZc coupling. We present sensitivity projections for the HL-LHC and the FCC-hh. We find an expected 95% CL upper limit of B_excl(t → Zc) = 6.4 × 10^−5 for the HL-LHC. In general, we conclude that the interference-based approach has the potential to provide constraints that are both competitive with and complementary to traditional multi-lepton searches and other strategies that have been proposed to search for tZc FCNCs.


1 Introduction
A flavour-changing-neutral-current (FCNC) process is one in which a fermion changes its flavour without changing its gauge quantum numbers. In the Standard Model (SM), FCNCs are absent at tree level, suppressed by Cabibbo-Kobayashi-Maskawa (CKM) elements, and potentially additionally suppressed by fermion mass differences at loop level via the Glashow-Iliopoulos-Maiani (GIM) mechanism [1]. The SM predictions for FCNCs that involve the top quark are extremely small due to the highly effective GIM suppression. The resulting branching ratios (B) for two-body top-quark decays via FCNCs range from B(t → uH)_SM ∼ 10^−17 to B(t → cg)_SM ∼ 10^−12 [2-7]. However, the top quark plays an important role in multiple theories beyond the SM due to its large coupling to the Higgs boson, which is relevant for models addressing the hierarchy problem and for models of electroweak-scale baryogenesis. Several of these models predict enhanced top-quark FCNC couplings [4, 8-12], which we collectively denote here by g. Typically, constraints on g from low-energy and electroweak-precision observables are mild [13-18], motivating direct searches for FCNC top-quark decays (t → qX with q = u, c) and FCNC single-top-quark production (pp → tqX or qX → t). While we focus on FCNC interactions with SM bosons in this paper, FCNC interactions of the top quark with new scalar bosons have been proposed [19] and searched for [20].
Using data taken at the LHC, the ATLAS and CMS collaborations have placed the most stringent upper limits on top-quark FCNC interactions via a photon [21, 22], Z boson [23, 24], Higgs boson [25, 26], and gluon [27, 28]. Even though many searches take advantage of both the FCNC decay and single production to search for a non-zero g, the limits are traditionally presented in terms of FCNC branching ratios, B(t → qX). The most stringent limits at 95% confidence level (CL) range from B(t → uγ) < 8.5 × 10^−6 [21] to B(t → cH) < 7.3 × 10^−4 [26]. For FCNCs via the Z boson, the most stringent limits are obtained in a search that uses the decay of the Z boson to e+e− or µ+µ− in association with a semileptonically decaying top quark [23]. The resulting 95% CL upper limits on g translate to B(t → uZ) < (6.2-6.6) × 10^−5 and B(t → cZ) < (1.2-1.3) × 10^−4, depending on the chirality of the coupling.
While the limits in Ref. [23] are obtained with L_int = ∫L dt = 139 fb^−1 of data at √s = 13 TeV, the HL-LHC is expected to provide approximately 3000 fb^−1 at 14 TeV. Improved sensitivity to top-quark FCNC processes is hence expected at the HL-LHC, because statistical uncertainties play an important role in these searches. With systematic uncertainties being subdominant, one may naively expect the upper limits on B(t → qZ) to scale with the shrinking statistical uncertainty.^1 Using this extrapolation, the sensitivity is expected to improve roughly by a factor √(3000 fb^−1 / 139 fb^−1) ≈ 5 at the HL-LHC.^2 The reason for this luminosity scaling is that the partial width for the two-body top-quark FCNC decay and the cross section for FCNC single production are proportional to g^2 due to the lack of interference with SM processes.^3 As a result, the sensitivity to B(t → qX) naively scales as 1/√L_int and the sensitivity to g as 1/L_int^{1/4}. Finding instead an observable that scales linearly with g due to interference with the SM would modify the luminosity scaling favourably. Such an interference-based approach would hence be very useful for the search for top-quark FCNCs. In the present work we propose such a novel approach and investigate the feasibility of employing it to search for tZq couplings. There are multiple phenomenologically relevant examples in which New-Physics (NP) interference with the SM is instrumental for precision NP searches. Examples include searching for H → cc via exclusive Higgs decays, which makes use of interference with the SM H → γγ amplitude [31], or searching for NP in high-energy diboson distributions by exploiting the interference between the SM and energy-enhanced NP contributions from dimension-six operators [32, 33]. Here, we introduce a new setup that can be applied to improve searches for top-quark FCNCs. As opposed to other approaches, here both NP and SM amplitudes will be mostly resonant, i.e., contain on-shell, but different, intermediate
particles. At tree level, a resonant signal amplitude does not generally interfere with a continuum amplitude, because the former is imaginary and the latter is real. However, if both the signal and the background contain an on-shell particle, interference may occur, as long as the final state is identical.^4 In this case of on-shell interference, NP and SM amplitudes will still interfere, yet the interference will only be large in a restricted phase-space region. This potential caveat is different from the ones in the aforementioned examples: exclusive decays of the Higgs boson are suppressed by the hadronisation probability to the relevant final state, e.g., J/ψ, and the interference in diboson tails is suppressed with the decreasing SM amplitude. Our proposal is to search for the three-body decay t → qb b in the phase-space region in which there is potentially large NP-SM interference.
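The naive luminosity-scaling arguments above can be made concrete with a short calculation. This is a minimal sketch, assuming purely statistical scaling of the limits; the function name is ours:

```python
def limit_scaling(lumi_new_fb, lumi_old_fb, power):
    """Naive improvement factor of an upper limit when the
    sensitivity scales as (integrated luminosity)^(-power)."""
    return (lumi_new_fb / lumi_old_fb) ** power

# Branching-ratio limit without interference: sensitivity ~ 1/sqrt(L)
br_factor = limit_scaling(3000.0, 139.0, 0.5)        # ≈ 4.6

# Coupling g without interference (rate ∝ g^2): ~ 1/L^(1/4)
g_factor_quadratic = limit_scaling(3000.0, 139.0, 0.25)  # ≈ 2.2

# Coupling g when a linear interference term dominates: ~ 1/sqrt(L)
g_factor_linear = limit_scaling(3000.0, 139.0, 0.5)      # ≈ 4.6
```

The comparison of the last two numbers is the core of the argument: an observable linear in g improves roughly twice as fast with luminosity as one quadratic in g.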
The decay t → qb b contains two interfering contributions: the NP contribution t → qZ → qb b and the SM one t → bW+ → qb b, as illustrated in figure 1. Consequently, the partial width contains a part that is proportional to g. For sufficiently small g the interference term dominates over the NP^2 term (∝ g^2), in which case the sensitivity to g is expected to scale like 1/√L_int, i.e., it improves faster with increasing luminosity than the traditional approach without interference. The interference argument also holds for probing top-quark FCNCs with the Higgs boson (tHq) or with photons (tqγ) and gluons (tqg). For the Higgs, the interference is suppressed by the small masses of the final-state quarks (m_b and m_q) due to the different chirality structure of the SM (vector) and NP (scalar) couplings. For the photon and gluon FCNCs the SM amplitudes peak at small dijet invariant masses with potentially large QCD backgrounds, which would require a dedicated study. We will thus focus in this work on top-quark FCNCs with the Z boson. We stress that the interference signal is sensitive not only to the magnitude of the tZq coupling but also to its phase. The interference approach is hence inherently complementary to traditional FCNC searches and of particular interest in case signs of an anomalous tZq coupling are observed. We will also focus on the tZc coupling, because the interference is larger than for tZu due to the larger CKM matrix element. In section 2, we establish the theory framework and discuss how to leverage interference based on parton-level expressions for the interference-based rate and its kinematic properties. In section 3, we introduce the Monte Carlo (MC) samples that we use for the sensitivity estimate and discuss the event selection that is tailored towards the FCNC signal. In section 4.1, we briefly introduce the setup of the statistical analysis and then describe in section 4.2 the optimisation of the parametrised deep neural network
(DNN) that we use for the analysis of the simulated data. The results are given in section 4.3 for the HL-LHC and in section 4.4 for the FCC-hh. We present our conclusions in section 5.
2 t → cZ from on-shell interference in t → cb b

The focus of this section is to study the three-body top-quark decay t → cb b in the presence of an anomalous NP tZc coupling, with emphasis on how to take advantage of NP-SM interference to probe the NP coupling. The decay rate is affected by interference between the NP and SM amplitudes, illustrated in the left and right diagrams of figure 1, respectively. The results of this section apply equally well to the t → ub b decay when an anomalous tZu coupling is present. However, this channel is less promising for competitive constraints from an interference-based analysis, since the SM amplitude is highly CKM suppressed. We thus concentrate on the t → cb b case.
Given the smallness of the bottom- and charm-quark masses with respect to the top-quark mass, the NP-SM interference is large when the chirality of the NP coupling is the same as that of the SM W-boson contribution, i.e., for left-handed vector couplings t̄_L γ^µ c_L Z_µ. In contrast, the NP-SM interference is suppressed by the small b- and c-quark masses if the NP originates from right-handed vector or tensor operators. Therefore, we only consider here the most promising case of anomalous left-handed couplings. The Standard Model Effective Field Theory (SMEFT) parametrises these couplings in terms of the two dimension-six operators O^{(1)}_{ϕq;pr} = (ϕ† i↔D_µ ϕ)(q̄_p γ^µ q_r) and O^{(3)}_{ϕq;pr} = (ϕ† i↔D^I_µ ϕ)(q̄_p τ^I γ^µ q_r). Here, ϕ is the Higgs doublet, q_p are left-handed quark doublets, and p, r are flavour indices in the conventions of Ref. [35].
In the broken phase, rotating to the quark mass eigenstates, these SMEFT operators can lead to anomalous tree-level tZc couplings of the left-handed quarks, which are the subject of this work. We parametrise them with a phenomenological Lagrangian with NP parameter g > 0 and NP phase 0 ≤ φ_NP < 2π.^5 In the up-quark mass basis, the coupling in Eq. (2) is related to the SMEFT Wilson coefficients via a prefactor containing e, s_w, c_w, and v^2 multiplying the difference C^{(1)}_{ϕq;32} − C^{(3)}_{ϕq;32}, where e is the electromagnetic coupling, s_w (c_w) the sine (cosine) of the weak mixing angle, and v ≈ 246 GeV the electroweak vacuum expectation value.
Figure 2: In (a), the Dalitz plot for the three-body decay t → cb b. "Pure SM" events predominantly populate the vertical blue region, whereas "pure NP" events populate the horizontal green region. The red region marks the doubly-on-shell region in which the NP-SM interference is largest. In (b) and (c), we show the rate originating from NP-SM interference proportional to g cos φ and g sin φ, respectively. The figure ranges correspond to the doubly-on-shell region (red region in (a)), and the dotted rectangle centred at the doubly-on-shell point has width Γ_W and height Γ_Z. Brown regions correspond to negative and green regions to positive contributions to the branching ratio.

The squared amplitude for the t → cb b decay contains three terms: the SM^2 term, the NP^2 term, and their interference, i.e.,

|A|^2 = |A_SM|^2 + |A_NP|^2 + 2 Re(A*_SM A_NP),   (3)
where the first two terms scale as g^0 and g^2, respectively, while the interference term depends linearly on the NP coupling g and also on the relative, CP-violating phase between the NP and SM contributions. As indicated by Eq. (3) and further discussed in the following, the fully differential rate of t → cb b is sensitive to the interference term and thus potentially sensitive both to a term that is CP-even in the kinematic variables and proportional to cos φ and to a term that is CP-odd and proportional to sin φ. The cases φ = {0, π} lead to a differential rate of t → cb b that is CP conserving: the SM and NP sources of CP violation are then aligned and the differential rate is insensitive to CP violation.
The coupling scaling of the amplitudes does not capture the dependence on the kinematic variables describing the three-body decay. This dependence is essential for designing a search that leverages the interference in an optimal manner. The t → cb b kinematics are fully specified by the two invariant masses m_cb and m_bb. The different topologies of the NP and SM amplitudes (compare the two diagrams in figure 1) lead to final states with distinct kinematic configurations: "SM events" originate mostly from on-shell W's, i.e., m_cb ∼ M_W, whereas "NP events" originate from on-shell Z's, i.e., m_bb ∼ M_Z. We illustrate this in figure 2a, which shows the standard Dalitz plot for the three-body decay in the top-quark rest frame in terms of m_cb and m_bb. The gray area marks the kinematically allowed phase space. The SM^2 and NP^2 parts of the squared amplitude mainly populate the blue (vertical band) and green (horizontal band) regions, respectively.
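For concreteness, the Dalitz variables used here are the standard pairwise invariant masses. A sketch of the definitions and the closure relation they obey, with p_t = p_c + p_b + p_b̄:

```latex
m_{c\bar b}^2 = (p_c + p_{\bar b})^2\,,\qquad
m_{b\bar b}^2 = (p_b + p_{\bar b})^2\,,\qquad
m_{cb}^2 = (p_c + p_b)^2\,,
\qquad\text{with}\qquad
m_{c\bar b}^2 + m_{b\bar b}^2 + m_{cb}^2 = m_t^2 + m_c^2 + m_b^2 + m_{\bar b}^2\,.
```

The closure relation shows that only two of the three invariants are independent, which is why the pair (m_cb, m_bb) fully specifies the decay kinematics.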
The W- and Z-boson widths (Γ_W, Γ_Z) control the size of deviations from the on-shell case, i.e., the widths of the vertical and horizontal bands in figure 2a. This is best seen by employing the Breit-Wigner approximation for the massive vector-boson propagators, which enhances the SM amplitude when m_cb ∼ M_W and the NP one when m_bb ∼ M_Z. By integrating over the full phase space and taking the narrow-width approximation Γ_W/M_W, Γ_Z/M_Z ≪ 1, we recover the usual relations for the fully inclusive branching ratios originating from the SM^2 and NP^2 terms in Eq. (3). We collect the expressions for the two-body branching fractions in appendix A. However, as we shall demonstrate next, the interference is large in the small phase-space region in which both the W and Z bosons are on-shell (red region in figure 2a). Explicit computation shows that the NP^2 and SM^2 rates in this doubly-on-shell region are parametrically suppressed by ratios of the widths and masses of the Z/W bosons with respect to their inclusive values in Eq. (6). The net effect is that, in total, B^{doubly on-shell}_{NP/SM} are neither enhanced by M_{Z/W}/Γ_{Z/W} nor suppressed by Γ_{Z/W}/M_{Z/W}. The relative suppression, however, is welcome, as both of these contributions constitute a background for the interference-based analysis we are proposing.
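The Breit-Wigner structure referred to above can be written schematically as follows (a sketch; overall factors and numerators are omitted):

```latex
|A_{\mathrm{SM}}|^2 \;\propto\; \frac{1}{\bigl(m_{c\bar b}^2 - M_W^2\bigr)^2 + M_W^2\,\Gamma_W^2}\,,
\qquad
|A_{\mathrm{NP}}|^2 \;\propto\; \frac{g^2}{\bigl(m_{b\bar b}^2 - M_Z^2\bigr)^2 + M_Z^2\,\Gamma_Z^2}\,.
```

The interference term carries the product of both denominators, which is why it peaks only where both invariant masses sit on their respective resonances.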
In contrast to "pure SM" and "pure NP" events, "interference" events predominantly populate the doubly-on-shell phase-space region, since 2 Re(A*_SM A_NP) is proportional to the product of the W- and Z-boson Breit-Wigner propagators. Summing over final-state polarisations and averaging over the top-quark polarisation, we find the double-differential branching ratio originating from the interference term in Eq. (3), given in Eq. (9). The last line of Eq. (9) defines a shorthand notation for the terms proportional to g cos φ and g sin φ. In figures 2b and 2c we show d^2 B^cos_Int and d^2 B^sin_Int, respectively, in terms of the two Dalitz variables. In brown are the regions with a negative rate and in green the ones with a positive rate. The intersection of the dotted vertical and horizontal lines corresponds to the doubly-on-shell point, and we have overlaid a rectangle with width and height equal to Γ_W and Γ_Z, respectively. Eq. (9) and its illustration in figures 2b and 2c contain the most relevant parametric dependences that underpin the idea of leveraging interference to probe anomalous tZc couplings.
i) The denominator in the first line of Eq. (9) stems from the product of the two Breit-Wigner propagators for the W and Z bosons, see Eq. (5). They enhance the rate from interference in the doubly-on-shell region, which is regulated by both Γ_W and Γ_Z. The enhancement of the doubly-on-shell region with respect to the rest of phase space is best seen in figures 2b and 2c for d^2 B^cos_Int and d^2 B^sin_Int. The main part of the integrated rate comes from the phase-space region close to the doubly-on-shell region.
ii) The rate from interference contains terms proportional to both cos φ and sin φ. Interference is present independently of whether there is CP violation in the decay (sin φ ≠ 0) or not (cos φ = ±1). However, the CP-odd term proportional to sin φ is odd under the interchanges W ↔ Z and m_bb ↔ m_cb in Eq. (9), see also figure 2c for d^2 B^sin_Int. The consequence is that the integrated rate proportional to g sin φ vanishes in the symmetric limit M_W = M_Z. A measurement of the phase φ thus requires separating events within the doubly-on-shell region, which is experimentally extremely challenging given the jet-energy resolution. In contrast, the integrated rate proportional to g cos φ is even under the aforementioned interchanges and does not vanish after integration, see figure 2b for d^2 B^cos_Int. A dedicated search in the doubly-on-shell region is thus potentially sensitive to g cos φ.
In section 3, we will use Monte Carlo (MC) techniques, including a simplified detector simulation, to simulate events populating the doubly-on-shell region based on the full matrix elements, which lead to Eq. (9) and the corresponding expressions for the NP^2 and SM^2 terms. To obtain a first rough estimate of the rate from interference and to illustrate the parametric dependences, we present here an approximate phase-space integration of the rate in Eq. (9). Most of the rate originates from events in the doubly-on-shell region, see i) above. We thus keep the m_bb and m_cb dependence in the Breit-Wigner denominators but set m_bb = M_Z, m_cb = M_W in the remaining squared amplitude. We then perform the approximate phase-space integration by integrating over the Breit-Wigner factors via ∫ dm^2 / [(m^2 − M^2)^2 + M^2 Γ^2] = π/(M Γ) to obtain a rough estimate of the integrated, interference-based rate, given in Eq. (10). We stress that this is only a rough approximation. In fact, it overestimates the rate by a factor of two with respect to properly integrating Eq. (9) over the physical kinematic region and including the full m_bb and m_cb dependence.
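The narrow-width integral used in this approximation can be checked numerically. The W-boson values below are illustrative; the integral over a finite positive range reproduces π/(MΓ) up to small tail corrections:

```python
import numpy as np

# Narrow-resonance integral underlying the approximation:
#   int ds / [(s - M^2)^2 + M^2 Gamma^2] = pi / (M Gamma)
# (exact over the full real line; for Gamma << M the physical
#  region s > 0 gives nearly the same answer).
M_W, G_W = 80.4, 2.085  # GeV, illustrative values

s = np.linspace(0.0, 10.0 * M_W**2, 2_000_001)
bw = 1.0 / ((s - M_W**2) ** 2 + (M_W * G_W) ** 2)
numeric = np.sum(bw) * (s[1] - s[0])  # simple Riemann sum
exact = np.pi / (M_W * G_W)
ratio = numeric / exact               # close to 1 for Gamma << M
```

The residual deviation of `ratio` from unity (below the percent level here) comes from the truncated tails of the Breit-Wigner, which fall off as 1/s^2.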
As expected from the discussion in ii) above, Eq. (10) does not contain g sin φ terms. The resulting rate is positive (constructive interference) when cos φ < 0 and negative (destructive interference) when cos φ > 0, see the colormap of d^2 B^cos_Int in figure 2b. For this reason, in the following sections we will concentrate on the case of constructive interference by choosing φ = π. While it may also be possible to search for destructive interference, i.e., a deficit of events in the doubly-resonant phase space, as employed for example in searches for heavy scalars [36, 37] that decay to t t, we will not pursue this direction here. Eq. (10) also illustrates that B_Int is not suppressed by factors of Γ_{W/Z}/M_{W/Z}. As discussed below Eq. (8), the same holds for the NP^2 and SM^2 rates in the doubly-on-shell region, B^{doubly on-shell}_{NP/SM}. Therefore, the interference-based rate can compete with the NP^2 rate for sufficiently small g if the analysis targets the doubly-on-shell region. In what follows we investigate the experimental viability of such a dedicated search.

3 Simulated samples and event selection
We generated Monte Carlo (MC) samples with MadGraph5_aMC@NLO 3.2.0 (MG5) [38] using a custom UFO [39] model, which includes the contact tZc coupling as parametrised in Eq. (2), setting φ = π (see the discussion around Eq. (11)), in addition to the full SM Lagrangian with a non-diagonal CKM matrix. All matrix elements are calculated at leading order in perturbative QCD. In the following, we simulate proton-proton collisions at a centre-of-mass energy of 14 TeV. The structure of the proton is parametrised with the NNPDF2.3LO set of parton distribution functions [40]. Factorisation and renormalisation scales are set dynamically event-by-event to the transverse mass of the irreducible 2 → 2 system resulting from a k_T clustering of the final-state particles [41]. We simulate the FCNC contribution (∝ g^2), also referred to as NP^2 in section 2, and the interference contribution (∝ g) to the signal process t t → cb b µ − ν µ b separately, whereas the SM contribution to this process is treated as an irreducible background. We only simulate the muon channel for simplicity. The reducible background processes always include top-quark pair production with subsequent decay in the lepton+jets channel with first- or second-generation quarks q and q′. Besides the six-particle final state (bqq µ − ν µ b), we also simulate resonant production of additional bottom quarks from t tZ(→ b b) and non-resonant contributions from t tb b and t tcc. We do not simulate several other small background processes, such as W− + jets production, diboson production with additional jets, or t tH production, because their contribution is expected to be negligible either due to their low cross section or their very different kinematic properties.
We only generate muons and final-state partons with transverse momenta larger than 20 GeV and require final-state partons to have a minimum angular distance^6 of ∆R = 0.4 to each other, motivated by the minimum angular distance obtained with jet clustering algorithms. We require the same angular distance between final-state partons and the muon in order to mimic a muon isolation criterion. For events in the six-particle final state, i.e., signal and background contributions to cb b µ − ν µ b as well as the reducible background bqq µ − ν µ b, we require muons and final-state partons to be in the central region of the detector (|η| < 2.5).
For simplicity, we do not use a parton shower in our studies. Instead, we smear the parton-level objects by the detector resolution in order to approximate detector-level jets, muons, and missing transverse momentum. The jet resolution is parametrised as σ(p_T)/p_T = −0.334 · exp(−0.067 · p_T) + 5.788/p_T + 0.039, where the transverse momentum, p_T, is in units of GeV. We obtain this parametrisation from a fit to values from the ATLAS experiment [42]. We recalculate the energy of each jet based on the smeared p_T with the jet direction unchanged.

Table 1: The leading-order cross section σ_MG from MG5, the k-factors, the probability to have only four jets at the LHC for the processes with a six-particle final state, ε_4j, the fraction of simulated events passing the event selection, ε_pass, the b-tag efficiency, ε_btag, and the expected number of events N_exp for an integrated luminosity of 3000 fb^−1 for each process. t tb c denotes the irreducible SM-background contribution to the bqq µ − ν µ b final state. The values for the interference and the FCNC contributions are given for g = 0.01 and cos φ = −1.

We smear the x- and y-components
of the missing transverse-momentum vector independently by adding a random number drawn from a Gaussian distribution with mean zero and standard deviation of 24 GeV [43]. We then calculate the scalar missing transverse momentum and the corresponding azimuthal angle. We take the muon transverse-momentum resolution to be 2% [44, 45] with no kinematic dependence. We select events with criteria that are typical for top-quark analyses by the CMS and ATLAS collaborations. We require the muon to be in the central region of the detector (|η| < 2.5) and to have a transverse momentum larger than 25 GeV to mimic typical single-muon trigger thresholds [46, 47]. We do not take trigger, identification, or isolation efficiencies into account. We only accept events with exactly four central jets (|η| < 2.5) to reduce the contamination from the reducible background processes with higher jet multiplicity. Each jet has to have a transverse momentum larger than 25 GeV, and we require the missing transverse momentum to be at least 30 GeV.
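The smearing procedure described above can be sketched as follows. The functional forms and numbers are those quoted in the text; the random-number handling and function names are our own choices:

```python
import numpy as np

rng = np.random.default_rng(7)

def jet_pt_resolution(pt):
    """Relative jet-pT resolution sigma(pT)/pT from the fit quoted
    in the text (pT in GeV)."""
    return -0.334 * np.exp(-0.067 * pt) + 5.788 / pt + 0.039

def smear_jet_pt(pt):
    """Smear a jet pT with a Gaussian of the above relative width;
    the jet direction is left unchanged."""
    return pt * (1.0 + jet_pt_resolution(pt) * rng.standard_normal())

def smear_met(met_x, met_y, sigma=24.0):
    """Smear the x- and y-components of the missing transverse
    momentum independently (sigma in GeV)."""
    return (met_x + sigma * rng.standard_normal(),
            met_y + sigma * rng.standard_normal())

def smear_muon_pt(pt, rel_sigma=0.02):
    """2% muon pT resolution with no kinematic dependence."""
    return pt * (1.0 + rel_sigma * rng.standard_normal())
```

Note that the resolution function improves (decreases) with rising jet p_T, so the smearing mostly degrades the softer jets in the event.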
Given the signal final state, cb b µ − ν µ b, we demand the four jets in the event to fulfill the following b-tagging criteria. We require three jets to fulfill a b-tagging criterion with a b-tagging efficiency of 70% and corresponding mis-identification efficiencies of 4% and 0.15% for c-jets and light jets, respectively [48]. The additional fourth jet is often a c-jet and needs to pass a looser b-tagging criterion with a b-tagging efficiency of 91% and a correspondingly larger efficiency for c-jets [48].
The mis-identification efficiency for light jets of this looser b-tagging criterion is 5%. We choose the b-tagging selection from various combinations of b-tagging criteria with different b-tagging efficiencies and corresponding mis-tagging efficiencies. We choose the combination with the highest value of S/√(S + B), where S and B are the total numbers of weighted events for the signal and background contributions, respectively, as calculated by sampling jets according to the b-tagging efficiencies for the different jet flavours (S contains both the FCNC and the interference contribution).
Instead of removing events that do not pass the b-tagging criteria, we weight events by the total b-tagging probability to avoid large uncertainties due to the limited size of the MC datasets. We weight events in samples for the six-particle final states, where we required all four partons to be central already at generator level, by a factor of ε_4j = 0.5, as roughly half of the events in top-quark pair production at the LHC have more than four jets due to additional radiation [49]. We use k-factors to scale the MG5 leading-order cross sections of the MC samples to higher orders in perturbation theory. For the six-particle final states associated with top-quark pair production, we use a value of 986 pb as calculated at next-to-next-to-leading order in QCD including next-to-next-to-leading-logarithmic soft-gluon resummation [50]. For t tb b and t tcc, we use cross sections of 3.39 pb and 8.9 pb, respectively, as calculated with MG5 at next-to-leading order [51]. For t tZ production, we use a cross section of 1.015 pb, which includes next-to-leading-order QCD and electroweak corrections [52]. Table 1 summarises the efficiencies of the event selection, the MG5 leading-order cross sections, the k-factors, the b-tagging efficiencies, and the expected numbers of events for an integrated luminosity of 3000 fb^−1.
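A sketch of such a per-event b-tagging weight. The loose-working-point charm efficiency is not quoted in the text, so the value below is purely illustrative, and the simple sum over jet-to-working-point assignments is our own simplification:

```python
# Per-jet tag probabilities for the two working points quoted in the
# text; the loose-WP charm efficiency (0.25) is ILLUSTRATIVE ONLY.
TIGHT = {"b": 0.70, "c": 0.04, "light": 0.0015}
LOOSE = {"b": 0.91, "c": 0.25, "light": 0.05}

def btag_event_weight(flavours):
    """Probability that, in a 4-jet event with the given parton
    flavours, three jets pass the tight b-tag and the remaining jet
    passes the loose one, summed over the four possible assignments."""
    assert len(flavours) == 4
    total = 0.0
    for loose_idx in range(4):
        p = LOOSE[flavours[loose_idx]]
        for i, f in enumerate(flavours):
            if i != loose_idx:
                p *= TIGHT[f]
        total += p
    return total

# Event with three b-jets and one c-jet, as for the signal final state
w_signal = btag_event_weight(["b", "b", "b", "c"])
```

Using such weights instead of a hard accept/reject keeps every generated event in the analysis and thereby reduces the statistical fluctuations from the finite MC sample size.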
To show the detector-level distribution of the expected number of events for 3000 fb^−1, we define the variables m_W,reco and m_Z,reco in analogy to the parton-level Dalitz variables m_cb and m_bb (cf. section 2). For each event, the three jets with an invariant mass closest to the top-quark mass form the hadronically decaying top-quark candidate. From these three jets, we assume the jet with the lowest sampled b-tag score to be the c-jet. In case of a tie, we choose the jet with the higher p_T. The invariant mass of the two remaining jets is m_Z,reco. We then calculate the invariant mass of the c-tagged jet combined with each of the remaining two jets of the hadronic top-quark system, and take the invariant mass closer to M_W as m_W,reco. In figure 3, we show the expected number of events for 3000 fb^−1 in the two-dimensional plane spanned by m_W,reco and m_Z,reco originating from the different contributions: in 3a events from the pure FCNC contribution, in 3b events from constructive interference, in 3c events from destructive interference, and in 3d events from the sum of all background processes. The results in figures 3b and 3c are in qualitative agreement with the parton-level result proportional to g cos φ shown in figure 2b. Compared to it, the distributions are more spread out due to the finite detector resolution. However, the characteristic differences between the pure FCNC, interference, and background contributions are still visible.
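The reconstruction just described can be sketched as follows, with four-vectors as (E, px, py, pz) tuples; the jet inputs and b-tag scores are placeholders:

```python
import itertools
import numpy as np

M_TOP, M_W = 172.5, 80.4  # GeV, illustrative mass values

def inv_mass(*jets):
    """Invariant mass of a set of (E, px, py, pz) four-vectors."""
    p = np.sum(jets, axis=0)
    return np.sqrt(max(p[0] ** 2 - p[1] ** 2 - p[2] ** 2 - p[3] ** 2, 0.0))

def reconstruct_masses(jets, btag_scores):
    """Sketch of the m_W,reco / m_Z,reco definition in the text for a
    four-jet event (jets: four-vectors, btag_scores: sampled scores)."""
    # 1) hadronic top candidate: jet triplet with mass closest to m_t
    triplet = min(itertools.combinations(range(4), 3),
                  key=lambda t: abs(inv_mass(*(jets[i] for i in t)) - M_TOP))
    # 2) c-jet candidate: lowest b-tag score within the triplet
    c_idx = min(triplet, key=lambda i: btag_scores[i])
    rest = [i for i in triplet if i != c_idx]
    # 3) m_Z,reco from the two remaining jets of the triplet
    m_z = inv_mass(*(jets[i] for i in rest))
    # 4) m_W,reco: c-jet paired with whichever remaining jet gives a
    #    mass closer to M_W
    m_w = min((inv_mass(jets[c_idx], jets[i]) for i in rest),
              key=lambda m: abs(m - M_W))
    return m_w, m_z
```

The tie-breaking on equal b-tag scores (choosing the higher-p_T jet, as stated in the text) is omitted here for brevity.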

4 Sensitivity at hadron colliders
Next, we estimate the sensitivity of the interference-based approach to the tZc FCNC coupling in the form of expected upper limits on the coupling constant g and compare it with the traditional approach that focuses on the leptonic decay of the Z boson. The statistical methodology is briefly outlined in section 4.1. To separate the FCNC signal, i.e., the pure FCNC contribution, as well as the interference contribution from the background, we use a classifier based on deep neural networks (DNNs). We parametrise the DNN as a function of the FCNC coupling g for optimal separation over a large range of coupling values. In section 4.2, the architecture and the optimisation of the DNN are explained. The prospects at the HL-LHC are presented in section 4.3, and section 4.4 contains estimates of the sensitivity to g in various future scenarios. The section concludes with a comparison to other approaches to constrain tZc FCNC couplings in section 4.5.

4.1 Outline of the statistical methods
Our metric for the sensitivity to the tZc FCNC coupling is the 95% CL expected upper limit on g, since this allows for a straightforward comparison with existing searches. The method to derive the upper limit is the following: We create pseudo-measurements by sampling from the background-only histogram assuming a Poisson distribution for the counts per bin. Motivated by the Neyman-Pearson lemma [53], we construct a likelihood-ratio test statistic, t, by comparing the bin counts from the pseudo-measurements x with the expectation values from the MC simulation under the s+b hypothesis (b-only hypothesis), λ_s+b (λ_b), for each pseudo-measurement. The nominal expected upper limit on the coupling strength, g_excl, is derived as the median of all pseudo-measurements under the assumption of the absence of a signal with the CL_s method [54].

4.2 Optimisation of the parametrised deep neural networks
Resolution effects, in particular the jet-energy resolution, and wrong assignments of jets to the decay branches complicate the reconstruction of invariant masses at detector level and motivate the use of machine-learning techniques to optimise the separation of signal and background in a high-dimensional space. We use the following 31 variables for the training of the DNN: for the b-tagged jets, their transverse momenta, pseudorapidities, azimuthal angles, energies, and the highest-efficiency b-tagging working point that the jet passes; for the single muon, its transverse momentum, pseudorapidity, and azimuthal angle; for the missing transverse momentum, its magnitude and azimuthal angle. The values of all azimuthal angles φ are replaced by the combination of sin φ and cos φ due to the periodicity of the azimuthal angle. The natural logarithm is applied to all transverse-momentum and energy spectra and to the missing-transverse-momentum spectrum, as these variables have large positive tails. The dataset is split with fractions of 60% : 20% : 20% into training, validation, and test sets. As a last step, all variables are studentised via y′_i = (y_i − µ)/σ, where µ refers to the arithmetic mean of the respective variable and σ is the estimated standard deviation. Besides these 31 observables, we also use the coupling constant g as an input to the DNN, which leads to a parametrised DNN [55]. The idea is to present different values of g to the DNN during the training so that the DNN learns the relative importance of the different signal contributions as a function of g. For example, for g ∼ O(0.1) the DNN should not focus on the interference contribution at all and instead concentrate on separating the FCNC contribution from the backgrounds. This is because the weight of the FCNC contribution exceeds that of the interference contribution by orders of magnitude in that regime. Conversely, for g ∼ O(0.001) the DNN should focus more and more on the interference contribution
to leverage the slower decrease of the number of expected events for the interference contribution compared to the FCNC contribution. To give the DNN the possibility to learn this dependence, we further split the training and validation sets into five stratified subsets. Each of these subsets corresponds to a specific value of g ∈ {0.001, 0.005, 0.01, 0.05, 0.1}. These values are chosen to cover the range around the current best exclusion limit of about 0.0126 [23]. For the training, the weights of the signal events are adjusted so that, for a given value of g, the sum of weights in each subset corresponds to the sum of weights of the background contribution.
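The parametrised-DNN idea, with the coupling g entering as one additional input feature alongside the 31 kinematic variables, can be sketched with a plain forward pass. The weights below are random placeholders; in the analysis they would be learned as described in the following paragraphs:

```python
import numpy as np

rng = np.random.default_rng(3)

# Layer widths from the text: 31 kinematic inputs plus the coupling g
# feed a [32, 128, 256, 128, 64, 32, 4] network.
LAYERS = [32, 128, 256, 128, 64, 32, 4]

weights = [rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))
           for n_in, n_out in zip(LAYERS[:-1], LAYERS[1:])]
biases = [np.zeros(n) for n in LAYERS[1:]]

def forward(features31, g):
    """Forward pass of the parametrised DNN: the coupling g is simply
    appended as a 32nd input feature."""
    x = np.concatenate([features31, [g]])
    for w, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(x @ w + b, 0.0)        # ReLU hidden layers
    logits = x @ weights[-1] + biases[-1]
    e = np.exp(logits - logits.max())          # numerically stable softmax
    return e / e.sum()                         # 4 class probabilities

probs = forward(rng.standard_normal(31), g=0.01)
```

Evaluating the same trained network at different values of g then yields the g-dependent discriminant used in the limit setting.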
The constructed DNN has four output nodes: one for pure FCNC events, one for interference events with positive weight, one for interference events with negative weight, and one for background events. For the output layer, we use softmax and for the hidden layers ReLU as the activation function. We use the Adam optimiser [56] and categorical cross-entropy as the loss function. For the determination of the expected exclusion limit, a one-dimensional discriminant d is constructed based on the activations α of the respective output nodes. We assign a negative prefactor to the output node corresponding to the negative interference contribution to increase the difference between the background-only and the signal distribution of d. The corresponding histograms of d consist of 10 equidistant bins. To account for charge-conjugated processes, the bin contents are multiplied by a factor of two.
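The construction of the one-dimensional discriminant from the four softmax activations can be sketched as follows. The node ordering and the exact combination are our assumptions for illustration; only the negative prefactor for the negative-interference node is taken from the text.

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def discriminant(logits):
    """Combine the four output activations into a single discriminant d.

    Assumed node order: [FCNC, interference (+), interference (-), background].
    Per the text, the negative-interference node enters with a negative
    prefactor; the remaining signs are a plausible choice, not the paper's.
    """
    alpha = softmax(logits)
    return alpha[:, 0] + alpha[:, 1] - alpha[:, 2]
```

Events dominated by the FCNC or positive-interference nodes are pushed towards large d, while negative-interference-like events are pushed towards small d.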
The structure of the DNN as well as the learning rate and the batch size during the training are manually optimised based on the expected exclusion limit on the validation set. A learning rate of 0.001 and a batch size of 1000 are chosen. The final structure of the DNN is [32, 128, 256, 128, 64, 32, 4], with the numbers referring to the number of nodes in the respective layers. The evolution of the expected exclusion limit during the training of the DNN is shown in figure 4a.

Prospects for HL-LHC
The integrated luminosity expected at the HL-LHC is L = 3000 fb⁻¹ [57]. Figure 4b contains the CLs values resulting from the evaluation of the DNN on the test set as a function of the coupling constant g. We find an expected upper exclusion limit at 95% CL of g_excl = 8.8 × 10⁻³. The corresponding nominal upper limit on the branching fraction is B_excl(t → Zc) = 6.4 × 10⁻⁵. In the following, we highlight some features of the machine-learning-based analysis to illustrate the employed methods. The distributions of the discriminant for g = g_excl and the rejected hypothesis g = 0.02 are shown in figure 5 for the signal and the background-only hypothesis. Since the DNN is parameterised in g, the background-only distribution depends on g as well. The number of background events expected in the rightmost bins increases for g = 0.02 compared to the bin contents expected for g = g_excl. This implies that the DNN adapts to the simpler signal kinematics as the interference events become less important.
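Extracting g_excl from a CLs scan such as the one in figure 4b amounts to finding the crossing CLs(g) = 0.05. A sketch under the assumption of a monotonically falling scan; the scan values below are invented for illustration.

```python
import numpy as np

def g_excl(g_values, cls_values, alpha=0.05):
    """Find the 95% CL exclusion limit as the crossing CLs(g) = alpha.

    Assumes CLs decreases monotonically with g over the scanned range and
    uses linear interpolation between scan points (our simplification).
    """
    g_values = np.asarray(g_values, dtype=float)
    cls_values = np.asarray(cls_values, dtype=float)
    # invert the relation: interpolate g as a function of CLs
    # (reverse the arrays so that CLs is increasing, as np.interp requires)
    return float(np.interp(alpha, cls_values[::-1], g_values[::-1]))
```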
In figure 6 we show both the bin contents expected for g = g_excl for each background process and the shapes of the signal contributions. Since the irreducible SM background tt bc has the same final state as the signal, its separation from signal events turns out to be rather difficult compared to the reducible backgrounds. In contrast, top-quark pair production with decays to only first- and second-generation quarks, denoted by tt, can be separated better. Nevertheless, this process remains the most important background contribution due to its high cross section.
The DNN separates the signal well from the three processes with an additional heavy-flavour quark pair; this can be attributed to the different kinematical structure due to the additional particles in the event. It should also be noted that the FCNC distribution has a slightly higher mean than the positive-interference distribution. This is due to two factors: Firstly, in the vicinity of g = g_excl the sum of weights of the FCNC contribution is still somewhat larger than the sum of weights of the positive-interference contribution. Thus, the DNN focusses on separating the FCNC events from the background events because of their larger relative impact on the loss function. Secondly, the distribution of the events in the considered phase space inherently offers more separation power from the background for the FCNC events compared to the interference events, as visualised in the m_W,reco vs. m_Z,reco plane shown in figure 3. Additionally, the mean value of the distribution for negative-interference events is only slightly lower than that of the positive-interference contribution, even though the definition of the discriminant in Eq. (13) considers these with opposite relative signs. This validates the observation from figure 3 that the distribution of the negative-interference events in the phase space is quite spread out and thus difficult to separate from the horizontal band of the FCNC contribution in the m_W,reco vs. m_Z,reco plane as well as from the similarly distributed positive-interference contribution.

Prospects for future experiments
We explore the potential of the interference-based approach in various future scenarios. These cover developments in analysis methods, detector technology, and future colliders.
Improved b-tagging. The performance of b-tagging algorithms is crucial for the suppression of background contributions. This is evident when considering that the main background contribution after the event selection (see section 3) is tt̄ → bs̄c µ⁻ν̄µ b̄, which differs from the signal final state only by an s instead of a b quark. Thus, we expect a gain in sensitivity with increasing light-jet rejection factors at the considered b-tagging working points. The b-tagging algorithms that provide this rejection are constantly being improved by the experimental collaborations. An approach based on Graph Neural Networks [58] has already shown increased performance in comparison to traditional approaches. To examine the effects of improved b-tagging algorithms, the analysis is repeated with light-jet rejection rates multiplied by a factor of two. The resulting exclusion limit amounts to a relative improvement of around 9% compared to the baseline result presented in section 4.

Improved jet-energy resolution. The reconstruction of the invariant masses enables the separation of the different contributions to the parton-level t → cbb̄ decay. However, for the full process, tt̄ → bqq̄ µ⁻ν̄µ b̄, the separation power degrades due to the choice of wrong jet combinations in the reconstruction of the invariant masses and the limited jet-energy resolution. Significant improvements in the resolution are expected for experiments at the FCC-hh [59] based on simulation studies for calorimetry [60]. To investigate the impact of this improvement, we scale the jet p_T resolution by a factor of ½ without changing any other parameter. This improves the expected limit by about 16%.
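The effect of halving the jet-p_T resolution can be illustrated with a toy Gaussian smearing model. The 15% baseline resolution below is an assumption for illustration, not the value used in the analysis.

```python
import numpy as np

def smear_pt(pt_true, sigma_rel, rng):
    """Toy Gaussian jet-pT smearing with relative resolution sigma_rel."""
    return pt_true * (1.0 + sigma_rel * rng.normal(size=pt_true.shape))

rng = np.random.default_rng(1)
pt = np.full(100_000, 60.0)            # toy jets at 60 GeV
baseline = smear_pt(pt, 0.15, rng)     # assumed 15% relative resolution
improved = smear_pt(pt, 0.075, rng)    # FCC-hh scenario: resolution halved
```

The width of the reconstructed invariant-mass peaks shrinks accordingly, which sharpens the doubly-on-shell region targeted by the analysis.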
Improved statistical power. The FCC-hh is projected to deliver an integrated luminosity of the order of 20 ab⁻¹ at a centre-of-mass energy of 100 TeV [59]. This presents an excellent opportunity to search for tZc FCNC effects in the realm of small coupling constants with the interference-based approach. We do not generate new MC samples for √s = 100 TeV. Instead, we scale the event weights by a common factor of σ_tt̄(100 TeV)/σ_tt̄(14 TeV) ≈ 35, which is the increase of the tt̄ cross section due to the higher centre-of-mass energy [61], as the signal and the main background processes rely on tt̄ production. However, we neglect any difference in the √s scaling of the cross sections in the presence of additional jets for the background processes. The projected exclusion limit for this scenario is hence a rough estimate. Including these changes and repeating the analysis improves the limit by around a factor of four.
Combination of improvements. As a last scenario, we combine all three improvements discussed above. This scenario hence corresponds to a rough projection of the sensitivity at a future general-purpose detector at the FCC-hh with significantly improved b-tagging algorithms and jet resolution. Retraining and evaluating the DNN on the adjusted dataset, we obtain an expected limit that corresponds to an improvement of about a factor of seven and results in an upper limit on the branching fraction of B_excl^comb(t → Zc) = 1.2 × 10⁻⁶.

Comparison to other approaches
We compare the sensitivity of the interference-based approach to other approaches that target tZc FCNC effects. We briefly introduce three alternative approaches and then discuss the relative sensitivities of the different methods.
Leptonic analysis. Traditionally, tZq FCNCs are searched for using the leptonic Z → ℓ⁺ℓ⁻ decay mode instead of the hadronic decay Z → bb̄. This leads to three-lepton final states for the signal, which are associated with low SM-background contributions. Ref. [23] provides the tightest expected exclusion limit for B(t → Zc) of 11 × 10⁻⁵ to date. It considers both single-top-quark production via an FCNC tZc vertex (qg → tZ) and top-quark pair production with an FCNC decay of one of the top quarks. Using the simple scaling introduced in section 1, we obtain an expected exclusion limit for the HL-LHC. Here, we have taken the limit for a left-handed coupling, just as in our studies, and have assumed that systematic uncertainties will decrease with the increase in integrated luminosity according to the same scaling as the statistical uncertainties. This simple projection shows some tension with the extrapolation in Ref. [30] of the search for tZc FCNC effects with 36.1 fb⁻¹ at √s = 13 TeV [29] by the ATLAS collaboration, which gives an expected upper limit of 4 to 5 × 10⁻⁵ for the HL-LHC, depending on the assumptions on the reduction of systematic uncertainties. This limit is looser than the one obtained from the scaling above. This hints at the importance of the correct estimation of the long-term reduction of systematic uncertainties and highlights that the assumption that systematic uncertainties decrease according to the same scaling as statistical uncertainties may indeed be over-optimistic for the leptonic approach. The extrapolation to the FCC-hh scenario results in an expected limit of 1.6 × 10⁻⁶, where we again have used an integrated luminosity of 20 ab⁻¹ and included a factor of 35 for the increase of the cross sections with √s, based again on the scaling of the tt̄ cross section.
This projection is probably optimistic and we regard it as a rough estimate.In particular, the factor of 35 is unlikely to capture the increase of the cross section of the FCNC production mode accurately.Additionally, this scaling implies a reduction of systematic uncertainties by a factor of more than 15, which does not seem realistic given the challenging experimental conditions at the FCC-hh.
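The naive statistical scaling used in these projections can be made explicit. The function below encodes B_excl ∝ 1/√(σ·L) for a rate quadratic in the coupling, treating any cross-section increase as an effective luminosity gain; all numbers in the usage are generic examples, not the paper's inputs.

```python
import math

def scale_limit(b_old, lumi_old, lumi_new, xsec_factor=1.0):
    """Naive statistical scaling of a branching-ratio limit.

    For a rate quadratic in the coupling, the limit on B scales as
    B ∝ 1 / sqrt(sigma * L). Systematic uncertainties are assumed to
    shrink with the same scaling, which is likely optimistic.
    """
    return b_old * math.sqrt(lumi_old / (lumi_new * xsec_factor))
```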
Ultraboosted approach. In Ref. [62], it was proposed to search for top-FCNC effects in tγ and tZ production in the ultraboosted regime, in which the decay products of the top quark merge into a single jet. In contrast to our approach, this method is only sensitive to the production mode. The ultraboosted approach is projected to yield an exclusion limit of B(t → Zc) < 1.6 × 10⁻³ at the HL-LHC, considering a single source of systematic uncertainty on the number of background events of 20% [62]. The projected limit for the FCC-hh is 3.5 × 10⁻⁵ [62].

Triple-top-quark production. Another way to search for top-quark FCNC effects is in triple-top-quark production: qg → tB* with B* → tt̄ [63-66]. In this process, a single top quark is produced alongside an off-shell boson B* mediating the FCNC, which splits into a tt̄ pair. The studies are performed for the same-sign-lepton topology ℓ⁺νb qq̄b̄ ℓ⁺νb, which benefits from the fact that SM background contributions are small. However, as is also the case for ultraboosted tZ production, the expected limit on B(t → Zc) of 1.35 × 10⁻² at the HL-LHC [65] is relatively weak and has already been surpassed by analyses from the ATLAS [23] and CMS collaborations [24] using the leptonic analysis. The limit achievable at the FCC-hh is estimated to be 4.6 × 10⁻⁴ [66].
Table 2: Expected 95% CL limits for the HL-LHC and FCC-hh scenarios for the presented interference-based approach, the approach with leptonic Z → ℓ⁺ℓ⁻ decay (scaled based on [23]), the ultraboosted approach [62], and triple-top-quark production in the same-sign-lepton channel [65, 66]. The limits for the ultraboosted and the triple-top approaches from the references are scaled by 1/√2 to account for our assumption that roughly 20 ab⁻¹ will be available at the FCC-hh.

Discussion. We summarise the expected limits of the individual approaches in table 2. The leptonic analysis yields the most stringent limit at the HL-LHC, while both the ultraboosted and triple-top approaches perform significantly worse than the interference-based method. This is to be expected since these two approaches use the production mode, which is suppressed by the charm-quark parton distribution function. Our projected limit for the interference-based approach at the HL-LHC of 6.4 × 10⁻⁵ is likely to degrade when including systematic uncertainties. However, we restricted ourselves to only one analysis region with exactly four central b-tagged jets. The inclusion of more signal regions would improve the sensitivity, while data-driven background estimations from dedicated control regions could mitigate the impact of systematic uncertainties. Additionally, the inclusion of the electron channel will improve the sensitivity.
For the FCC-hh, the relative sensitivity of the interference-based approach compared to the leptonic analysis improves with respect to the HL-LHC scenario. This highlights the power of the interference-based approach when moving towards smaller and smaller couplings and the analysis of larger datasets with increasing statistical power. Nevertheless, it should be recognised that the FCC-hh would operate in a regime of very high pileup: the average number of visible interactions per bunch crossing is projected to be µ ∼ O(1000) [59]. This poses notable challenges for flavour tagging and for analyses that focus on jets in general. Because of this, more thorough studies with a dedicated detector simulation would be needed to assess and compare the sensitivity of the two approaches at the FCC-hh. The ultraboosted approach benefits significantly more from the energy increase from 14 TeV to 100 TeV, as its limit is estimated to improve by a factor of approximately 46, while the limit from triple-top-quark production is only projected to improve by a factor of around 29. A clear hierarchy can be deduced: the triple-top-quark approach only yields an expected limit of the order of 10⁻⁴, while the ultraboosted approach is expected to perform better by around one order of magnitude. The interference-based approach and the leptonic analysis are both projected to push this even further, to O(10⁻⁶).
It should also be noted that the Z → ℓ⁺ℓ⁻ approach and the interference approach have different sensitivities to tZc and tZu FCNC couplings and are hence complementary. The Z → ℓ⁺ℓ⁻ analysis that focuses on the production mode is less sensitive to the tZc than to the tZu coupling due to the difference in parton distribution functions. Nevertheless, the sensitivities to the two couplings in the production mode are expected to be more similar at the FCC-hh due to the evolution of the parton distribution functions towards higher energy scales and the tendency towards lower Bjorken x compared to the LHC. In the decay mode, the Z → ℓ⁺ℓ⁻ approach has similar sensitivity to both couplings but relies on charm-quark identification to distinguish them. In contrast, the interference approach is almost exclusively sensitive to the tZc coupling. Thus, in case an excess over the SM prediction is observed in the future, the combination of these approaches will make it possible to disentangle possible effects from these two couplings.

Conclusions
Top-quark FCNCs are so highly suppressed within the SM that any observation at the LHC or planned future hadron colliders would constitute a clear signal of physics beyond the SM. At hadron colliders, the traditionally most promising and most employed channel to search for tZq FCNCs uses a trilepton signature, relying on the leptonic Z → ℓ⁺ℓ⁻ decay. Since the t → Zq decay rate is quadratically proportional to the FCNC coupling, i.e., ∝ g², the resulting sensitivity to probe g scales as 1/L_int^{1/4} with the integrated luminosity L_int (assuming systematic uncertainties are small compared to the statistical ones). Given the large datasets expected at the HL-LHC and planned future hadron colliders, we investigated how to improve upon this luminosity scaling with a novel strategy. We propose to target the hadronic, three-body decay t → qbb̄. In the presence of tZq FCNCs, the decay receives two interfering contributions: one from the FCNC (t → qZ(→ bb̄)) and one from the SM (t → bW⁺(→ qb̄)). Since the two contributions interfere, the three-body rate contains a term linear in the FCNC coupling, i.e., ∝ g. Therefore, for sufficiently small g, the sensitivity to probe g scales as 1/L_int^{1/2} in this channel, i.e., more favourably than in the traditional multi-lepton searches.
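The two luminosity scalings can be checked with a schematic significance argument, S/√B with B ∝ L_int; the normalisation below is arbitrary.

```python
def g_limit(L, mode, c=1.0):
    """Schematic statistical coupling reach vs integrated luminosity L.

    Significance ~ S / sqrt(B) with B proportional to L and
      S ∝ g**2 * L  (quadratic FCNC rate)      ->  g_excl ∝ L**(-1/4)
      S ∝ g    * L  (linear interference term) ->  g_excl ∝ L**(-1/2)
    c is an arbitrary normalisation constant.
    """
    power = -0.25 if mode == "quadratic" else -0.5
    return c * L**power
```

A sixteen-fold increase in luminosity thus halves the coupling reach of a quadratic-rate search but quarters that of an interference-dominated one.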
We studied the leading parametric dependencies controlling the kinematics of t → qbb̄ and identified the requirements on the FCNC couplings that would allow leveraging the interference to compete with and complement traditional searches. The interference depends on the chirality and the phase of the FCNC coupling. It is largest for a left-handed tZq coupling, while for a right-handed one it is suppressed by the small masses of the bottom and q quarks. We have thus focussed on the case of left-handed tZq couplings. The interference is active in a small kinematical region in which both the Z and W bosons are "on-shell". In this small doubly-on-shell region, we showed that the parametric dependence on Γ/M is the same for the SM and the interference contribution. Therefore, targeting this doubly-on-shell region with a dedicated search has the potential to provide sensitivity with an improved luminosity scaling.
Based on these findings, we studied the prospects of the proposed search strategy for the case of left-handed FCNC tZc couplings with constructive interference. We consider the production of tt̄ → cbb̄ µ⁻ν̄µ b̄ from tZc FCNCs as the signal process. We simulated this signal and relevant background processes with MadGraph5_aMC@NLO and emulated the detector response by smearing the parton-level objects with resolutions similar to those at the ATLAS and CMS experiments. We then separated the FCNC signal processes from the backgrounds with a deep neural network that is parameterised in the value of the FCNC coupling g. This setup accounts for the varying FCNC-interference contribution to the total FCNC signal. If no signs of FCNC production were found, the resulting expected 95% confidence-level upper limit with the HL-LHC dataset would be B_excl(t → Zc) = 6.4 × 10⁻⁵. At the FCC-hh, the expected limit improves by up to a factor of ∼ 50, depending on the assumed detector performance.
While this study only considered statistical uncertainties, the effect of systematic uncertainties should be studied in the future. The main backgrounds are tt̄ production with light-quark jets misidentified as b- or c-jets and tt̄ production with a W → cb̄ decay. As in most tt̄ measurements, uncertainties in the modelling of the tt̄ process may impact the sensitivity. The same is true for b-tagging and jet-related uncertainties. Heavy-flavour-associated tt̄ production is only a minor background, and the potentially large associated systematic uncertainties are unlikely to significantly affect the sensitivity. Given the promising signal-background separation of the parameterised deep neural network, the statistical uncertainties on the number of events in the signal-dominated phase space may still compete with the systematic uncertainties in the background contributions.
As the integrated luminosity increases, the advantage of the new strategy over the traditional approach becomes more pronounced. At the HL-LHC, the new strategy may not outperform the traditional search based on Z → ℓ⁺ℓ⁻ decays. However, at the FCC-hh, it has the potential to be competitive with the established approach. Moreover, given their complementarity, the combination of the two strategies will improve over the traditional search alone at both the HL-LHC and the FCC-hh. Additionally, the new interference-based approach demonstrates excellent prospects compared to several other proposals for top-quark FCNC searches.
Our study focussed on the case in which SM and NP sources of CP violation are aligned. It would be intriguing to relax this assumption and design dedicated observables, e.g., asymmetry distributions, that optimally leverage the interference in t → qbb̄ to probe possible CP-violating phases in top-quark FCNC processes. In general, the interference approach will be important for understanding the nature of the anomalous coupling in case top-quark FCNCs are observed, as it also provides information on its Lorentz structure.
Given the results of our study on the proposed interference-based approach, it will be interesting to perform an analysis using current LHC data with a consistent treatment of systematic uncertainties and to estimate the sensitivity at the HL-LHC and future hadron-collider experiments under realistic experimental conditions.

A Two-body branching fractions
Resonant W- and Z-boson production (if top FCNCs are present) dominates the inclusive rate for the three-body decay t → cbb̄ via the diagrams in figure 1. As discussed in section 2, these contributions are well described in the narrow-width approximation in terms of inclusive two-body decay rates. Here, we collect the two-body decay rates in Eq. (6) that enter the decay t → cbb̄ in the SM and when an anomalous tZc coupling is present:
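For orientation, the SM rate entering the narrow-width approximation is the well-known leading-order expression (a textbook formula, neglecting the bottom-quark mass; it is not copied from Eq. (6)):

```latex
\Gamma(t \to b W^+) \;=\; \frac{G_F\, m_t^3}{8\sqrt{2}\,\pi}\,|V_{tb}|^2
\left(1 - \frac{m_W^2}{m_t^2}\right)^{\!2}
\left(1 + \frac{2 m_W^2}{m_t^2}\right)
```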

Figure 1 :
Figure 1: The leading-order diagrams for the three-body decay t → cbb̄. The left diagram shows the decay via the FCNC tZc coupling and the right one the SM decay via a W boson. In the small region of phase space in which the cb̄ pair reconstructs the W-boson mass and the bb̄ pair reconstructs the Z-boson mass, both the W and the Z boson are on-shell and the two amplitudes interfere.

Figure 2 :
Figure 2: In (a), the Dalitz plot for the three-body decay t → cbb̄ in the rest frame of the top quark in terms of the two invariant masses m_bb̄ and m_cb̄. The kinematically allowed region is shown in gray. The dotted vertical and horizontal lines indicate the phase-space points of resonant Z- and W-boson production (same in (b) and (c)). "Pure SM" events predominantly populate the vertical blue region, whereas "pure NP" events populate the horizontal green region. The red region marks the doubly-on-shell region, in which NP-SM interference is largest. In (b) and (c), we show the rate originating from NP-SM interference proportional to g cos φ and g sin φ, respectively. The figure ranges correspond to the doubly-on-shell region (red region in (a)), and the dotted rectangle centred at the doubly-on-shell point has width Γ_W and height Γ_Z. Brown regions correspond to negative and green regions to positive contributions to the branching ratio.
We validated the custom model by simulating the decay t → cbb̄ and comparing the distribution of events in the two-dimensional plane spanned by the Dalitz variables m²_cb̄ and m²_bb̄ (cf. section 2) with the expectation from the explicit calculation (figure 2).

Figure 3 :
Figure 3: Expected number of events for 3000 fb⁻¹ in the m_W,reco vs. m_Z,reco plane (in bins of 2 GeV × 2 GeV) for the representative value g = 0.01 and cos φ = −1: in (a) from the pure FCNC contribution, in (b) from the interference contribution with positive and in (c) with negative event weights, and in (d) from the sum of the background processes.

Figure 4 :
Figure 4: In (a), the expected 95% CL exclusion limit on g calculated on the validation set after each epoch during the training of the DNN. In (b), the CLs value estimated for various values of the coupling constant g and the corresponding ±1σ and ±2σ uncertainty bands.

Figure 5 :
Figure 5: The signal and background distribution of the discriminant for g = 8.8 × 10⁻³ and g = 0.02. As the DNN is parameterised in g, the background distribution depends on g as well. The bottom panel shows the ratio of expected signal+background events divided by the number of expected background events, (S + B)/B.

Figure 6 :
Figure 6: Number of events for each background process in bins of the discriminant d. The expected number of events in each bin is determined from the nominal expected exclusion limit g = 8.8 × 10⁻³ and an integrated luminosity of 3000 fb⁻¹ at the HL-LHC. In addition, the shapes of the signal distributions are illustrated.