Search for flavour-changing neutral-current interactions of a top quark and a gluon in pp collisions at √s = 13 TeV with the ATLAS detector

A search is presented for the production of a single top quark via left-handed flavour-changing neutral-current (FCNC) interactions of a top quark, a gluon and an up or charm quark. Two production processes are considered: u + g → t and c + g → t. The analysis is based on proton–proton collision data taken at a centre-of-mass energy of 13 TeV with the ATLAS detector at the LHC. The data set corresponds to an integrated luminosity of 139 fb−1. Events with exactly one electron or muon, exactly one b-tagged jet and missing transverse momentum are selected, resembling the decay products of a singly produced top quark. Neural networks based on kinematic variables differentiate between events from the two signal processes and events from background processes. The measured data are consistent with the background-only hypothesis, and limits are set on the production cross-sections of the signal processes: σ(u + g → t) × B(t → Wb) × B(W → ℓν) < 3.0 pb and σ(c + g → t) × B(t → Wb) × B(W → ℓν) < 4.7 pb at the 95% confidence level, with B(W → ℓν) = 0.325 being the sum of the branching ratios of all three leptonic decay modes of the W boson. Based on the framework of an effective field theory, the cross-section limits are translated into limits on the strengths of the tug and tcg couplings occurring in the theory: |C_uG^ut|/Λ² < 0.057 TeV−2 and |C_uG^ct|/Λ² < 0.14 TeV−2. These bounds correspond to limits on the branching ratios of FCNC-induced top-quark decays: B(t → u + g) < 0.61 × 10−4 and B(t → c + g) < 3.7 × 10−4.


Introduction
Direct searches for on-shell production of new heavy particles at the Large Hadron Collider (LHC) have not yet been successful. For this reason, indirect searches targeting non-standard couplings among Standard Model (SM) particles attract increasing interest. Among these analyses are searches for flavour-changing neutral-current (FCNC) processes in the top-quark sector. The SM does not contain FCNC processes at tree level, and even though these processes exist at higher orders, they are suppressed due to the Glashow-Iliopoulos-Maiani mechanism [1]. Compared to the b-quark sector, where decays of b-hadrons via FCNCs were first observed in 1995 [2], FCNC decays of top quarks are even more suppressed. Depending on the decay mode, FCNC branching ratios (B) of the top quark are predicted to range from 10−12 to 10−17 [3], and are thus well below the experimentally accessible regime, at present and in the foreseeable future. The observation of FCNC top-quark decays or top-quark production via FCNCs would therefore be an unambiguous signal of physics beyond the SM.
Many extensions of the SM predict significantly higher rates for FCNC processes in the top-quark sector. These extensions include new scalar particles introduced in two-Higgs-doublet models [4,5] or in supersymmetry [6][7][8]. In certain regions of the parameter space of these models, the predicted branching ratios of top quarks decaying via FCNC can be as large as 10 −5 to 10 −3 and thus become detectable at the LHC.
Searches for FCNCs involving a top quark and a gluon were performed at the Tevatron [9,10] and in data from Run 1 of the LHC [11][12][13]. Rather than looking for the top-quark decays t → u + g and t → c + g in top-quark-antiquark pair (tt) production, these analyses searched for the production of a single top quark (t) via the FCNC processes u + g → t (ugt process) and c + g → t (cgt process), exploiting specific kinematic features of single-top-quark production to separate a potential signal from the large W+jets and multijet backgrounds. The analysis presented in this paper extends the Run 1 ATLAS search to the Run 2 data set collected with the ATLAS detector in the years 2015 to 2018, during which the LHC operated at a centre-of-mass energy of 13 TeV. Conceptually, the scope of the analysis is expanded by performing independently optimised searches for the ugt and cgt processes. Differences between these two processes are due to differences in the parton distribution functions (PDFs) for valence and sea quarks. For top antiquarks the charge-conjugate processes are implied. The FCNC interaction is assumed to be left-handed. Another novelty compared to the Run 1 analysis is the interpretation of the results in an effective field theory framework provided by the TopFCNC model [14].
The event selection targets the t → e⁺νb and t → μ⁺νb decay modes of the top quark. However, there is also additional but lower acceptance for events with the decay t → τ⁺νb and the subsequent decay of the τ-lepton into e⁺ν_e ν̄_τ or μ⁺ν_μ ν̄_τ. A leading-order (LO) Feynman diagram illustrating the signature of the targeted scattering events is shown in Fig. 1.
Considering the signature of the signal events, the required reconstructed objects are exactly one charged-lepton candidate (an electron or a muon) with high transverse momentum (p T), exactly one jet which is identified as originating with high probability from a b-quark, and large missing transverse momentum as an indication of a high-p T neutrino.
The main background processes are W+bb production, t-channel single-top-quark (tq) production, tt production and multijet production. Artificial neural networks (NNs) are used to separate signal events from background events. The observed distributions of the NN discriminants are analysed statistically with a profile maximum-likelihood fit in which all systematic uncertainties are treated as nuisance parameters.

Fig. 1 Leading-order Feynman diagram of non-SM production of a single top quark via the FCNC process u(c) + g → t
The structure of the paper is as follows. A brief description of the ATLAS detector is given in Sect. 2, followed by a comprehensive summary of the collision data and the samples of simulated events in Sect. 3. Section 4 describes the reconstruction of detector-level objects and the event selection. The modelling of multijet background events and the estimation of their rate is discussed in Sect. 5. Section 6 provides details about the separation of signal and background events using NNs. Systematic uncertainties are outlined in Sect. 7 and the results are presented in Sect. 8. Conclusions are given in Sect. 9.

The ATLAS detector
The ATLAS detector [15] at the LHC covers nearly the entire solid angle around the collision point. 1 It consists of an inner tracking detector surrounded by a thin superconducting solenoid, electromagnetic and hadronic calorimeters, and a muon spectrometer incorporating three large superconducting toroidal magnets.
The inner-detector system (ID) is immersed in a 2 T axial magnetic field and provides charged-particle tracking in the range |η| < 2.5. The high-granularity silicon pixel detector covers the vertex region and typically provides four measurements per track, the first hit normally being in the insertable B-layer installed before Run 2 [16,17]. It is followed by the silicon microstrip tracker, which usually provides eight measurements per track. These silicon detectors are complemented by the transition radiation tracker (TRT), which enables radially extended track reconstruction up to |η| = 2.0. The TRT also provides electron identification information based on the fraction of hits (typically 30 in total) above a higher energy-deposit threshold corresponding to transition radiation.

1 ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upwards. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the z-axis. The pseudorapidity is defined in terms of the polar angle θ as η = −ln tan(θ/2). Angular distance is measured in units of ΔR ≡ √((Δη)² + (Δφ)²).
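The coordinate conventions of the footnote can be sketched numerically. The following Python fragment (illustrative only, not part of the ATLAS software) computes the pseudorapidity and the angular distance ΔR, taking care to wrap the azimuthal difference into [−π, π]:

```python
import math

def pseudorapidity(theta):
    """eta = -ln tan(theta/2), with theta the polar angle."""
    return -math.log(math.tan(theta / 2.0))

def delta_phi(phi1, phi2):
    """Azimuthal separation wrapped into [-pi, pi]."""
    dphi = (phi1 - phi2) % (2.0 * math.pi)
    return dphi - 2.0 * math.pi if dphi > math.pi else dphi

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance Delta R = sqrt((d eta)^2 + (d phi)^2)."""
    return math.hypot(eta1 - eta2, delta_phi(phi1, phi2))
```

A track perpendicular to the beam (θ = π/2) has η = 0, and two objects at the same η but opposite in φ are separated by ΔR = π.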
The calorimeter system covers the pseudorapidity range |η| < 4.9. Within the region |η| < 3.2, electromagnetic calorimetry is provided by barrel and endcap high-granularity lead/liquid-argon (LAr) calorimeters, with an additional thin LAr presampler covering |η| < 1.8 to correct for energy loss in material upstream of the calorimeters. Hadronic calorimetry is provided by the steel/scintillator-tile calorimeter, segmented into three barrel structures within |η| < 1.7, and two copper/LAr hadronic endcap calorimeters. The solid-angle coverage is completed with forward copper/LAr and tungsten/LAr calorimeter modules optimised for electromagnetic and hadronic measurements, respectively.
The muon spectrometer (MS) comprises separate trigger and high-precision tracking chambers measuring the deflection of muons in a magnetic field generated by superconducting air-core toroids. The field integral of the toroids ranges between 2.0 and 6.0 T m across most of the detector. A set of precision chambers covers the region |η| < 2.7 with three layers of monitored drift tubes, complemented by cathode-strip chambers in the forward region, where the background is highest. The muon trigger system covers the range |η| < 2.4 with resistive-plate chambers in the barrel, and thin-gap chambers in the endcap regions. Interesting events are selected to be recorded by the first-level trigger system implemented in custom hardware, followed by selections made by algorithms implemented in software in the high-level trigger [18]. The first-level trigger accepts events from the 40 MHz bunch crossings at a rate below 100 kHz, which the high-level trigger reduces in order to record events to disk at about 1 kHz.
An extensive software suite [19] is used in the reconstruction and analysis of real and simulated data, in detector operations, and in the trigger and data acquisition systems of the experiment.

Samples of data and simulated events
The analysis uses proton–proton (pp) collision data recorded with the ATLAS detector in the years 2015 to 2018 at a centre-of-mass energy of 13 TeV. After applying data-quality requirements [20], the data set corresponds to an integrated luminosity of 139 fb−1 with a relative uncertainty of 1.7% [21]. The LUCID-2 detector [22] was used for the primary luminosity measurements. At the high instantaneous luminosity reached at the LHC, events were affected by additional inelastic pp collisions in the same and neighbouring bunch crossings (pile-up). The average number of interactions per bunch crossing was 33.7.
Events were selected online during data taking by single-electron or single-muon triggers [23,24]. Multiple triggers were used to increase the selection efficiency. The lowest-threshold triggers utilised isolation requirements to reduce the trigger rate. The isolated-lepton triggers had p T thresholds of 20 GeV for muons and 24 GeV for electrons in 2015 data, and 26 GeV for both lepton types in 2016, 2017 and 2018 data. They were complemented by other triggers with higher p T thresholds but no isolation requirements in order to increase the trigger efficiency.
Large sets of simulated events from signal and background processes were produced with event generator programs based on the Monte Carlo (MC) method to model the recorded and selected data. After event generation, the response of the ATLAS detector was simulated using the Geant4 toolkit [25] with a full detector model [26] or a fast simulation [27,28] which employed a parameterisation of the calorimeter response. To account for pile-up effects, minimum-bias interactions were superimposed on the hard-scattering events and the resulting events were weighted to reproduce the observed pile-up distribution. The minimum-bias events were simulated using Pythia 8.186 [29] with the A3 [30] set of tuned parameters and the NNPDF2.3lo PDF set [31]. Finally, the simulated events were reconstructed using the same software as applied to the collision data. Except for the multijet background, the same event selection requirements were applied and the selected events were passed through the same analysis chain. Small corrections were applied to simulated events such that the lepton trigger and reconstruction efficiencies, jet energy calibration and b-tagging efficiency were in better agreement with the response observed in data. More details of the simulated event samples are provided in the following subsections.

Samples of simulated events from the ugt and cgt FCNC processes
Simulated events from the ugt and cgt processes were produced with the METOP 1.0 event generator [32,33] at next-to-leading order (NLO) in quantum chromodynamics (QCD). The difference between LO and NLO is very relevant for the analysis since a veto on a second jet is applied in the event selection by requiring exactly one reconstructed jet with p T > 30 GeV. Signal samples generated at NLO predict a higher rate of events with two jets than samples generated at LO, leading to a lower acceptance for signal events due to the jet veto. The Lorentz structure of the vertex coupling was taken to be left-handed. It was verified that the shapes of kinematic distributions are independent of the value of the coupling constant used for the event generation. The top quark was assumed to decay as in the SM and the decay was simulated using MadSpin [34,35]. Only leptonic decays of the W boson originating from top-quark decay were considered, including e±, μ± and τ± leptons. The renormalisation scale μ r and the factorisation scale μ f were set to the top-quark mass m t, for which a value of m t = 172.5 GeV was used. The CT10 set of PDFs [36] was used for event generation. Parton showers and the hadronisation were simulated with Pythia 8.235 [37] with the A14 set of tuned parameters [38]. In the METOP + Pythia set-up, hard gluon emissions can arise in both the NLO matrix-element generator and the parton-shower generator. The matching between the two generators was achieved by limiting the phase-space region of the first parton-shower emission in a way that depends on the transverse momentum of the top quark. The matching scale between the matrix-element generator and the parton shower was set to 10 GeV. Samples with alternative generator settings were produced to estimate systematic uncertainties. Samples with μ r = μ f = 2 · m t and μ r = μ f = 0.5 · m t were used to evaluate the impact of the scale choice on the signal model.
The uncertainty in modelling parton showers was evaluated with METOP signal samples in which the parton showers were generated by Herwig 7.0.4 [39,40] instead of Pythia. The METOP + Herwig set-up used the same PDF set as the nominal sample, CT10. In addition, METOP + Pythia samples with a different matching scale of 15 GeV were produced to evaluate the uncertainty due to the choice of this scale. All samples of the ugt and cgt processes were passed through the fast detector simulation.

Simulation of tt and SM single-top-quark production
Samples of simulated events from tt and single-top-quark production were generated using the Powheg Box v2 [41][42][43][44][45][46][47] NLO matrix-element generator, setting m t = 172.5 GeV. For tt and tW production as well as s-channel single-top-quark production (tb production) the NNPDF3.0nlo PDF set [48] implementing the five-flavour scheme was used, while t-channel single-top-quark events (tq production) were produced with the NNPDF3.0nlo_nf4 PDF set, which implements the four-flavour scheme, following a recommendation given in Ref. [47]. Parton showers, hadronisation and the underlying event were modelled using Pythia 8.230 with the A14 set of tuned parameters and the NNPDF2.3lo PDF set. The Powheg Box + Pythia generator set-up applies a matching scheme to the modelling of hard emissions in the two programs. The matrix-element-to-parton-shower matching is steered by the h damp parameter, which controls the p T of the first additional gluon emission beyond the LO Feynman diagram in the parton shower and therefore regulates the high-p T emission against which the tt system recoils. Event generation was run with h damp = 1.5 × m t [49]. The renormalisation and factorisation scales were set dynamically on an event-by-event basis, namely to μ r = μ f = √(m t² + p T²(t)) for tt production and to μ r = μ f = 4 · √(m b² + p T²(b)) for tq production, with p T(t) being the p T of the top quark and p T(b) being the p T of the b-quark originating from the initial-state gluon splitting into a bb pair. The scale choice for tq production followed a recommendation of Ref. [47]. When generating tW events, the diagram-removal scheme [50] was employed to handle the interference with tt production [49].
In the case of tt production, top-quark decays were handled by Powheg Box directly, while in the case of single-top-quark production, top-quark decays were modelled by MadSpin. The decays of bottom and charm hadrons were simulated using the EvtGen 1.6.0 program [51] for all samples involving top-quark production.

Simulation of W +jets and Z +jets production
The production of W bosons and Z bosons in association with jets, including heavy-flavour jets in particular, was simulated with the Sherpa 2.2.1 generator [57]. In this set-up, NLO-accurate matrix elements for up to two partons and LO-accurate matrix elements for up to four partons were calculated with the Comix [58] and OpenLoops1 [59][60][61] libraries. The default Sherpa parton shower [62] based on Catani-Seymour dipole factorisation and the cluster hadronisation model [63] were used. The generation employed the dedicated set of tuned parameters developed by the Sherpa authors and the NNPDF3.0nlo PDF set.
The NLO matrix elements of a given jet multiplicity were matched to the parton shower using a colour-exact variant of the MC@NLO algorithm [64]. Different jet multiplicities were then merged into an inclusive sample using an improved CKKW matching procedure [65,66] which was extended to NLO accuracy using the MEPS@NLO prescription [67]. The merging threshold was set to 20 GeV. The W +jets and Z +jets samples were normalised to NNLO predictions [68] of the total cross-sections, obtained with the FEWZ package [69].

Simulation of diboson and multijet production
Samples of on-shell diboson production (W W , W Z and Z Z) were also simulated with the Sherpa 2.2.1 generator. Motivated by the targeted signature of the signal events, only semileptonic final states were produced, in which one boson decayed leptonically and the other hadronically. The considered matrix elements contain all diagrams with four electroweak vertices and they were calculated at NLO accuracy in QCD for up to one additional parton and at LO accuracy for up to three additional parton emissions. The matching of NLO matrix elements to the parton shower and the merging of different jet multiplicities was done in the same way as for W /Z +jets production. Virtual QCD corrections were provided by the OpenLoops1 library. The NNPDF3.0nlo PDF set was used along with the dedicated set of tuned parameters developed by the Sherpa authors. The diboson event samples were normalised to the total cross-sections provided by Sherpa at NLO in QCD.
Events featuring generic high-p T multijet production may pass the event selection if a jet is misidentified as an electron or muon, or if real electrons or muons coming from hadron decays inside the jets pass the isolation requirements. The former are called fake leptons, the latter non-prompt leptons. In addition, non-prompt electrons occur as a result of photon conversions in the detector material. Multijet events with fake electrons or non-prompt electrons were modelled with a sample of simulated dijet events, while events with non-prompt muons were modelled with collision data. The number of events with fake muons is negligible. The dijet event sample was generated using Pythia 8.186 with LO matrix elements for dijet production and interfaced to a p T-ordered parton shower. The scales μ r and μ f were set to the square root of the geometric mean of the squared transverse masses of the two outgoing particles in the matrix element, μ r = μ f = [(p T,1² + m 1²)(p T,2² + m 2²)]^(1/4). At generator level, a filter was applied which required the existence of one particle-level jet with p T > 17 GeV. The generation used the NNPDF2.3lo PDF set and the A14 set of tuned parameters. The generated sample of dijet events was used to model the event kinematics and to produce template distributions in the electron channel, while the rate of the multijet background was estimated in a data-driven way as described in Sect. 5.
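The scale choice for the dijet sample, the square root of the geometric mean of the two squared transverse masses, can be written compactly. A minimal sketch in Python (the function name and GeV units are illustrative):

```python
def dijet_scale(pt1, m1, pt2, m2):
    """mu_r = mu_f for the dijet sample: the square root of the
    geometric mean of the squared transverse masses of the two
    outgoing partons, i.e. the fourth root of their product."""
    mt1_sq = pt1**2 + m1**2   # squared transverse mass of parton 1
    mt2_sq = pt2**2 + m2**2   # squared transverse mass of parton 2
    return (mt1_sq * mt2_sq) ** 0.25
```

For two massless partons with equal p T the scale reduces, as expected, to that common p T.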

Object reconstruction and event selection
The hard-scattering process was reconstructed by identifying the particles occurring at parton level with objects reconstructed at detector level, such as electron and muon candidates and hadronic jets. The presence of high-p T neutrinos is signalled by large missing transverse momentum.

Object definitions
Events were required to have at least one vertex reconstructed from at least two ID tracks with transverse momenta of p T > 0.5 GeV. The primary vertex of an event was defined as the vertex with the highest sum of p T² over all associated ID tracks [70].
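As an illustration of the primary-vertex definition, a minimal sketch (representing each vertex simply as a list of its associated track p T values is an assumption made for brevity):

```python
def primary_vertex(vertices):
    """Pick the vertex with the highest sum of pT^2 over its tracks.
    Each vertex is a list of track pT values (GeV); the tracks are
    assumed to already satisfy pT > 0.5 GeV and each vertex to have
    at least two tracks, as required by the event selection."""
    return max(vertices, key=lambda tracks: sum(pt**2 for pt in tracks))
```

A hard-scatter vertex with a few tens-of-GeV tracks easily wins over pile-up vertices with many soft tracks.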
Electron candidates were reconstructed from clusters of energy deposited in the electromagnetic calorimeter with a matched track reconstructed in the ID [71]. The pseudorapidity of the clusters, η cluster, was required to be in the range |η cluster| < 2.47; however, clusters were excluded if they were in the transition region 1.37 < |η cluster| < 1.52 between the central and the endcap electromagnetic calorimeters. Electron candidates had to have p T > 10 GeV. A likelihood-based method was used to simultaneously evaluate several properties of electron candidates, including shower shapes in the electromagnetic calorimeter, track quality, and detection of transition radiation produced in the TRT. Two categories of electrons with different quality were defined [71]: the first category implemented Tight identification criteria and featured a high rejection of non-prompt or fake electrons, while the second category, with Loose criteria, had higher efficiency at the price of lower purity in prompt electrons. Electrons from decays of weak gauge bosons pass the Tight criteria with an average efficiency of 80% and the Loose criteria with 93%.
Muon candidates were reconstructed by combining tracks in the MS with tracks in the ID [72]. The tracks had to be in the range |η| < 2.5 and have p T > 10 GeV. Similarly to electrons, two levels of identification criteria were applied, defining Medium and Loose quality categories of muon candidates. Muons originating from W bosons in tt events with p T > 10 GeV pass the Medium quality criteria with an efficiency of 97% and the Loose criteria with 99%.
The tracks matched to electron and muon candidates had to point to the primary vertex, which was ensured by requirements imposed on the transverse impact-parameter significance, |d 0 /σ (d 0 )| < 5.0 for electrons and |d 0 /σ (d 0 )| < 3.0 for muons, and the longitudinal impact parameter, |z 0 sin(θ )| < 0.5 mm for both lepton flavours. Isolated Tight electrons and Medium muons were selected by requiring the amount of energy in nearby energy depositions in the calorimeters and the scalar sum of the transverse momenta of nearby tracks in the ID to be small. Isolation requirements were not imposed on electrons and muons of Loose quality. Scale factors were used to correct the efficiencies in simulation in order to match the efficiencies measured for the electron [71] and muon [24] triggers, and the reconstruction, identification and isolation criteria.
Jets were reconstructed from topological clusters [73,74] in the calorimeters with the anti-k t algorithm [75] using FastJet [76] and a radius parameter of 0.4. Their energy was calibrated [77], and they had to fulfil p T > 20 GeV and |η| < 4.5. Jets with p T < 120 GeV and |η| < 2.5 were required to pass a requirement on the jet-vertex-tagger (JVT) discriminant [78] to suppress jets originating from pile-up collisions. The JVT discriminant was required to be above 0.59, which corresponds to an efficiency of 92% for non-pile-up jets. Similarly, a forward-JVT (fJVT) requirement was used for jets with p T < 60 GeV and 2.5 < |η| < 4.5 [79]. Differences in the efficiencies of the JVT and fJVT requirements between collision data and simulation were accounted for by corresponding scale factors.
Jets containing b-hadrons were identified (b-tagged) with the MV2c10 algorithm [80], which used boosted-decision-tree discriminants with several b-tagging algorithms as inputs [81]. The algorithms exploited the impact parameters of charged-particle tracks, the properties of reconstructed secondary vertices and the topology of b- and c-hadron decays inside the jets. In order to strongly reduce the misidentification rate of c-jets and light-flavour (u, d or s)/gluon jets, a specific working point of the MV2c10 algorithm was defined and calibrated, using the standard calibration technique [80]. With this working point, the b-tagging efficiency for jets that originate from the hadronisation of b-quarks is 30% in simulated tt events. The b-tagging rejection (defined as the inverse of the efficiency) for jets that originate from the hadronisation of c-quarks (u-, d-, s-quarks or gluons) is 900 (30,000). By using the high-purity b-tagging working point with 30% efficiency for b-jets, the analysis performance was considerably improved in comparison to an analysis based on the tightest standard working point, which features a tagging efficiency of 60% for b-jets. The improvement is mainly due to a reduced impact of the W+jets background, including uncertainties in mistagging c-quark jets, light-flavour jets and gluon jets in W+jets production. Differences in b-tagging efficiency between simulated and collision events were corrected for by applying a p T-dependent scale factor to simulated events. The scale factor ranges from 0.96 ± 0.04 in the interval 30 < p T(b) ≤ 40 GeV to 1.01 ± 0.02 for 140 < p T(b) < 175 GeV, which is the highest calibration interval relevant for this analysis. The b-tagging scale factors were obtained by comparing samples of collision data strongly enriched in tt events with samples of simulated events generated by Powheg+Pythia 8.230. The obtained scale factors depend on the parton-shower generator used to produce the tt samples.
When using samples with a different parton-shower generator, for example Sherpa to model W + jets events, or when evaluating systematic uncertainties with a set-up based on Herwig, additional correction factors called MC-to-MC scale factors were applied.
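The per-jet application of a p T-dependent scale factor can be illustrated as a simple binned lookup. Only the two calibration intervals quoted above carry real numbers; the single intermediate bin, its value of 1.00 and the out-of-range behaviour are assumptions made purely for illustration:

```python
import bisect

# Illustrative lookup table. Only the 30-40 GeV and 140-175 GeV bins
# use values quoted in the text; the middle bin and the out-of-range
# default of 1.0 are assumptions.
SF_EDGES = [30.0, 40.0, 140.0, 175.0]   # jet pT bin edges in GeV
SF_VALUES = [0.96, 1.00, 1.01]          # one scale factor per bin

def btag_scale_factor(pt_b):
    """Return the pT-dependent b-tagging scale factor for a b-jet."""
    if pt_b < SF_EDGES[0] or pt_b >= SF_EDGES[-1]:
        return 1.0  # outside the calibrated range (assumption)
    i = bisect.bisect_right(SF_EDGES, pt_b) - 1
    return SF_VALUES[i]
```

In practice such per-jet factors multiply the overall event weight of the simulated sample.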
To avoid double-counting objects satisfying more than one selection criterion, a procedure called overlap removal was applied. Reconstructed objects defined with Loose quality criteria were removed in the following order: electrons sharing an ID track with a muon; jets within ΔR = 0.2 of an electron, thereby avoiding double-counting electron energy deposits as jets; electrons within ΔR = 0.4 of a remaining jet, to reduce the impact of non-prompt electrons; jets within ΔR = 0.2 of a muon if they had two or fewer associated tracks; muons within ΔR = 0.4 of a remaining jet, reducing the rate of non-prompt muons. The Tight and Medium criteria were applied to those objects which survived overlap removal.
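The sequential overlap-removal logic can be sketched as follows; the dictionary-based object representation and its field names are illustrative, not the actual ATLAS data model:

```python
import math

def _delta_r(o1, o2):
    """Angular distance between two objects with 'eta' and 'phi' fields."""
    dphi = abs(o1["phi"] - o2["phi"])
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(o1["eta"] - o2["eta"], dphi)

def overlap_removal(electrons, muons, jets):
    """Apply the five removal steps in the order given in the text.
    'track' is a hypothetical ID-track identifier; 'ntracks' is the
    number of tracks associated with a jet."""
    # 1. Electrons sharing an ID track with a muon.
    electrons = [e for e in electrons
                 if not any(e["track"] == m["track"] for m in muons)]
    # 2. Jets within dR = 0.2 of an electron.
    jets = [j for j in jets if not any(_delta_r(j, e) < 0.2 for e in electrons)]
    # 3. Electrons within dR = 0.4 of a remaining jet.
    electrons = [e for e in electrons if not any(_delta_r(e, j) < 0.4 for j in jets)]
    # 4. Jets within dR = 0.2 of a muon, if they have <= 2 associated tracks.
    jets = [j for j in jets
            if not any(_delta_r(j, m) < 0.2 and j["ntracks"] <= 2 for m in muons)]
    # 5. Muons within dR = 0.4 of a remaining jet.
    muons = [m for m in muons if not any(_delta_r(m, j) < 0.4 for j in jets)]
    return electrons, muons, jets
```

The ordering matters: a jet removed in step 2 can no longer cause the removal of an electron in step 3.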
The missing transverse momentum p T miss was reconstructed as the negative vector sum of the p T of the reconstructed leptons and jets, as well as of ID tracks that pointed to the primary vertex but were not associated with a reconstructed object [82]. The magnitude of p T miss is denoted by E T miss.
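A simplified calculation of E T miss from the transverse projections of the selected objects (ignoring the soft-track term for brevity) might look like:

```python
import math

def missing_et(objects):
    """E_T^miss: magnitude of the negative vector sum of the transverse
    momenta of all hard objects. Each object is a dict with 'pt' (GeV)
    and azimuthal angle 'phi' (an illustrative representation)."""
    px = -sum(o["pt"] * math.cos(o["phi"]) for o in objects)
    py = -sum(o["pt"] * math.sin(o["phi"]) for o in objects)
    return math.hypot(px, py)
```

A perfectly balanced event gives E T miss = 0, while an unbalanced event (e.g. one undetected neutrino recoiling against the visible system) yields a non-zero value.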

Basic event selection
To be selected, events were required to have exactly one electron of Tight quality or exactly one muon of Medium quality, both with p T > 27 GeV. The charged lepton was required to match the object which triggered the event. To reduce contributions from tt events in the dilepton decay channel, any event with an additional lepton satisfying the Loose quality conditions with p T > 10 GeV was rejected (dilepton veto). Multijet events containing fake or non-prompt leptons tend to have, in contrast to events with prompt leptons from W and Z decays, low E T miss and a low W transverse mass, which is defined as

m T(W) = √( 2 p T(ℓ) E T miss [1 − cos Δφ(ℓ, p T miss)] ).   (1)

To reduce the multijet background, E T miss > 30 GeV and m T(W) > 50 GeV were applied as additional selection requirements.
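The W transverse mass defined above can be evaluated directly; a minimal sketch in Python (all momenta in GeV):

```python
import math

def mt_w(pt_lep, phi_lep, met, phi_met):
    """Transverse mass of the lepton + missing-momentum system:
    m_T(W) = sqrt(2 pT(l) ET_miss (1 - cos(dphi)))."""
    dphi = phi_lep - phi_met
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))
```

For a lepton and p T miss that are back-to-back with equal magnitudes of 40 GeV, m T(W) = 80 GeV, close to the W-boson mass; aligned vectors give m T(W) = 0, the topology typical of multijet events with mismeasured leptons.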
At least one jet with p T > 30 GeV was required. To suppress the multijet background even further and to remove poorly reconstructed leptons with low p T, the event selection applied an additional requirement based on the azimuthal angle between the primary lepton (ℓ) and the leading jet (j 1), i.e. the jet with the largest p T. This quantity is denoted by Δφ(j 1, ℓ). The imposed requirement was

p T(ℓ) > 50 GeV · (|Δφ(j 1, ℓ)| − 1) / (π − 1),

which led to a tighter p T requirement on the charged lepton if the leading jet and the lepton had a back-to-back topology, namely if |Δφ(j 1, ℓ)| > 0.687π. For the maximum separation |Δφ(j 1, ℓ)| = π between the two objects, p T(ℓ) > 50 GeV had to be satisfied.
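The Δφ-dependent lepton-p T requirement can be sketched as a sliding threshold. The linear form used below, p T(ℓ) > 50 GeV · (|Δφ(j 1, ℓ)| − 1)/(π − 1), is reconstructed from the two quoted properties (it exceeds the baseline 27 GeV cut exactly for |Δφ| > 0.687π, and reaches 50 GeV at |Δφ| = π) and should be read as illustrative:

```python
import math

def passes_lepton_pt_cut(pt_lep, dphi_j1_lep):
    """Sliding lepton-pT threshold combined with the baseline 27 GeV
    requirement: the cut tightens linearly with |dphi(j1, lep)| and
    reaches 50 GeV for a perfectly back-to-back topology."""
    sliding = 50.0 * (abs(dphi_j1_lep) - 1.0) / (math.pi - 1.0)
    return pt_lep > max(27.0, sliding)
```

A 40 GeV lepton exactly back-to-back with the leading jet fails the cut, while the same lepton at small |Δφ| only needs to satisfy the baseline requirement.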

Definition of signal and validation regions
A signal region (SR) and three validation regions (VRs) were defined by applying further requirements to the sample of events passing the basic selection. Only events in the SR were used at a later stage of the analysis for a profile-likelihood fit to the data in the search for a signal contribution, while the VRs were used to validate the modelling of different background contributions. A summary of the selection requirements used to define the four analysis regions is given in Table 1.
All requirements mentioned before are common to all regions considered. The SR was defined by tightening the jet requirement relative to the basic event selection. Each event had to have exactly one jet with p T > 30 GeV and |η| < 2.5, i.e. events with additional central jets were vetoed. This single jet had to be b-tagged. The selection efficiency for signal events in which the top quark decays into Wb and the resulting W boson decays leptonically was 1.36% for ugt events and 2.30% for cgt events. For the ugt search, the SR was split according to the sign of the charge of the primary lepton, sgn q(ℓ). Two NN discriminants, D 1 and D 2, described in Sect. 6, were formed to separate signal and background events in these three SRs.
The first VR was defined for validating the modelling of the event kinematics of W+jets production (W+jets VR) by the Sherpa 2.2.1 generator. To suppress top-quark backgrounds, a less stringent b-tagging requirement was used: exactly one jet with p T > 30 GeV was required to be b-tagged at a working point with an efficiency of 60%. All other selection requirements were the same as for the SR; however, events in the SR were vetoed. The modified b-tagging requirement leads to a different flavour composition of the jets in the W+jets VR compared to the SR; the components of W+c-jets and W+light-flavour jets are increased relative to W+b-jets. To enrich the region further in W+jets events and reduce the number of signal events, the NN discriminant D 1, specified in Sect. 6, was required to be in the range 0.3 < D 1 < 0.6. The modelling of events with positive lepton charge was separately checked by requiring the NN discriminant D 2 to be in the range 0.3 < D 2 < 0.6, defining the ℓ⁺ W+jets VR. When normalising the FCNC processes to the observed limits from the previous ATLAS results obtained at a centre-of-mass energy of 8 TeV, the FCNC signal contamination is 1.2% in the W+jets VR and 0.9% in the ℓ⁺ W+jets VR.
The second VR was enriched in tt events by selecting events with exactly two b-tagged jets using the 30% btagging working point (tt VR). When normalising the FCNC processes to the observed limits from the previous ATLAS results obtained at a centre-of-mass energy of 8 TeV, the FCNC signal contamination is at a very low level of a few times 10 −4 . The third VR checked the modelling of tq events (tqVR). Events with exactly two jets were required. Exactly one of the jets had to be b-tagged at the 30% efficiency working point, while the second jet was required to be in the forward region with |η| > 2.5, which is a characteristic feature of tq events. Thus, the tqVR was a subset of the SR, since

there was no condition on jets in the forward region when defining the SR. To further enhance the fraction of tq events and to suppress signal events, the NN discriminant D1 was required to be in the range 0.2 < D1 < 0.4. The modelling of events with positive lepton charge was separately checked by requiring the NN discriminant D2 to be in the range 0.2 < D2 < 0.4, defining the ℓ⁺ tqVR. When normalising the FCNC processes to the observed limits from the previous ATLAS results obtained at a centre-of-mass energy of 8 TeV, the FCNC signal contamination is 1.2% in the tqVR (cgt analysis) and 0.8% in the ℓ⁺ tqVR.

Estimation of the multijet background
By requiring electron and muon candidates to be isolated, the object definition and the event selection strongly favour prompt leptons originating from decays of W bosons or Z bosons. However, there is a small probability for non-prompt electrons or muons occurring in hadron decays, either directly or through the decay of a τ-lepton, to be reconstructed as isolated leptons. The main source is b-hadron decays in jets, but c-hadrons and long-lived weakly decaying states such as π± and K mesons also contribute. In addition, prompt electrons are mimicked by fake electrons arising from the misidentification of direct photons, photons from π⁰ decays, or bremsstrahlung and photon conversions. Even though the probabilities of misidentification are relatively low, some multijet events still pass the selection and contribute to the background, since their production cross-section is approximately three orders of magnitude higher than the cross-sections of top-quark production processes. As the mechanisms of misidentification are not well modelled by the detector simulation, the rate of the multijet background was determined in a data-driven way by fitting the E_T^miss distribution for events with an electron (electron channel) and the m_T(W) distribution for events with a muon (muon channel).
In the electron channel, the multijet background was modelled using the jet-electron method [83]. Simulated events from dijet production (see Sect. 3.4 for a description of the sample) were selected if they contained a jet depositing a large fraction (> 80%) of its energy in the electromagnetic calorimeter. This jet was classified as an electron, the jet-electron, and treated in the subsequent steps of the analysis in the same way as a properly identified prompt electron. The jet-electrons had to pass the nominal p_T and |η| requirements, but electron identification requirements were not applied. Since the relative numbers of electrons detected in the barrel (|η| < 1.37) and endcap (|η| > 1.52) sections of the electromagnetic calorimeter were not modelled well enough by the sample of simulated dijet events, the electron channel was divided into two subchannels: a barrel-electron channel and an endcap-electron channel.
In the muon channel, multijet events were modelled with collision events highly enriched in non-prompt muons [83]. Starting from the same sample of collision events as the nominal selection, a subset of events enriched in non-prompt muons was obtained by inverting or modifying some of the muon isolation requirements, such that the resulting sample did not overlap with the nominal sample. The kinematic requirements on muon p T and |η| remained the same as for the nominal selection.
The rate of the multijet background was normalised by performing a binned maximum-likelihood fit to the E_T^miss and m_T(W) distributions observed in the electron and muon channels, respectively. All selection criteria were applied, except for the E_T^miss requirement in the electron channels (barrel and endcap) and the requirement on m_T(W) in the muon channel. The three channels were further split according to the sign of the charge of the primary lepton, sgn(q_ℓ), leading to six channels per analysis region. Separate fits were performed for the SR and the three VRs; in each region, all six channels were fitted simultaneously. Since the multijet background is expected to be independent of lepton charge, its rates in the ℓ⁺ and the ℓ⁻ channels were assumed to be the same. On the other hand, the rates of some of the other background processes, i.e. tq, tb and W + jets production, differ between the ℓ⁺ and the ℓ⁻ channels because of the PDFs. For the purpose of these fits, scattering processes other than multijet production were grouped in the following way: (1) top-quark production, comprising tt production and all three single-top-quark production processes (tq, tb and tW production); (2) W + jets production, including the production of light-quark, gluon, b-quark and c-quark jets in association with a W boson; and (3) Z + jets and diboson production (WW, WZ and ZZ production). The templates of the fit distributions for these three groups of processes were derived from simulated events and the rates were normalised to the theory predictions reported in Sect. 3. As the shapes of the distributions for Z + jets and diboson production are very similar to those of W + jets production, the rates of Z + jets and diboson production were fixed in the fitting process to the values predicted by simulation.
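The structure of such a fit can be sketched as a one-parameter binned template fit: the multijet normalisation is the only free parameter, while the grouped top-quark and W/Z + jets templates are held fixed. All numbers and bin contents below are purely illustrative, not taken from the analysis.

```python
import math

def binned_nll(data, multijet, other, s_mj):
    """Poisson negative log-likelihood (constant terms dropped) for
    expected_i = s_mj * multijet_i + other_i, with `other` held fixed."""
    nll = 0.0
    for n, m, o in zip(data, multijet, other):
        mu = s_mj * m + o
        nll += mu - n * math.log(mu)
    return nll

def fit_multijet_scale(data, multijet, other, lo=0.0, hi=5.0, steps=2000):
    """Crude 1-D scan for the multijet normalisation factor."""
    best_s, best_nll = lo, float("inf")
    for i in range(steps + 1):
        s = lo + (hi - lo) * i / steps
        nll = binned_nll(data, multijet, other, s)
        if nll < best_nll:
            best_s, best_nll = s, nll
    return best_s

# Illustrative E_T^miss-like templates: multijet peaks at low values.
multijet = [40.0, 25.0, 10.0, 4.0, 1.0]    # multijet template
other    = [20.0, 40.0, 50.0, 45.0, 30.0]  # top-quark + W/Z + jets (fixed)
data     = [80, 77, 65, 51, 31]            # pseudo-data
s_hat = fit_multijet_scale(data, multijet, other)
```

In the analysis the minimisation is of course done with a proper fitter and with constrained normalisation factors for the other groups; the scan above only illustrates how the low bins, where the multijet fraction is largest, drive the fitted scale.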
Uncertainties in the normalisation of top-quark production and W + jets production were accounted for by Gaussian constraints on the normalisation factors of these groups of processes. In the W + jetsVR, only the rate of W +jets production was varied, while the top-quark background was fixed. Similarly, in the tt VR and tqVR only the rate of top-quark production was varied, while the rate of W + jets production was fixed. In the SR, both rates were free to vary within uncertainties.
The fits yielded estimates of the rates of the multijet background in the four analysis regions before applying the E_T^miss and m_T(W) requirements; the stacked histograms were normalised to the fit result. The low-E_T^miss and low-m_T(W) regions drove the estimate of the multijet background, since the multijet fraction of the total yield was larger there than at higher values of the two observables. The yield of the multijet background after applying the requirements of E_T^miss > 30 GeV and m_T(W) > 50 GeV was obtained from the histograms of the multijet background normalised to the fit result and was later used as a starting value for the profile-likelihood fit in the final statistical analysis. The normalisation factors obtained for top-quark production and W + jets production were applied to normalise the respective backgrounds when validating the modelling of kinematic distributions prior to the statistical analysis of the NN discriminants, but they were not used in the statistical analysis itself.
All backgrounds other than the multijet background were modelled by simulated events, and the event rate was estimated by scaling the samples of simulated events to the integrated luminosity of the sample of collision data being analysed. The event kinematics of the multijet background is described with the jet-electron model and with non-prompt-muon events, with the rate of the multijet background normalised to the results of the fits to the E_T^miss and m_T(W) distributions. Figure 3 provides a summary of the fractional contributions of the different background processes to the expected event yield in the SR (see also Table 3). The three largest backgrounds are W + jets production, the combined tt–tW–tb background, and tq production.

Neural networks separating signal and background events
Two NNs were employed to enhance the separation of signal events from background events by combining several kinematic (input) variables to form two discriminants named D1 and D2. The kinematics of signal events depends on whether the quark (antiquark) in the initial state is a valence quark or a sea quark (antiquark). Sea quarks (antiquarks) and valence quarks of the proton carry, on average, different fractions x of the proton momentum, and this difference leads to different rapidity distributions for the corresponding produced top quarks (antiquarks) and their decay products. Top quarks produced in the u + g → t process tend to have higher absolute rapidity values than top antiquarks produced in the ū + g → t̄ process and top quarks or top antiquarks produced in the c + g → t and c̄ + g → t̄ processes. The two discriminants D1 and D2 exploit these differences. The first network was trained only with events from the cgt process and was thus optimised for events featuring a sea quark or antiquark in the initial state; the discriminant obtained from this network is defined to be D1. The second NN was trained with events from top-quark production via the ugt process as signal, excluding the charge-conjugate process of top-antiquark production; the corresponding discriminant is called D2. The discriminant D1 is used in a search for the cgt process. The second analysis searches for the ugt process and makes use of both discriminants, D1 and D2. In this case, two SRs were defined based on sgn(q_ℓ): the discriminant D1 was used in the ℓ⁻ channel, targeting top-antiquark production (ū + g → t̄), while the discriminant D2 was computed in the ℓ⁺ channel, aimed at the search for direct top-quark production (u + g → t).
The NNs were implemented using the NeuroBayes package [84,85], which combines a three-layer feed-forward NN with a complex and robust preprocessing of the input variables before they are presented to the NN. The training of the NNs was based on generated signal and background events and used back-propagation to determine the weights of connections among nodes. As a non-linear activation function, NeuroBayes uses the symmetric sigmoid function S(x) = 2/(1 + e^(−x)) − 1, which takes values in the interval (−1, +1). In the region close to zero, the sigmoid function has a linear response. The D1 and D2 discriminants were obtained by linearly scaling the outputs of the corresponding NNs to the interval (0, 1).
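The activation and the rescaling of the network output to a discriminant can be written in a few lines; the form S(x) = 2/(1 + e^(−x)) − 1 is assumed here as the standard symmetric sigmoid, which is equivalent to tanh(x/2) and hence linear with slope 1/2 near zero.

```python
import math

def symmetric_sigmoid(x):
    """Symmetric sigmoid: maps the real line to (-1, +1)."""
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

def to_discriminant(nn_output):
    """Linear rescaling of the (-1, +1) network output to (0, 1)."""
    return 0.5 * (nn_output + 1.0)
```

A network output of 0 thus corresponds to a discriminant value of 0.5, with signal-like events pushed towards 1 and background-like events towards 0.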
Sets of input variables were selected based on studies considering the sensitivity of the analyses as given by the expected upper limits on the production cross-sections (Sect. 8 provides more details about the computation of upper limits), how well the observed distributions of the input variables are modelled by simulation, and the ranking of the input variables provided by the preprocessing step of NeuroBayes. The D1 NN used 12 input variables, the D2 NN nine; six of those variables were common to both NNs. Table 2 provides the list of input variables.
Some of the variables, for example ΔR(W, b) and m(ℓνb), required reconstruction of the leptonically decaying W boson, which in turn required reconstruction of the neutrino momentum. While the x- and y-components of the neutrino momentum, p_x(ν) and p_y(ν), were approximated by the components of p_T^miss, the z-component, p_z(ν), was determined by constraining the mass of the reconstructed W boson to match the measured world average. If the resulting quadratic equation had two real solutions, the one with the smallest |p_z(ν)| was chosen. In the case of complex solutions, which could occur due to the limited E_T^miss resolution, p_x(ν) and p_y(ν) were modified such that the imaginary part vanished while the transverse components of the neutrino momentum were kept as close as possible to p_T^miss. The W boson was formed by adding the four-vectors of the reconstructed neutrino and the charged lepton.
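A minimal sketch of this reconstruction is given below, assuming a massless lepton and a fixed W-mass value. Complex solutions are handled here by simply taking the real part of the solution pair, a simplification of the transverse-momentum rescaling used in the analysis.

```python
import math

M_W = 80.377  # GeV; W-boson mass value assumed for this sketch

def neutrino_pz(lep_px, lep_py, lep_pz, lep_e, met_x, met_y):
    """Solve the W-mass constraint m(lep, nu) = M_W for p_z(nu).

    The constraint yields a quadratic equation in p_z(nu).  With two real
    solutions, the one with smaller |p_z| is returned; for a negative
    discriminant the real part of the complex pair is taken.
    """
    pt_lep2 = lep_px**2 + lep_py**2
    mu = 0.5 * M_W**2 + lep_px * met_x + lep_py * met_y
    a = mu * lep_pz / pt_lep2
    disc = a**2 - (lep_e**2 * (met_x**2 + met_y**2) - mu**2) / pt_lep2
    if disc < 0:
        return a                      # real part of the complex solutions
    root = math.sqrt(disc)
    return a - root if abs(a - root) < abs(a + root) else a + root
```

For a lepton and p_T^miss chosen back-to-back-free along the x-axis, the returned p_z reproduces the imposed W mass exactly when the discriminant is positive.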
NeuroBayes uses Bayesian regularisation techniques for the training process to improve the generalisation performance and to avoid overtraining. In general, the network infrastructure consists of one input node for each input variable plus one bias node, an arbitrary, user-defined number of hidden nodes arranged in a single hidden layer, and one output node which gives a continuous output in the interval (−1, +1). For the two NNs of this analysis, 15 nodes were used in the hidden layer and the ratio of signal to background events in the training was chosen to be 1:1. The different background processes were weighted according to their expected numbers of events. Only tt, W + jets and single-top-quark events were used as background processes in the training. The multijet background was not used, since its modelling has considerable uncertainties and attempting to optimise the separation of this background from signal events would likely make the results of the analysis more sensitive to any mismodelling of the kinematics of multijet production. After the training step, samples of simulated signal and background events as well as the observed events were processed by the NNs. The resulting distributions of D1 and D2, normalised to unit area, are shown in Fig. 4.
The signal distributions peak at high values between 0.8 and 0.9, while the distributions of the background processes peak at low values. Compared to the tt process, which has a low event fraction in the highest bins, the tq and W +jets production processes have higher event fractions in the most signal-like bins.
Prior to the application of the NNs to the observed collision data in the SR, the modelling of the input variables was checked. The corresponding distributions in the VRs were validated as well. The normalisation of the different scattering processes in the grouping reported in Fig. 3 was taken from the fits to the E_T^miss and m_T(W) distributions for the estimation of the multijet background, reported in Sect. 5. As an additional check, the trained NNs were applied in the VRs using input variables corresponding to those in the SR. Three examples of discriminant distributions in the VRs are presented in Fig. 5. In all cases, the model describes the observed discriminant distributions within the estimated uncertainties.

Systematic uncertainties
Several sources of systematic uncertainty affect the expected event yield from signal and background processes as well as the shape of the NN discriminants used in the maximum-likelihood fit. The systematic uncertainties are divided into two major categories. Experimental uncertainties are associated with the reconstruction of the four-momenta of final-state objects: electrons, muons, b-jets, and E_T^miss as an indication of a primary neutrino. The second category of uncertainties is related to the modelling of scattering processes with event generators. In the following, the estimation of experimental and modelling uncertainties is explained in more detail.

Experimental uncertainties
The uncertainty in the integrated luminosity of the combined 2015–2018 data set is 1.7% and is based on a calibration of the luminosity scale using x–y beam-separation scans [21]. The luminosity uncertainty was applied to the signal and background event yields, except for the multijet background, which was estimated in a data-driven way. Scale factors were applied to simulated events to correct for differences between data and detector simulation in the reconstruction, identification, isolation and trigger performance for electrons and muons. These scale factors, as well as the lepton momentum scale and resolution, were assessed using Z → ℓ⁺ℓ⁻ events in simulation and data [71,72]. Their systematic uncertainties were propagated to the expected event yields and discriminant distributions used in the maximum-likelihood fit.
The jet energy scale (JES) was calibrated using a combination of test-beam data, simulation and in situ techniques [77]. Its uncertainty is decomposed into a set of 30 uncorrelated components, of which 29 are non-zero in a given event, depending on the type of simulation used. Sources of uncertainty contributing to the JES uncertainty include pile-up modelling, jet flavour composition, single-particle response and effects of jets not fully contained within the calorimeter. The uncertainty in the jet energy resolution (JER) is represented by eight components accounting for jet-p_T- and η-dependent differences between simulation and data [86]. The uncertainty in the efficiency to pass the JVT requirement for pile-up suppression was also considered [78].
The uncertainties in the b-tagging calibration were determined for b-jets [80] and broken down into 45 orthogonal components. The uncertainties depend on the p_T of the b-jets and were propagated through the analysis as weights. Since b-jets were identified with very high purity, the misidentification rate of c-jets and light-flavour jets was very low and a dedicated calibration was not performed. Only the W + jets background has a small component of misidentified c-jets and light-flavour jets; for the other backgrounds and for the signal processes these components are negligible. Since the rate of the W + jets background was determined directly from the final maximum-likelihood fit, there was no need for an overall rate uncertainty in the W + jets background. Instead, a dedicated shape uncertainty was assigned to the modelling of the contamination by c-jets and light-flavour jets. More details are given in the next section on modelling uncertainties.
The uncertainty in E_T^miss due to a possible miscalibration of its soft-track component was derived from data–simulation comparisons of the p_T balance between the hard and soft E_T^miss components [82]. To account for differences in the pile-up distribution between simulation and data, the pile-up profile in the simulation was corrected to match the one in data, and the uncertainty associated with the correction factor was applied.

Modelling uncertainties
Uncertainties in the theoretical cross-sections were evaluated for the SM top-quark processes (tq, tt, tW and tb) as quoted in Sect. 3.2. The single largest background, W + jets production, was allowed to float in the likelihood fit, and thus a cross-section uncertainty was not applied. A symmetric uncertainty of ±20% was assigned to the Z + jets production cross-section by evaluating the effect of seven variations of μ_r and μ_f in the matrix-element computation [87]. In this estimate, which is meant to account for missing higher-order corrections, the scales were independently varied by factors of 0.5 and 2.0, avoiding the variations with a ratio of four between the two scales. The biggest impact on the cross-sections was found for a correlated variation of μ_r and μ_f. The same uncertainty of ±20% was assigned to diboson production. The uncertainty in the event yield of the multijet background is 30%.
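The seven-point scale-variation prescription can be sketched as follows. The cross-section model below is purely hypothetical, standing in for the generator-weight variations; the point is the choice of (μ_r, μ_f) combinations and the envelope.

```python
import math

def seven_point_envelope(xsec):
    """Envelope of the seven-point (mu_r, mu_f) scale variations.

    `xsec` maps (k_r, k_f) scale factors to a cross-section; the two
    combinations with a ratio of four between the scales, (0.5, 2.0)
    and (2.0, 0.5), are excluded.
    """
    points = [(1.0, 1.0), (0.5, 0.5), (2.0, 2.0),
              (0.5, 1.0), (2.0, 1.0), (1.0, 0.5), (1.0, 2.0)]
    values = [xsec(kr, kf) for kr, kf in points]
    nominal = values[0]
    return max(values) - nominal, min(values) - nominal  # (up, down)

# Hypothetical model with a stronger mu_r than mu_f dependence.
toy = lambda kr, kf: 100.0 * (1.0 - 0.10 * math.log(kr) - 0.05 * math.log(kf))
up, down = seven_point_envelope(toy)
```

For this toy model the largest deviations indeed come from the correlated variations (0.5, 0.5) and (2.0, 2.0), mirroring the behaviour reported in the text.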
Uncertainties in modelling parton showers and hadronisation were assigned to the FCNC signal and the SM top-quark production processes (tt, tW, tq and tb production) by comparing the nominal samples with alternative samples for which METOP and Powheg Box were interfaced to Herwig 7.0.4 instead of Pythia 8.235 or Pythia 8.230, respectively. When generating parton showers, the MMHT2014LO [88] PDF set was used together with the H7-UE-MMHT [40] set of tuned parameters. The uncertainties were defined independently for each scattering process, namely the FCNC signal process and the four SM top-quark production processes. In addition, normalisation and shape effects were decorrelated as well.
Uncertainties related to the choice of renormalisation and factorisation scales for the matrix-element calculations were evaluated by varying the scales in a correlated way by factors of 2 and 0.5, separately for each process. In the case of the FCNC signal processes, dedicated samples of simulated events were generated with varied scales. For the SM topquark production processes and for W + jets production, the scale variations were implemented as generator weights in the nominal sample. These weights were propagated through the entire analysis.
The uncertainty due to the choice of a scale for matching the matrix-element calculation of the tt process to the parton shower was estimated using an additional tt sample produced with the h_damp parameter set to 3 × m_t, while keeping all other generator settings the same as for the nominal sample of tt events. The uncertainty due to the choice of matrix-element-to-parton-shower matching scale used in the generation of the FCNC signal samples was evaluated by comparisons with alternative samples produced with a matching scale of 15 GeV instead of the 10 GeV scale used for the nominal sample. The uncertainty related to the specific algorithm for matching the NLO matrix-element computation to parton showers was evaluated for the SM top-quark production processes (tq, tt, tW and tb) by comparing samples generated by Powheg Box with samples generated by MadGraph5_aMC@NLO [89]; both set-ups used Pythia for the parton-shower computation. The effects of this matching-algorithm uncertainty on the shape of the NN discriminants and on the event yields were decorrelated in the maximum-likelihood fit.
Uncertainties in the amount of initial-state and final-state radiation were assessed for the FCNC signal processes and the SM top-quark production processes by varying the parameter Var3c of the A14 parton-shower tune within the uncertainties of the tune and, for final-state radiation, by varying the renormalisation scale μ_r, at which the strong coupling constant α_s was evaluated, by factors of 0.5 and 2.0. The two variations, the one of Var3c and the one of μ_r, were handled independently. The uncertainty due to the scheme for removing the overlap of the tW process with tt production was evaluated by comparing the nominal sample, using the diagram-removal scheme, with a sample produced with an alternative scheme (diagram subtraction) [50]. In all the uncertainty evaluations mentioned above, the alternative or reweighted samples were normalised to the total cross-section of the nominal samples.
Uncertainties due to PDFs were evaluated for the tq process and the combined tt–tW–tb process using the PDF4LHC15 combined PDF set [90] with 30 symmetric eigenvectors. Samples of simulated events were reweighted to the central value and the eigenvectors of the combined PDF set. Systematically varied templates were constructed by taking the differences between the samples reweighted to the central value and those reweighted to the eigenvectors. In the likelihood fit, the PDF uncertainties were treated as correlated between the tq process and the combined tt–tW–tb process. The uncertainty in the average number of interactions per bunch crossing was accounted for by varying accordingly the scale factors applied to weight the simulated events in order to obtain the pile-up distribution observed in collision data.
The uncertainty in the multijet background was evaluated by modifying the respective selection criteria for the jet-lepton and the non-prompt-muon candidates. For each lepton type, two alternative selections were defined: by varying the requirements on the energy fraction measured in the electromagnetic calorimeter in the case of the jet-lepton, and by varying the isolation criteria for the muon candidates. The variations leading to the larger deviations from the nominal set-up were chosen when defining uncertainties in the shape of the NN discriminant distribution for the multijet background.
With a fraction of 92%, the W + b-jets component dominates the W + jets background. Since the number of simulated events with jets of different flavour, c-jets or light-flavour jets, was very limited, the W + jets template was based on the W + b-jets component only. The expected event yield was scaled such that the events with jets of different flavour were also accounted for. To model small shape differences between the NN-discriminant distributions for W + b-jets, W + c-jets and W + light-flavour jets, two alternative W + jets template histograms were created by adding to the nominal W + b-jets component the W + c-jets and W + light-flavour jets contributions with three times the expected rate. The resulting shape differences were applied in a symmetric way in the maximum-likelihood fit, which constrained the input uncertainties to a level of 80% for W + c-jets in both searches (ugt and cgt) and 40% (70%) for W + light-flavour jets in the ugt (cgt) analysis. In the cgt analysis, using the D1 discriminant, the W + c-jets input uncertainty is approximately ±9% for NN discriminant values below 0.7 and rises to ±12% at high values. The W + light-flavour jets input uncertainty is approximately ±4% for NN discriminant values of 0.75 to 1.0, while at values below 0.1 it reaches ±10%. In the ugt analysis, the corresponding uncertainties in the shapes of the D1 and D2 discriminants have very similar features.
The uncertainties due to the finite number of simulated events, also called the MC statistical uncertainty, were accounted for by adding a nuisance parameter for each bin of the NN discriminant distributions, separately for each scattering process, implementing the Barlow–Beeston approach [91].
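The width assigned to each such per-bin nuisance parameter follows from the sum of squared event weights in the bin; a short sketch with hypothetical weights:

```python
import math

def mc_stat_uncertainty(weights_per_bin):
    """Per-bin MC statistical uncertainty of a weighted template.

    For a bin with content N = sum(w_i), the statistical uncertainty is
    sqrt(sum(w_i^2)); in a Barlow-Beeston-style treatment each bin gets
    one nuisance parameter constrained with this width.
    """
    out = []
    for ws in weights_per_bin:
        n = sum(ws)
        sigma = math.sqrt(sum(w * w for w in ws))
        out.append((n, sigma))
    return out

# Hypothetical weighted MC events filling three discriminant bins:
# same bin content (4.0) but very different numbers of raw events.
bins = [[1.0, 1.0, 1.0, 1.0], [2.0, 2.0], [4.0]]
result = mc_stat_uncertainty(bins)
```

The example illustrates why the treatment is per bin: equal bin contents built from few, heavily weighted events carry a much larger MC statistical uncertainty than those built from many lightly weighted ones.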

Results
The observed distributions of the NN discriminants were subjected to a binned maximum-likelihood fit, probing for a potential FCNC signal. Two analyses were performed, searching separately for the ugt and cgt FCNC processes. The likelihood function L was constructed as a product of Poisson probability terms over all bins of the NN discriminants. The function L(μ, θ) depends on the signal-strength parameter μ, a multiplicative factor applied to the signal production cross-section used to normalise the simulated signal samples, and on θ, a set of nuisance parameters including the effects of systematic uncertainties on the signal and background expectations. The range of each nuisance parameter was constrained in the likelihood function by a Gaussian term. The signal strength of the W + jets process, μ(Wj), was treated as a free multiplicative factor as well. In the ugt analysis, in contrast to the cgt analysis, the rates of the W⁺ + jets and W⁻ + jets processes were determined separately in a simultaneous fit using two independent normalisation parameters.
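The structure of such a likelihood can be sketched for a single systematic source with a simple multiplicative response model; this linear interpolation is a simplified stand-in for the interpolation schemes used in real profile-likelihood fits.

```python
import math

def neg_log_likelihood(mu, thetas, data, signal, background, sys_shifts):
    """-log L(mu, theta) for a binned fit with Gaussian-constrained nuisances.

    expected_i = [mu * s_i + b_i] * prod_j (1 + theta_j * delta_{j,i}),
    where delta_{j,i} is the relative effect of systematic j in bin i.
    Constant terms of the Poisson probabilities are dropped.
    """
    nll = 0.0
    for i, n in enumerate(data):
        exp_i = mu * signal[i] + background[i]
        for j, theta in enumerate(thetas):
            exp_i *= 1.0 + theta * sys_shifts[j][i]
        nll += exp_i - n * math.log(exp_i)
    for theta in thetas:
        nll += 0.5 * theta * theta        # Gaussian penalty terms
    return nll

# One bin, one systematic (10% relative effect), illustrative numbers.
nll_mu1 = neg_log_likelihood(1.0, [0.0], [10], [2.0], [8.0], [[0.1]])
nll_mu0 = neg_log_likelihood(0.0, [0.0], [10], [2.0], [8.0], [[0.1]])
```

Minimising this function over μ and θ simultaneously is what produces the fitted signal strengths and the constrained ("pulled") nuisance parameters discussed below.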
Systematically varied discriminant distributions were smoothed and nuisance parameters of systematic uncertainties with negligible impact were entirely removed in order to reduce spurious effects in minimisation, improve convergence of the fit, and reduce the computing time. Normalisation and shape effects of a source of systematic uncertainty were treated separately in the pruning process.
Single-sided systematic variations were turned into symmetric variations by taking the full difference in event yield and shape between the nominal model and the alternative model and mirroring this difference in the opposite direction. For sources with two variations, their effects were made symmetric by using the average deviation from the nominal prediction.
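Both symmetrisation recipes can be written compactly; the template contents below are hypothetical.

```python
def symmetrise(nominal, up=None, down=None):
    """Return symmetric (up, down) variations of a binned template.

    Single-sided input (only `up` given): the full per-bin difference to
    the nominal is mirrored in the opposite direction.
    Two-sided input: the average of the up and (sign-flipped) down
    deviations is used in both directions.
    """
    if down is None:                       # single-sided: mirror
        delta = [u - n for u, n in zip(up, nominal)]
    else:                                  # two-sided: average deviation
        delta = [0.5 * ((u - n) - (d - n))
                 for u, d, n in zip(up, down, nominal)]
    return ([n + d for n, d in zip(nominal, delta)],
            [n - d for n, d in zip(nominal, delta)])

u1, d1 = symmetrise([10.0, 20.0], up=[12.0, 19.0])                  # mirrored
u2, d2 = symmetrise([10.0, 20.0], up=[12.0, 22.0], down=[9.0, 17.0])  # averaged
```

Note that the two-sided recipe deliberately preserves the sign convention: an "up" variation that moves a bin down stays an anti-correlated shift after averaging.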

Results of the profile likelihood fit
The results of the maximum-likelihood fits yielded μ(ugt) = 0.10 ± 0.18, μ(W⁺j) = 1.25 ± 0.15 and μ(W⁻j) = 1.32 ± 0.17, as well as μ(cgt) = 0.15 ± 0.17 and μ(Wj) = 1.19 ± 0.15. The normalisation factors of the W + jets process were determined to be above 1 in all cases; this finding is similar to that from a dedicated measurement of the cross-section of the Z + ≥1 b-jet process [92]. No significant nuisance-parameter pulls were observed in either fit. The impact of systematic uncertainties on the sensitivity is much larger than that of the data statistical uncertainties. An importance ranking of the systematic uncertainties was determined by computing the shifts Δμ_j in the signal-strength parameters μ(ugt) and μ(cgt) when fixing a particular nuisance parameter θ_j, related to the uncertainty j, to its estimated value θ̂_j plus or minus its post-fit uncertainty Δθ_j. In the ranking, the uncertainties are sorted in |Δμ_j| in descending order.
The five leading systematic uncertainties in the ugt fit are due to the MC statistical uncertainty in the highest bin of the NN discriminant D2 of the W + jets process, the W + c-jets shape uncertainty, the first effective nuisance parameter of the uncertainty in the jet energy resolution, the MC statistical uncertainty in the second-highest bin of the NN discriminant D2 of the W + jets process, and the normalisation component of the uncertainty in the matrix-element-matching algorithm of the tq process. Out of these leading uncertainties, the three non-MC-statistical uncertainties were constrained in the fit to the range of 80% to 90% of their original value. The five leading systematic uncertainties in the cgt fit are due to the modelling of the parton shower of the FCNC cgt process, the shape component of the parton-shower uncertainty of the tq process, the uncertainty in the resolution of the soft-track term of the E_T^miss computation, the shape component of the uncertainty in the matrix-element-matching algorithm of the tq process, and the MC statistical uncertainty in the highest bin of the NN discriminant D1 of the W + jets process. Out of these leading uncertainties of the cgt analysis, the fit constrained the three non-MC-statistical uncertainties to the range of 65% to 90% of their original value. Table 3 provides the expected, the observed, and the fitted event yields in the SR.
The results of the ugt and cgt analyses differ slightly, but agree well within uncertainties. The event yields after the fit account for pulls of the nuisance parameters. The fitted discriminant distributions are shown in Figs. 6 and 7 for the ugt and cgt analyses, respectively.   The observed discriminant distributions are very well described by the fitted model and they are compatible with the background-only hypothesis.

Upper limits on cross-sections, EFT coefficients and branching ratios
Since the observed NN-discriminant distributions were found to be compatible with the background-only hypothesis, upper limits were set on the cross-sections of the ugt and cgt processes at the 95% confidence level (CL). The limits were computed by applying the CLs method [93,94], as implemented in the RooFit package [95], to the profile-likelihood-ratio test statistic q_μ = −2 ln[L(μ, θ̂_μ)/L(μ̂, θ̂)], where μ̂ and θ̂ are the values of the parameters maximising the likelihood function and θ̂_μ are the values of the nuisance parameters which maximise the likelihood function for a fixed value of μ. The obtained upper limits on the cross-sections times branching ratios are σ(u + g → t) × B(t → Wb) × B(W → ℓν) < 3.0 pb and σ(c + g → t) × B(t → Wb) × B(W → ℓν) < 4.7 pb, with B(W → ℓν) = 0.325 being the sum of the branching ratios of all three leptonic decay modes of the W boson.
The expected cross-section-times-branching-ratio limits are 2.4 pb and 2.5 pb, respectively. The observed limits are larger than the expected ones because non-zero signal yields are fitted.
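The CLs construction can be illustrated with a toy counting experiment. This sketch has no nuisance parameters and uses a simple likelihood-ratio test statistic, whereas the analysis uses the full profile-likelihood test statistic in RooFit.

```python
import math
import random

def _poisson(rng, lam):
    """Knuth's Poisson sampler, adequate for small means."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def cls_counting(n_obs, s, b, n_toys=20000, seed=1):
    """Toy-based CL_s for one counting bin with known background b.

    q(n) = -2 ln[P(n | s+b) / P(n | b)]; larger q is more background-like.
    CL_s = P(q >= q_obs | s+b) / P(q >= q_obs | b); the signal is
    excluded at 95% CL when CL_s < 0.05.
    """
    rng = random.Random(seed)
    q = lambda n: -2.0 * (n * math.log((s + b) / b) - s)
    q_obs = q(n_obs)
    p_sb = sum(q(_poisson(rng, s + b)) >= q_obs for _ in range(n_toys)) / n_toys
    p_b = sum(q(_poisson(rng, b)) >= q_obs for _ in range(n_toys)) / n_toys
    return p_sb / p_b if p_b > 0 else 1.0

# Zero observed events, s = 3, b = 1: analytically CL_s = exp(-s) ~ 0.05.
cls = cls_counting(n_obs=0, s=3.0, b=1.0)
```

Dividing p_{s+b} by the background p-value is what protects the CLs method from excluding signals to which the analysis has no sensitivity, at the cost of slightly conservative limits.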
The cross-section limits are interpreted within the TopFCNC model [14], which implements an effective operator formalism and is based on the FeynRules 2.0 framework [96] used inside the MadGraph5_aMC@NLO event generator. With this set-up, the cross-sections of the FCNC processes under consideration were calculated at NLO in QCD, providing a significant improvement on LO calculations, since NLO corrections for this class of processes were found to be between 30% and 80% [14]. In the TopFCNC model, the two operators O_uG^ut and O_uG^ct generate the ugt and cgt processes, and the coupling strengths of the corresponding vertices are given by the two coefficients C_uG^ut and C_uG^ct divided by the square of the new-physics scale Λ. The total cross-sections are found to be proportional to the squared moduli of the EFT coefficients, σ ∝ |C_uG^qt/Λ²|², with process-dependent constants calculated at NLO. Using these relations, the cross-section limits become limits on the EFT coefficients: |C_uG^ut|/Λ² < 0.057 TeV⁻² and |C_uG^ct|/Λ² < 0.14 TeV⁻² at the 95% CL.
Since the u-quark is a valence quark of the proton, it carries on average a much larger momentum fraction than the c-quark, and thus the cross-section of the ugt process is much larger than that of the cgt process for the same value of the corresponding coefficient (C_uG^ut = C_uG^ct). For a given experimental sensitivity, the sensitivity to C_uG^ut is therefore higher than to C_uG^ct. However, in the two-Higgs-doublet models mentioned in Sect. 1, the predicted FCNC couplings to charm quarks are much larger than those to up quarks. For this reason, the limits on C_uG^ct have phenomenological relevance even though they are weaker than the limits on C_uG^ut. The limits presented above tighten constraints set by the CMS Collaboration using dilepton events recorded in Run 2 of the LHC [97] by more than a factor of three; the CMS analysis searched for tW production via FCNC.
An alternative and widely used way of comparing the upper limits on the EFT coefficients with previous results uses the branching ratios of FCNC top-quark decays, B(t → u + g) and B(t → c + g). These branching ratios are proportional to the squares of the EFT coefficients, B(t → q + g) ∝ (|C_{uG}^{qt}|/Λ²)² with q = u, c [98], assuming the top-quark width to be Γ_t = 1.32 GeV. The resulting upper limits at the 95% CL are B(t → u + g) < 0.61 × 10⁻⁴ and B(t → c + g) < 3.7 × 10⁻⁴. These new bounds are approximately a factor of two more restrictive than the previous ATLAS results obtained at a centre-of-mass energy of 8 TeV [12]. The bound on the cgt mode is comparable to that of the CMS analysis combining 7 and 8 TeV data [11], while the bound on the ugt mode is significantly weaker than the CMS one.
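Assuming a common quadratic relation B(t → q + g) = κ · (|C_{uG}^{qt}|/Λ²)², with κ inferred here from the quoted ugt numbers rather than taken from Ref. [98], the consistency of the two branching-ratio limits can be sketched:

```python
# Convert EFT-coefficient limits into branching-ratio limits, assuming
# B(t -> q + g) = kappa * (|C|/Lambda^2)^2 with one common constant kappa.
# kappa is inferred from the quoted ugt limit pair (an illustrative
# assumption, not the value of Ref. [98]); Gamma_t = 1.32 GeV is absorbed
# into kappa.
kappa = 0.61e-4 / 0.057**2   # ~1.9e-2 TeV^4

def br_limit(coeff_limit_tev2: float) -> float:
    """Branching-ratio limit for a coefficient limit |C|/Lambda^2 in TeV^-2."""
    return kappa * coeff_limit_tev2**2

print(f"{br_limit(0.057):.2e}")  # ugt: 6.10e-05 (by construction)
print(f"{br_limit(0.14):.1e}")   # cgt: 3.7e-04, matching the quoted bound
```

That the cgt bound is reproduced from the ugt-derived κ reflects the flavour-independence of the decay-width formula.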

Comparison of expected upper limits
For assessing the sensitivity of this analysis, comparing it with that of other results, and evaluating the impact of different groups of systematic uncertainties, the computation of expected upper limits is more suitable than using the observed results, since biases caused by statistical fluctuations are avoided and the signal contribution is set to zero. The expected limits were derived from the expected distributions of the NN discriminants, considering background processes only. The initially predicted rate of the W+jets process was scaled by a factor of μ(W⁺b) = 1.22 or μ(W⁻b) = 1.30 for the ugt analysis and by a factor of μ(Wb) = 1.18 for the cgt analysis. These normalisation factors were obtained from background-only fits to the observed NN discriminants in background-dominated regions.

Compared to the ATLAS analysis at 8 TeV centre-of-mass energy, significant improvements in sensitivity are obtained for both the ugt and cgt analyses. However, the improvements are smaller than expected from a simple scaling of the number of expected events with the increase in integrated luminosity and in signal cross-sections. The main reason for this effect is that the cross-sections of the top-quark background processes rise faster with the centre-of-mass energy than those of the FCNC signal processes.
The expected upper limits are lower than the observed upper limits in Eq. (8), since non-zero, yet insignificant, signals are observed, while the expected limits are obtained from expected distributions without any signal events included. The effect is larger for the cgt analysis than for the ugt analysis because the fitted signal event yield is more than three times larger in the cgt case, as seen in Table 3.
In order to quantify the impact of different groups of systematic uncertainties, expected upper limits were computed for different scenarios: (1) include only data statistical uncertainties, (2) include the experimental systematic uncertainties in addition, (3) include all systematic uncertainties except for the MC statistical uncertainties and (4) include all uncertainties. The last case leads to the limits quoted in Eq. (9). The results of this study are reported in Table 4 and clearly demonstrate how large the impact of systematic uncertainties is. Both the experimental and modelling uncertainties are relevant. MC statistical uncertainties increase the expected upper limits by approximately 20% in the ugt case and by about 10% for the cgt process.
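The qualitative effect of adding groups of uncertainties can be illustrated with a toy one-bin Gaussian counting approximation; the real analysis uses a binned profile-likelihood fit, and all event numbers and relative uncertainties below are invented for illustration:

```python
# Toy illustration of how adding uncertainty groups inflates an expected
# upper limit. A one-bin Gaussian counting approximation stands in for the
# binned profile-likelihood fit of the analysis; the background yield b and
# the relative systematic uncertainties are hypothetical.
import math

def expected_limit(b: float, rel_systs: list[float]) -> float:
    """Approximate one-sided 95% CL expected upper limit on a signal yield."""
    stat = math.sqrt(b)                                 # Poisson stat. term
    syst = b * math.sqrt(sum(r * r for r in rel_systs)) # quadrature sum
    return 1.645 * math.hypot(stat, syst)               # 95% Gaussian quantile

b = 10000.0                                    # hypothetical background yield
stat_only  = expected_limit(b, [])             # scenario (1)
with_systs = expected_limit(b, [0.02, 0.03])   # e.g. experimental + modelling
print(with_systs > stat_only)  # systematics always weaken (raise) the limit
```

In this approximation each added uncertainty group enters in quadrature, so the largest group dominates the degradation, mirroring the scenario comparison in Table 4.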

Conclusions
A search for the production of a single top quark via left-handed FCNC interactions of a top quark, a gluon and an up or charm quark was performed. The analysis used the full LHC Run 2 proton–proton collision data set recorded with the ATLAS detector at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 139 fb⁻¹. Events with exactly one electron or muon, exactly one b-tagged jet and missing transverse momentum were selected, resembling the decay products of a singly produced top quark. A dedicated high-purity working point was devised for the identification of b-jets, reducing the background of W+c-jets and W+light-flavour jets considerably. Neural networks were used to separate signal events from background events, and a binned maximum-likelihood fit to the neural-network discriminants was performed to search for a contribution from the u + g → t and c + g → t processes. The observed distributions were found to be compatible with the background-only hypothesis, and therefore upper limits on the production cross-sections times branching ratios were derived: σ(u + g → t) × B(t → Wb) × B(W → ℓν) < 3.0 pb and σ(c + g → t) × B(t → Wb) × B(W → ℓν) < 4.7 pb at the 95% confidence level. The cross-section limits were interpreted in the framework of an effective field theory, yielding limits on the coefficients of the operators producing the FCNC processes under investigation: |C_{uG}^{ut}|/Λ² < 0.057 TeV⁻² and |C_{uG}^{ct}|/Λ² < 0.14 TeV⁻² at the 95% confidence level. These limits were also expressed in terms of branching ratios of the corresponding FCNC top-quark decays, resulting in B(t → u + g) < 0.61 × 10⁻⁴ and B(t → c + g) < 3.7 × 10⁻⁴. The new bounds improve on previous ATLAS results obtained at a centre-of-mass energy of 8 TeV by approximately a factor of two [99].
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Funded by SCOAP³.