Configuration and performance of the ATLAS b-jet triggers in Run 2

Several improvements to the ATLAS triggers used to identify jets containing b-hadrons (b-jets) were implemented for data-taking during Run 2 of the Large Hadron Collider from 2016 to 2018. These changes include reconfiguring the b-jet trigger software to improve primary-vertex finding and allow more stable running in conditions with high pile-up, and the implementation of the functionality needed to run the sophisticated taggers used by the offline reconstruction in an online environment. These improvements yielded an order of magnitude better light-flavour jet rejection for the same b-jet identification efficiency compared with the performance in Run 1 (2011–2012). The efficiency to identify b-jets in the trigger, and the conditional efficiency for b-jets that satisfy offline b-tagging requirements to pass the trigger, are also measured. Correction factors are derived to calibrate the b-tagging efficiency in simulation to match that observed in data. The associated systematic uncertainties are substantially smaller than in previous measurements. In addition, b-jet triggers were operated for the first time during heavy-ion data-taking, using dedicated triggers that were developed to identify semileptonic b-hadron decays by selecting events with geometrically overlapping muons and jets.


Introduction
Techniques to identify jets containing b-hadrons (b-jets) are widely used in ATLAS [1], both in searches for new physics and in measurements of Standard Model processes, including properties of the Higgs boson. The ability to select events containing b-jets at the trigger level is crucial when studying or searching for processes containing b-jets, especially those that do not provide any other distinguishing characteristics that are easier to identify, such as high transverse momentum (pT) light leptons (electrons or muons) or missing transverse momentum. In particular, for measurements of processes such as HH→bb̄bb̄ [2,3], H→bb̄ produced via vector-boson fusion (VBF) [4,5], or all-hadronic tt̄H (H→bb̄) [6], or for searches for bottom squarks [7] or for resonances decaying into bb̄ pairs [8], efficient b-jet triggers are crucial for the success of the analyses. In heavy-ion collisions, heavy-flavour jets are considered to be an important signature for understanding the flavour dependence of radiative quark energy loss in the quark-gluon plasma [9].
Discriminating a b-jet from charm (c-) and light-flavour ((u, d, s)-quark or gluon-initiated) jets relies on exploiting the properties of b-hadrons, which have a relatively long lifetime, of the order of 1.5 ps. This leads to a displaced (secondary) vertex, typically a few millimetres from the hard-scatter interaction (primary) vertex. Tracks from the b-hadron decay typically have a large transverse impact parameter, d0, defined as the distance of closest approach to the primary vertex in the transverse plane.¹ A large longitudinal impact parameter, z0, defined as the distance of closest approach along the z-axis, is also a characteristic property of b-jets. Both d0 and z0 are defined to have a positive sign if the track crosses the jet axis in front of the primary vertex with respect to the jet direction of flight, and a negative sign otherwise. Additionally, b-hadrons can decay semileptonically (either promptly, or via the subsequent decay of a charm hadron) into electrons or muons, each with a branching ratio of ∼20%, in which case they can be characterised by the presence of a relatively low-pT lepton that is geometrically matched to a jet. A schematic diagram of an interaction producing a b-jet plus two light-flavour jets is shown in Figure 1 and illustrates some of the features that can be used to identify b-jets.

Figure 1: A schematic diagram of an interaction producing two light-flavour jets and one b-jet, shown in the transverse plane. The lifetime of b-hadrons corresponds to a transverse decay length of typically a few mm, and produces displaced tracks originating from a secondary vertex. The distance of closest approach of a displaced track to the primary vertex is defined as the transverse impact parameter, d0, and is typically large for tracks originating from the decay of b-hadrons. Conversely, jets initiated by light-flavour quarks or gluons do not exhibit these features and typically contain mostly prompt tracks originating from the primary vertex.
¹ ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upward. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the z-axis. The pseudorapidity is defined in terms of the polar angle θ as η = −ln tan(θ/2). Transverse momenta and energies are defined as pT = p sin θ and ET = E sin θ, respectively. Angular distance is measured in units of ΔR = √((Δη)² + (Δφ)²).

The identification of b-jets requires precise tracking information in order to accurately reconstruct secondary
vertices and measure the impact parameters of tracks relative to the primary vertex. When b-tagging is performed offline, precision tracking information is available for the entire detector, but the CPU requirements of this approach are prohibitively large for the trigger, where the average processing time per event must not exceed 500 ms. Identifying b-jets in the trigger therefore poses particular challenges, so the software is designed to use the available resources optimally in order to provide the best possible performance.
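To make the coordinate conventions of the footnote concrete, the pseudorapidity and angular-distance definitions can be written as a short Python sketch (the function names are illustrative and not part of any ATLAS software):

```python
import math

def pseudorapidity(theta: float) -> float:
    """eta = -ln tan(theta / 2), with theta the polar angle in radians."""
    return -math.log(math.tan(theta / 2.0))

def delta_r(eta1: float, phi1: float, eta2: float, phi2: float) -> float:
    """Angular distance Delta R = sqrt(dEta^2 + dPhi^2).

    The phi difference is wrapped into [-pi, pi] so that Delta R is
    well defined across the phi = +/- pi boundary.
    """
    deta = eta1 - eta2
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(deta, dphi)
```

A track perpendicular to the beamline (θ = π/2) has η = 0, and the wrap-around means two directions just either side of φ = π are correctly treated as close together.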
The b-jet trigger software can be broadly considered to consist of two steps:

1. Identifying the coordinates of the hard-scatter interaction point (primary-vertex finding).
2. Reconstructing secondary vertices and assessing the probability that a given jet originated from a b-hadron decay (b-tagging).
Jets passing the specified transverse energy (ET) requirements are used as seeds to identify which regions of the detector should be processed further in the trigger. One b-jet trigger can make use of several different jet-ET thresholds, using all jets with ET > 30 GeV for primary-vertex finding and variable ET thresholds for the jets to be evaluated for b-tagging. Jet reconstruction and identification in the trigger is described in Section 4.
Two different tracking configurations are used in the b-jet triggers and are presented in Section 5: a 'Fast Tracking' algorithm for primary-vertex finding, and 'Precision Tracking' for b-tagging. Different track-pT thresholds are also applied (e.g. harder tracks for vertexing, softer tracks for b-tagging).
Offline algorithms are used for primary-vertex finding [10] and b-tagging [11] in order to maximise the correlation between the trigger and the offline reconstruction, since this provides the best overall performance for physics analyses in which both components are required. In particular, using the same b-tagging algorithms in both the offline and online environments significantly increases the overall efficiency for physics analyses that depend on b-jet triggers, because the same events are more likely to be accepted both by the trigger and offline than if different taggers are used. The offline taggers are also the most sophisticated taggers developed by the ATLAS Collaboration and therefore provide the best available signal selection and background rejection. The b-tagging of jets is described in Section 6, where the performance of the b-jet triggers is also shown.
ATLAS successfully used b-jet triggers throughout the Run 1 data-taking campaign, and several improvements to the b-jet triggers were implemented during the long shutdown period (2013–2014) to further improve performance for Run 2 (2015–2018) data-taking. The new b-jet triggers were commissioned during 2015, while the Run-1-style b-jet triggers (i.e. those that used the same software and b-tagging algorithms as in Run 1 but benefited from other upgrades to the ATLAS detector and trigger system) were the primary triggers for physics analyses using the data taken that year. The new triggers were deployed online as the primary triggers from 2016 onward, and they form the focus of this paper. The evolution of the b-jet trigger menu (i.e. the set of triggers run online) from 2016 to 2018 is described in Section 7.
The efficiency of the b-jet triggers is evaluated in simulation and measured in data using the same likelihood-based method [11] that is used to evaluate the performance of the offline flavour-tagging. This calibration of the b-jet triggers, and their performance relative to offline flavour-tagging, is described in Section 8.
Specially designed b-jet triggers were implemented for running during the lead-ion (Pb+Pb) collisions provided by the Large Hadron Collider (LHC) [12] in 2018, to preferentially select semileptonic decays of b-hadrons, characterised by the presence of a low-pT muon matched to a jet. This approach provided a mechanism to study b-jets in Pb+Pb collisions, where the high rates and the high CPU cost of running tracking algorithms on all jets made it unfeasible to run the standard b-jet triggers. The muon-jet triggers used during Pb+Pb data-taking are presented in Section 9.

ATLAS detector and trigger system
The ATLAS detector at the LHC covers nearly the entire solid angle around the collision point. It consists of an inner tracking detector surrounded by a thin superconducting solenoid, electromagnetic and hadronic calorimeters, and a muon spectrometer incorporating three large superconducting toroidal magnets.
The inner-detector system is immersed in a 2 T axial magnetic field and provides charged-particle tracking in the range |η| < 2.5. The high-granularity silicon pixel detector covers the vertex region and typically provides four measurements per track, the first hit normally being in the insertable B-layer installed before Run 2 [13,14]. It is followed by the silicon microstrip tracker, which usually provides eight measurements per track. These silicon detectors are complemented by the transition radiation tracker (TRT), which enables radially extended track reconstruction up to |η| = 2.0. The TRT also provides electron identification information based on the fraction of hits (typically 30 in total) above a higher energy-deposit threshold corresponding to transition radiation.
The calorimeter system covers the pseudorapidity range |η| < 4.9. Within the region |η| < 3.2, electromagnetic calorimetry is provided by barrel and endcap high-granularity lead/liquid-argon (LAr) calorimeters, with an additional thin LAr presampler covering |η| < 1.8 to correct for energy loss in material upstream of the calorimeters. Hadronic calorimetry is provided by the steel/scintillator-tile calorimeter, segmented into three barrel structures within |η| < 1.7, and two copper/LAr hadronic endcap calorimeters. The solid-angle coverage is completed with forward copper/LAr and tungsten/LAr calorimeter modules optimised for electromagnetic and hadronic measurements, respectively.
The muon spectrometer comprises separate trigger and high-precision tracking chambers measuring the deflection of muons in a magnetic field generated by the superconducting air-core toroids. The field integral of the toroids ranges between 2.0 and 6.0 T m across most of the detector. A set of precision chambers covers the region | | < 2.7 with three layers of monitored drift tubes, complemented by cathode-strip chambers in the forward region, where the background is highest. The muon trigger system covers the range | | < 2.4 with resistive-plate chambers in the barrel, and thin-gap chambers in the endcap regions.
Interesting events are selected by the first-level (L1) trigger system, implemented in custom hardware, followed by selections made by algorithms implemented in software in the high-level trigger (HLT) [15]. The L1 trigger uses coarse-granularity signals from the calorimeters and the muon system with a 2.5 μs fixed latency and accepts events from the 40 MHz bunch crossings at a rate below 100 kHz, which the HLT further reduces in order to record events to disk at about 1 kHz. Regions-of-interest (RoIs) from the L1 trigger are used to define 3D spatial regions of the detector. Only the RoIs selected by the L1 trigger are processed in the HLT, in order to minimise algorithm execution times and computing costs. Events accepted by the HLT are subsequently fully reconstructed offline.

Simulated event samples
The simulated tt̄ events [25] were interfaced to Pythia 8.230 [26] to model the parton shower, hadronisation, and underlying event, with parameter values set according to the A14 tune [27] and using the NNPDF2.3lo set of PDFs [28]. The decays of bottom and charm hadrons were performed by EvtGen 1.6.0 [29]. The tt̄ sample was normalised to a cross-section of 832 ± 51 pb, corresponding to the prediction at next-to-next-to-leading order in QCD, including the resummation of next-to-next-to-leading-logarithmic soft-gluon terms, calculated using Top++ 2.0 [30–36]. At least one top quark was required to decay into a final state with a lepton. Other MC processes used in the b-jet trigger efficiency measurement and calibration described in Section 8 are the same as those used in Ref. [11].
For certain studies (for example, the hybrid tuning described in Section 6.1), a sample of simulated high-pT b-jets was required. In these cases, simulated Z′→qq̄ events are used, where the Z′ boson has a mass of 1 TeV and equal branching fractions into light-, c-, and b-flavour quark-antiquark pairs. The samples were generated using Pythia 8.165 with the NNPDF2.3lo PDF set and the A14 set of tuned parameters.
The effect of multiple interactions per bunch crossing, as well as the effect on the detector response of interactions from bunch crossings before or after the one containing the hard interaction, was modelled by overlaying the hard-scatter interactions with events from the Pythia 8.160 generator, using the NNPDF2.3lo PDF set and the A3 parameter tune [37]. Simulated events were then processed through the ATLAS detector simulation [38] based on Geant4 [39].
Jets in simulations are assigned labels based on geometric matching to particle-level information in the MC event record. Jets that are matched to a weakly decaying b-hadron with pT ≥ 5 GeV within ΔR = 0.3 of the jet axis are labelled as b-jets. If the b-jet labelling requirements are not satisfied, the procedure is repeated for charm hadrons and then for τ-leptons. Any remaining jets are labelled as light-flavour.
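The labelling priority described above can be sketched in Python (a minimal illustration; the tuple layout and function names are ours, and a flat list of particle-level candidates stands in for the full MC event record):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance with the phi difference wrapped into [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def label_jet(jet_eta, jet_phi, particles):
    """Assign a flavour label by geometric matching, in priority order:
    weakly decaying b-hadron, then charm hadron, then tau-lepton; any
    remaining jet is labelled light-flavour. `particles` is a list of
    (flavour, pt_gev, eta, phi) particle-level entries."""
    for wanted in ("b", "c", "tau"):
        for flavour, pt, eta, phi in particles:
            if (flavour == wanted and pt >= 5.0
                    and delta_r(jet_eta, jet_phi, eta, phi) < 0.3):
                return wanted
    return "light"
```

The priority ordering matters: a jet containing both a b-hadron and a charm hadron (as in a b → c decay chain) is labelled as a b-jet.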
The LHC also operates a heavy-ion physics programme, in which lead-lead (Pb+Pb) and proton-lead (p+Pb) collisions are used to study the quark-gluon plasma. Specially modified b-jet triggers, designed to select semileptonic b-hadron decays characterised by a muon geometrically matched to a jet, were operated during the 2018 Pb+Pb run, where 1.7 nb⁻¹ of data with a nucleon-nucleon centre-of-mass energy √sNN = 5.02 TeV and a peak luminosity of 6.2 × 10²⁷ cm⁻² s⁻¹ were collected.

Trigger jets
The b-tagging of jets online (i.e. at the trigger level) requires that jets first be reconstructed by the trigger and pass a given transverse energy threshold, initially at L1 and subsequently in the HLT [40]. In general, only calorimeter information is used to identify and measure the properties of jets at the trigger level, and they are characterised by their ET. This is in contrast to the offline environment [41], where information from the tracking detectors is available for all jets and jets are described in terms of their transverse momentum.

L1 jet reconstruction
Jets are identified by the L1 calorimeter trigger [42,43] in a window of 8 × 8 trigger towers that includes a 2 × 2 local maximum defining the RoI's coordinates. Trigger towers are formed independently for the electromagnetic and hadronic calorimeter layers, with a finer granularity of approximately Δη × Δφ = 0.1 × 0.1 in the central |η| < 2.5 part of the detector and a coarser granularity for |η| > 2.5. The summed energy of deposits in both the electromagnetic and hadronic calorimeters is required to pass the minimum ET requirements of a given trigger item. Jets can be identified at L1 out to |η| = 4.9, although usually only jets out to |η| = 3.2 are considered for b-jet trigger chains (and b-tagging is only run on jets out to |η| = 2.5). For the multi-b-jet triggers that have low ET thresholds, jets are required to be within the acceptance of the tracking detectors (i.e. |η| < 2.5) in order to lower the rates at L1. Requirements are placed on the L1 jets to select events for further processing in the HLT, and also to seed the HLT jet reconstruction. A new topological trigger (L1Topo) [15] that uses field-programmable gate arrays (FPGAs) was installed and commissioned in 2016. L1Topo provides the functionality to make selections based on geometric or kinematic matching between different L1 objects and to refine the selection criteria used at L1.
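As an illustration only, this window-based jet finding can be sketched as follows, assuming a single grid of summed electromagnetic-plus-hadronic tower ET values and a strict local-maximum condition in place of the hardware's tie-breaking rules:

```python
def l1_jet_rois(towers, threshold):
    """Much-simplified sketch of the L1 sliding-window jet finder:
    - sum tower ET in overlapping 2x2 blocks,
    - keep blocks that are local maxima among their 8 neighbouring blocks,
    - require the surrounding 8x8 window of towers to pass the ET threshold.
    `towers` is a 2D list of tower ET values (one summed layer here)."""
    n, m = len(towers), len(towers[0])

    def block(i, j):  # 2x2 tower sum; indices outside the grid contribute zero
        return sum(towers[a][b] for a in (i, i + 1) for b in (j, j + 1)
                   if 0 <= a < n and 0 <= b < m)

    def window(i, j):  # 8x8 tower sum centred on the 2x2 block
        return sum(towers[a][b] for a in range(i - 3, i + 5)
                   for b in range(j - 3, j + 5)
                   if 0 <= a < n and 0 <= b < m)

    rois = []
    for i in range(n - 1):
        for j in range(m - 1):
            neighbours = [block(i + di, j + dj)
                          for di in (-1, 0, 1) for dj in (-1, 0, 1)
                          if (di, dj) != (0, 0)]
            if block(i, j) > max(neighbours) and window(i, j) >= threshold:
                rois.append((i, j, window(i, j)))
    return rois
```

The real system decluster-resolves ties between equal neighbouring blocks and treats the electromagnetic and hadronic layers as separate inputs; neither detail is modelled here.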

HLT jet reconstruction
Jets are reconstructed in the HLT using the anti-kt jet clustering algorithm [44,45]. Only jets with radius parameter R = 0.4 were considered for b-tagging during proton-proton data-taking, although jets with radii of 0.2 or 0.3 were also used during the Pb+Pb data-taking in 2018. The calorimeter topoclusters [46] that are used as inputs to the HLT jet algorithm are reconstructed from the full set of calorimeter cell information and calibrated at the electromagnetic scale. The jets are then calibrated using a procedure similar to that used for offline jets [47], by subtracting contributions to the jet energy from pile-up and applying pT- and η-dependent calibration factors derived from simulations.
Two sets of jets are used in the b-jet trigger. As a first step, all jets with ET > 30 GeV are used to find the primary vertex of the event, as described in Section 4.2.1. In the second step, RoIs are constructed for jets passing the specific ET threshold(s) of that trigger, as described in Section 4.2.2.

Super-RoI approach for primary-vertex finding
While the usual approach of sequentially processing individual RoIs is acceptable in 'quiet' events where only a few RoIs are selected, in events with significant activity, e.g. those with large jet multiplicities and/or higher pile-up, this approach can lead to the same regions of the detector being processed multiple times, as illustrated in Figure 2(a). In addition to the clear downside of wasting CPU resources, this approach has the added disadvantage of potentially biasing the primary-vertex finding (described in Section 5.1) by double-counting tracks in overlapping regions. An alternative approach is to consider an amalgamation of the individual RoIs, removing any overlapping regions so that these are only processed once (as illustrated in Figure 2(b)). This 'super-RoI' functionality provides a means to perform primary-vertex finding (along the beamline) in a uniform way, regardless of the jet thresholds fulfilled.
This approach was used for primary-vertex finding in the b-jet triggers from 2016 onward, by consolidating all HLT jets with ET > 30 GeV and |η| < 2.5 into a super-RoI. The super-RoI constituents were defined with an η and φ half-width (half of the full width) of 0.2 during 2016. In 2017 and 2018 this was reduced to 0.1 in both directions with negligible loss of b-jet trigger performance. No constraint in the z-direction is applied, and the super-RoI covers the full z range of the detector (±225 mm around z = 0).
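The bookkeeping benefit of the super-RoI can be illustrated with a simple sketch in which each constituent is an (η, φ) box and a detector element is processed if it lies inside any box, regardless of how many boxes overlap it (φ wrap-around and the z extent along the beamline are ignored here for brevity; the thresholds follow the values quoted above):

```python
def build_super_roi(jets, half_width=0.1):
    """Collect one (eta, phi) box per jet passing the vertexing selection
    (ET > 30 GeV, |eta| < 2.5). `jets` is a list of (et_gev, eta, phi)."""
    return [(eta - half_width, eta + half_width,
             phi - half_width, phi + half_width)
            for et, eta, phi in jets if et > 30.0 and abs(eta) < 2.5]

def in_super_roi(eta, phi, boxes):
    """A detector element is processed exactly once if it lies inside any
    constituent box; overlaps between boxes cause no double counting,
    since membership is a single yes/no decision."""
    return any(lo_e <= eta <= hi_e and lo_p <= phi <= hi_p
               for lo_e, hi_e, lo_p, hi_p in boxes)
```

Processing hits by iterating over detector elements and testing `in_super_roi` once per element is what avoids the duplicated work of the RoI-by-RoI approach shown in Figure 2(a).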

Figure 2: A representation of the two different approaches to processing RoIs in the detector. In the standard approach (a), each RoI is treated separately, resulting in overlapping regions of the detector being processed multiple times. In the super-RoI approach (b), the different RoIs are amalgamated into a single complex region of detector space, thus avoiding the problems associated with processing the same detector region multiple times.

RoIs for b-tagging jets
The jets that will be considered for b-tagging are formed from RoIs with |η| < 2.5 and a half-width in the η and φ directions of 0.4 around the jet axis, with the apex centred on the primary-vertex position. A schematic diagram illustrating the RoI defined for a single jet (passing the relevant ET requirements at each step) and used in the trigger is shown in Figure 3. The width along the z-direction was conservatively constrained to ±20 mm either side of the primary vertex during 2016, and tightened to ±10 mm in 2017 and 2018 with negligible loss of performance. This requirement dramatically reduces the volume in which the tracking must be run and makes the choice of an RoI half-width of 0.4 affordable in terms of the CPU processing time of the trigger software. This half-width of 0.4 is comparable to the radius parameter of 0.4 used for anti-kt jets in the offline reconstruction and ensures that the jet is fully contained within the RoI volume. It provides better tagging performance, particularly for softer jets, than the half-width of 0.2 that was used for the b-jet triggers in Run 1. Jets selected for b-tagging are also required to pass the specific ET thresholds of that particular trigger. If these ET requirements are not satisfied, the b-jet trigger algorithms are terminated and no further processing is performed.

Global sequential jet calibration
An improved jet energy calibration scheme, the global sequential jet calibration (GSC) [47,48], was introduced for the 2017 data-taking in order to improve the jet energy resolution in the HLT. The GSC uses information about the longitudinal shower shapes of jets, and characteristics of the associated tracks, to correct the energy scale of the jets. The GSC profits from the availability of the primary vertex and the precision tracking information already provided by the b-jet trigger (described in Section 5). Using the calibrated jet pT measurement from the GSC, a tighter jet selection can subsequently be applied to the jets evaluated for b-tagging in the b-jet trigger, resulting in better efficiency turn-on curves. The GSC is also used to improve the trigger efficiency turn-on curves for inclusive jet triggers.
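Schematically, the GSC is a chain of multiplicative, observable-dependent corrections applied one after another. The sketch below conveys only this sequential structure; the observable names and correction functions are placeholders, not the ATLAS-derived ones:

```python
def apply_gsc(jet_pt, observables, corrections):
    """Sketch of a global sequential calibration: multiplicative
    corrections are applied in sequence, each depending on one jet
    observable (e.g. a shower-shape variable or the number of associated
    tracks). `corrections` is an ordered list of (observable_name, fn)
    pairs, where fn maps the observable's value to a correction factor."""
    pt = jet_pt
    for name, correction in corrections:
        pt *= correction(observables[name])
    return pt
```

Because the corrections are multiplicative and each is derived to leave the average response unchanged, they mainly reduce the spread of the response, i.e. improve the resolution.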

Tracking and vertex finding
Tracking must be run inside the RoIs of the HLT jets in order to find the primary and secondary vertices and to extract information about the jet properties, including the likelihood that a jet originates from a heavy-flavour hadron decay.
The HLT tracking was redesigned for Run 2 in order to fully benefit from the merging of the two stages of the high-level trigger that had been used in Run 1 [15,49,50]. Information about hits in the silicon detectors is extracted for each RoI, and a custom fast-tracking stage generates triplets of hits that are then used to seed track candidates. The track candidates are extended into the rest of the silicon detector using the offline combinatorial track-finding tool [51]. A fast Kalman filter [52] is subsequently used to define the track candidates. These steps comprise the 'Fast Tracking' algorithm that is used by the b-jet trigger for primary-vertex finding (described in Section 5.1). These tracks typically have a resolution of better than ∼100 μm for their z-position along the beamline.
Precision Tracking is also available in the HLT. The Fast Tracking algorithm is run as a first step, and the tracks are subsequently passed to the offline ambiguity-solving algorithm [51], which (among other functions) removes duplicate tracks, and are then extended into the TRT. This second stage greatly improves the resolution of the track parameters and removes many fake track candidates produced by the Fast Tracking, which is optimised for efficiency rather than purity. In the b-jet trigger, the Precision Tracking is run on all jets that pass the minimum ET thresholds to be further considered for b-tagging (discussed in Section 5.2).
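The role of the ambiguity-solving step can be illustrated with a toy version that ranks candidates by a quality score and rejects any candidate sharing too many hits with an already accepted track (the offline algorithm uses a considerably richer scoring and hit assignment than this greedy sketch):

```python
def resolve_ambiguities(candidates, max_shared_hits=1):
    """Toy ambiguity resolution: sort track candidates by descending
    quality score, accept them greedily, and reject any candidate that
    shares more than `max_shared_hits` hits with a track already kept.
    `candidates` is a list of (score, set_of_hit_ids) pairs."""
    accepted = []
    for score, hits in sorted(candidates, key=lambda c: -c[0]):
        if all(len(hits & kept) <= max_shared_hits for _, kept in accepted):
            accepted.append((score, hits))
    return accepted
```

Two candidates built from largely the same hits are typically duplicates of one real track; keeping only the better-scoring one is what removes the fakes that inflate the Fast Tracking output.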

Primary-vertex finding
Precisely determining the position of the primary vertex of the event is the crucial first step in evaluating the probability that a jet is a b-jet (the 'b-tagging weight'). Only once the primary-vertex position is known can secondary vertices be reconstructed and evaluated to determine the final b-tagging weight.
The Fast Tracking algorithm is run for all regions of the detector encompassed by the super-RoI, described in Section 4.2.1, and the tracks found are used as inputs to the primary-vertex-finding algorithm. The same iterative primary-vertex-finding algorithm that is used offline [10] was used in the b-jet trigger from 2016 onward. The algorithm looks for combinations of tracks that have compatible z-positions, and the primary vertex is chosen to be the one with the highest Σ pT² of associated tracks. This improves the precision with which the primary vertex is reconstructed by approximately 10% (in each direction) compared with an alternative histogram-based approach used during Run 1 and in 2015 [50]. In the histogramming approach, the z-coordinate positions of all tracks in an event, relative to the centre of the beamspot, were weighted by their pT and used to populate a histogram with a 1 mm bin width. The centre of the most populated bin was taken to be the primary-vertex z coordinate, with the online beamspot position then used to define the x and y coordinates. A comparison of the performance of the histogram-based and iterative primary-vertex-finding algorithms used in the trigger is shown in Figure 4, which displays the differences between the primary-vertex coordinates found online and offline in simulated tt̄ events.
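The Run-1-style histogramming approach is simple enough to sketch directly from the description above (1 mm bins, pT-weighted entries, and the ±225 mm z range quoted in Section 4.2.1); this is an illustration, not the trigger code:

```python
def histogram_vertex_z(tracks, bin_width=1.0, z_range=225.0):
    """Histogramming vertex finder (sketch): fill the z positions of
    tracks, weighted by track pT, into fixed-width bins and return the
    centre of the most populated bin as the primary-vertex z coordinate.
    `tracks` is a list of (z_mm, pt_gev) pairs relative to the beamspot
    centre; distances are in mm."""
    nbins = int(2 * z_range / bin_width)
    hist = [0.0] * nbins
    for z, pt in tracks:
        if -z_range <= z < z_range:
            hist[int((z + z_range) / bin_width)] += pt
    imax = max(range(nbins), key=hist.__getitem__)
    return -z_range + (imax + 0.5) * bin_width
```

The pT weighting makes the peak from the hard-scatter vertex stand out against the softer pile-up vertices, but the result is quantised at the bin width, which is one reason the iterative offline-style algorithm achieves better precision.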
In Run 1 and in 2015–2016, tracks with pT > 1 GeV were considered for primary-vertex finding. In 2017 and 2018 this threshold was raised to 5 GeV, reducing the CPU cost of primary-vertex finding (and its associated tracking) by a factor of five, with a negligible effect on the primary-vertex-finding efficiency or the b-jet trigger efficiencies.

Tracking for secondary-vertex finding and b-tagging
For each trigger, jets are selected for further processing if they pass the lowest ET threshold. Precision Tracking, consisting of the Fast Tracking plus the ambiguity-solving step, is run in the RoIs corresponding to these jets, and all tracks with pT > 1 GeV are kept. The tracks found at the primary-vertex-finding stage cannot be reused as the Fast Tracking inputs to the ambiguity-solving step of the Precision Tracking for b-tagging, since different regions of the detector are considered in the two stages.
The tracks in the RoI are used, together with information about the jet direction and the primary vertex, as inputs to the b-tagging algorithms (described in Section 6).

Figure 4: The differences between the primary-vertex coordinates found online and offline in the (a) x, (b) y, and (c) z directions when using the histogramming approach and the iterative primary-vertex-finding algorithm in the b-jet trigger. Selected events must pass a trigger requiring a single jet with ET > 55 GeV. Tracks from all jets in the event that satisfy the super-RoI requirements described in Section 4.2.1 are considered as inputs to the primary-vertex-finding algorithms.

Tracking performance in b-jet triggers
To evaluate the performance of the tracking used in the b-jet triggers, offline tracks are selected and matched to online tracks using the procedure described in Ref. [50]. The efficiencies of the Fast and Precision Tracking algorithms used in the b-jet triggers relative to the offline tracking are shown as functions of the offline track transverse momentum and pseudorapidity in Figure 5. The d0 and z0 resolutions are shown in Figure 6. Both figures show results for the Fast Tracking within the super-RoI discussed in Section 4.2.1 that is used to find the primary vertex, as well as for the Fast and Precision Tracking that is used for secondary-vertex finding and b-tagging within the individual jets. Results are produced using dedicated 'b-jet performance triggers' that require jet ET thresholds of 55 GeV or 150 GeV and run the full tracking and b-tagging software, but place no requirements on the b-tagging weight of the jet. These provide an unbiased estimate of the tracking efficiency. Both triggers were prescaled during the data-taking period (meaning that not every event satisfying the trigger requirements was recorded for further processing). The 150 GeV threshold trigger was run with a lower prescale factor, and correspondingly improved statistical precision, compared with the 55 GeV trigger, particularly at high transverse momenta. The data used were collected during a single run in 2018. The average pT of tracks in the RoI is correlated with the jet ET threshold of the trigger. The 150 GeV jet trigger therefore has a higher proportion of high-pT tracks than the trigger requiring a 55 GeV jet. These differences in the track pT spectra mean that the track reconstruction efficiency at low track pT appears slightly worse in the 55 GeV trigger than in the 150 GeV trigger: within a single bin, the former contains relatively more tracks at low pT, and the efficiency of some bins is therefore skewed by the steeply falling pT distribution.
Tracks selected by the lower-threshold chain are therefore more sensitive to threshold effects when matching to offline tracks, which also causes the integrated efficiency to be slightly lower. The d0 and z0 resolution distributions are largely insensitive to the jet ET threshold of the trigger and are therefore only shown here for the data collected using the trigger with a 55 GeV threshold.
The Fast Tracking for the primary vertex is configured to reconstruct only tracks with pT above 5 GeV, so the efficiencies and resolutions are evaluated only for offline tracks that fulfil the same requirement. For the Fast and Precision Tracking used for b-tagging, the efficiencies and resolutions are calculated relative to offline tracks with transverse momentum above 1 GeV. The requirement of pT > 5 GeV applied during pattern recognition in the Fast Tracking used for primary-vertex reconstruction means that the track-finding efficiency is very sensitive to the track momentum resolution around the offline track pT threshold of 5 GeV, and it also slightly reduces the track reconstruction efficiency at higher pT. Partly as a consequence of this track pT threshold, the presence of inactive pixel modules has the potential to affect the reconstruction of a large fraction of tracks in a super-RoI constituent; the individual constituent RoIs are so narrow that their width in both η and φ may often span no more than a single module of the innermost pixel layers. The primary-vertex tracking at all transverse momenta is therefore very sensitive to inactive modules in these inner layers, and a reduction in efficiency of up to a few percent is observed in some regions of η. This results in a lower overall tracking efficiency than for either the Fast or Precision Tracking when executed in a wider region of interest. Since the purpose of the vertex tracking is only to identify the z-position of the primary vertex for the second-stage Precision Tracking, the reduced track reconstruction efficiency does not lead to any significant performance loss in the trigger.
The efficiency is generally better than 99% at higher pT but is somewhat lower for the Precision Tracking near the 1 GeV track pT threshold. The Precision Tracking efficiency in the first bin, between 1 GeV and 1.2 GeV, drops to 84% due to a tight selection on the transverse momentum of the candidates used by the ambiguity solver, which is needed to reduce the execution time. For that reason, this efficiency point is not seen in Figure 5. This reduced efficiency near threshold is the primary reason for the slightly lower efficiency of the Precision Tracking as a function of track pseudorapidity.
The z0 and d0 resolutions improve at higher transverse momenta to approximately 70 μm and 20 μm respectively, taking the mean across the full pseudorapidity range, with a z0 resolution as low as 40 μm for tracks perpendicular to the beamline. The deterioration of the tracking resolution at large |η|, as the tracks traverse more material at larger angles, can be seen clearly. An improvement of the z0 resolution by a factor of two at low pT, and by nearly 100 μm in the endcap, is observed for the Precision Tracking compared with the Fast Tracking. For d0 the improvement is 10 μm at low pT compared with the Fast Tracking, and approximately 5 μm at large pT and central pseudorapidities.

HLT b-jet identification
A schematic overview of the complete sequence of algorithms that form the b-jet trigger is shown in Figure 7. The final stage of the b-jet trigger is to assess the probability that jets that passed the required pT thresholds originated from a b-hadron decay. The output of the b-tagging algorithm is evaluated for each individual jet, and the requirements of the trigger are assessed. If these are satisfied, the event is kept; otherwise it is discarded.

b-tagging algorithms
The probability that a given jet originated from a b-hadron decay is assessed by using low-level algorithms to match tracks to jets, reconstruct secondary vertices, and identify tracks with large impact parameters relative to the primary vertex. The same 'shrinking cone' algorithm that is used offline [11] is employed for matching tracks to jets. The outputs of these low-level b-tagging algorithms are then used as inputs to multivariate algorithms that provide excellent discrimination between b-jets and light-flavour jets or c-jets.
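As an illustration of the track-to-jet association, the sketch below performs ΔR matching with a cone whose size shrinks with jet pT. The cone parametrisation (r_max, r_min, and the pT scale) is hypothetical and chosen only to show the shape of the technique; the actual parameter values of the shrinking-cone algorithm are given in Ref. [11].

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance between two objects, wrapping dphi into (-pi, pi]."""
    deta = eta1 - eta2
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(deta, dphi)

def cone_size(jet_pt, r_max=0.4, r_min=0.24, pt_scale=30.0):
    """Hypothetical shrinking-cone parametrisation: higher-pT jets are more
    collimated, so the association cone narrows with jet pT (in GeV)."""
    return r_min + (r_max - r_min) * math.exp(-jet_pt / pt_scale)

def tracks_in_jet(jet, tracks):
    """Associate tracks with a jet if they fall inside the shrinking cone."""
    r = cone_size(jet["pt"])
    return [t for t in tracks if delta_r(jet["eta"], jet["phi"],
                                         t["eta"], t["phi"]) < r]
```

The φ wrap-around in `delta_r` matters for jets near φ = ±π, where a naive difference would overestimate the separation.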
Four low-level algorithms that exploit different features of b-hadron decays are used in ATLAS: • IP2D: Uses the signed transverse impact parameter significance (defined as d0/σ(d0), where σ(d0) is the uncertainty on the reconstructed d0) of tracks associated with a jet [53]. Reference histograms derived using MC simulations provide probability density functions that are used to calculate the probabilities that a given track originated from a b-jet, c-jet, or light-flavour jet. The ratios of the per-track probabilities for each jet-flavour hypothesis are calculated, and their logarithms summed for all tracks to provide a per-jet probability of the jet's flavour origin. Three separate discriminants are defined, separating b-jets from light-flavour jets, c-jets from light-flavour jets, and b-jets from c-jets.
• IP3D: Uses a log-likelihood-ratio discriminant similar to those in IP2D, but uses both the transverse and longitudinal signed impact parameter significances to construct the track flavour origin probability density functions [53]. The longitudinal impact parameter significance is defined as z0/σ(z0), where σ(z0) is the uncertainty on the reconstructed z0.

Figure 7: A schematic overview of the different components of the b-jet trigger sequence. HLT jets (grey boxes) are used as inputs to the primary-vertex finding (pink boxes) and b-tagging of jets that point towards the primary vertex (blue boxes). The GSC (dashed outline), as provided by the HLT jets (described in Section 4.3), can be applied as an optional step, and in this case a second requirement is placed on the jet pT, using the calibrated value.
• SV1: Creates two-track secondary vertices for all combinations of tracks associated with the jet [54].
The secondary vertices are identified using a Kalman filter [55] that uses the Billoir method [56]. Tracks compatible with decays of long-lived particles (K0S or Λ), photon conversions, or hadronic interactions with the detector are rejected. The algorithm iterates over all of the two-track vertices, trying to fit a single secondary vertex. At each iteration the fit is evaluated using a χ² test, and the track with the largest χ² is removed. The fit continues until the secondary vertex has an acceptable χ² and the invariant mass of the track system associated with the vertex is less than 6 GeV. Discriminating variables are used as inputs to the higher-level taggers. When used as a stand-alone b-tagging algorithm, the secondary-vertex mass, the ratio of the sum of the transverse momenta of tracks associated with the secondary vertex to the sum of the pT of all tracks in the jet (Σ pT(SV tracks)/Σ pT(all tracks)), and the number of two-track vertices are used to determine probability density functions for each jet-flavour hypothesis. The probabilities are used as inputs to log-likelihood-ratio discriminants that separate b-jets from light-flavour jets, c-jets from light-flavour jets, and b-jets from c-jets.
• JetFitter: Exploits the topology of the b/c-hadron decay chain (PV → b → c) inside jets and uses a Kalman filter to find a common line consistent with the primary, b-hadron decay, and c-hadron decay vertices [57]. The b-hadron flight path and vertex positions are approximated, and with this approach it is possible to resolve the b- and c-hadron decay vertices, even in cases where there is only a single track associated with them.
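The iterative χ²-based track removal described above for SV1 can be illustrated with a much-simplified one-dimensional stand-in, in which the full Kalman/Billoir vertex fit is replaced by an uncertainty-weighted mean of track positions; the χ² cut value here is illustrative only.

```python
def fit_vertex(tracks):
    """Least-squares 'vertex' fit: weighted mean of (position, sigma) tracks,
    returning the fitted position and each track's chi-squared contribution."""
    weights = [1.0 / sigma**2 for _, sigma in tracks]
    v = sum(z * w for (z, _), w in zip(tracks, weights)) / sum(weights)
    chi2 = [((z - v) / sigma) ** 2 for z, sigma in tracks]
    return v, chi2

def iterative_vertex(tracks, chi2_cut=9.0):
    """Refit after removing the worst track until every contribution is
    below chi2_cut (illustrative stand-in for the SV1 iteration)."""
    tracks = list(tracks)
    while len(tracks) >= 2:
        v, chi2 = fit_vertex(tracks)
        worst = max(range(len(tracks)), key=chi2.__getitem__)
        if chi2[worst] < chi2_cut:
            return v, tracks
        del tracks[worst]
    return None, tracks
```

Outlier tracks, such as those from pile-up or material interactions, are shed one per iteration, so the surviving set is self-consistent with a single vertex hypothesis.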
The final b-tagging discriminant used during Run 1 and 2015 was based on the output of the IP3D and SV1 taggers, which were combined into a final weight referred to as 'IP3D+SV1'. From 2016 onward it was possible to deploy the MV2 b-tagging algorithm [11], developed for offline flavour-tagging in ATLAS, in the online environment. MV2 combines the outputs of the low-level IP2D, IP3D, SV1 and JetFitter algorithms into a boosted decision tree (BDT).
The transverse and longitudinal track impact parameters and their corresponding significances are key inputs to all of the b-tagging algorithms described above and are shown in Figure 8 for light-flavour jets and b-jets, when computed online and offline. Distributions of selected jet-level variables related to the IP3D, SV1 and JetFitter b-tagging algorithms are shown in Figure 9. The distributions are shown for jets with pT > 55 GeV and |η| < 2.5 in simulated tt̄ events. Good separation between light-flavour jets and b-jets is observed. The differences between the HLT and offline distributions clearly motivate the need to reoptimise and retrain the multivariate algorithms for the online environment, and substantially improved performance is observed with dedicated reoptimisations.
The MV2 algorithms (and the low-level algorithms that form the inputs to MV2) were retrained for the online environment on simulated tt̄ events, using HLT tracks and b-tagging information, to provide a discriminant to assess whether an individual jet arises from the hadronisation of a bottom quark, a charm quark, or a light-flavour quark or gluon. Tunings were performed using the same procedures adopted for offline flavour-tagging in ATLAS [11], further harmonising the procedures used in the trigger with those used offline. In 2016 a version of this tagger was used that was trained to identify b-jets using a background sample composed of 80% light-flavour jets and 20% c-jets, and is denoted 'MV2c20'. In 2017 and 2018 the fraction of c-jets in the background sample was reduced to 10% to mirror the evolution of the offline b-tagging [58], and the algorithm is therefore denoted 'MV2c10'.
Working points for the MV2 algorithms were designed that mirror the offline working points providing 60%, 70%, 77%, and 85% tagging efficiencies for b-jets in the simulated tt̄ sample. In addition, working points providing selection efficiencies of 40% and 50% for b-jets were included in order to provide triggers with lower jet pT thresholds. Requiring that jets are b-tagged at the trigger level means that the jet pT thresholds can be lowered significantly. For example, including the requirement that jets pass the MV2c10 tagger at a 40% (70%) working point allows the pT threshold of single-b-jet triggers to be reduced to 225 (300) GeV, from a threshold of 420 GeV when no b-tagging requirements are applied. Requiring more than one b-tagged jet in a trigger allows jet pT thresholds to be lowered even further. Four-jet triggers required pT thresholds of 115 GeV when no b-tagging requirements were applied, but these thresholds could be reduced to as low as 35 GeV when two of the jets are required to be b-tagged (details of these triggers are provided in Section 7). Optimising the software throughout Run 2 in order to reduce the CPU cost of the b-jet triggers meant that the rates, rather than the CPU processing time, were always the determining factor for the pT thresholds of triggers used for physics analysis.
MV2 was superseded in 2019 by the DL1r algorithm (described in Ref. [53]), which uses a deep feed-forward neural network to provide a multidimensional output corresponding to the probabilities for a jet to be a b-jet, c-jet, or light-flavour jet, and is now the default for offline physics analyses in ATLAS. This algorithm was not available in time to be used in the online environment, but provides the baseline against which the b-jet trigger performance is measured (as described in Section 8).

b-jet trigger performance
The performance of the b-jet triggers is quantified by the probability of tagging a b-jet (the b-jet efficiency, ε_b) and by the rejection power against c-jets and light-flavour jets, where the rejection is defined as the inverse of their efficiency to pass the b-tagging requirements. Jets are categorised as b-jets, c-jets or light-flavour jets following the particle-level definitions described in Section 3. Figure 10 shows the expected performance of the b-jet trigger in terms of the light-flavour jet and c-jet rejection of the MV2c20 tagger, together with the performance of the IP3D+SV1 tagger that was used during Run 1. The tuning is performed on simulated tt̄ events with √s = 13 TeV. Jets are required to have pT > 55 GeV and |η| < 2.5. An order of magnitude improvement in light-flavour jet rejection for the same b-jet selection efficiency was achieved in 2016 compared with 2012 (Run 1). This performance increase is attributed to the installation of the insertable B-layer for Run 2, in conjunction with all of the software and algorithmic improvements described in this work. An additional factor of ∼1.5 improvement in light-flavour jet rejection was attained in 2017 and 2018 by further optimising the use of the MV2 algorithm in the HLT. These improvements made it feasible to operate triggers with lower pT thresholds and/or higher-efficiency working points than would have been affordable otherwise. The baseline configuration of b-jet triggers in 2018 used the same tuning of MV2c10 that was deployed during the 2017 data-taking period. This was possible because of the general similarity between the running conditions in these two years. However, the b-jet trigger menu included several triggers that used a dedicated tuning of MV2c10 intended to improve the performance of the b-tagging algorithms at high pT (e.g. pT ≳ 250 GeV), where it becomes harder to identify b-jets.
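For concreteness, a single efficiency/rejection point of the kind plotted in such performance curves can be computed from tagger output scores as in the minimal sketch below; the scores and the threshold are illustrative.

```python
def efficiency_and_rejection(b_scores, light_scores, threshold):
    """b-jet efficiency and light-flavour jet rejection (inverse mistag
    rate) for a single cut on the tagger discriminant."""
    eff_b = sum(s > threshold for s in b_scores) / len(b_scores)
    mistag = sum(s > threshold for s in light_scores) / len(light_scores)
    rejection = 1.0 / mistag if mistag > 0 else float("inf")
    return eff_b, rejection
```

Scanning the threshold over the full discriminant range traces out the efficiency-versus-rejection curve from which working points are chosen.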
Following the same approach as is used for offline b-tagging in ATLAS, the tt̄ sample used for the baseline tuning was interleaved with a Z′ → qq̄ sample, which has a much larger proportion of jets at high pT and therefore increases the attention of the BDT to these jets during training. The heavy vector boson (Z′) is generated with a mass of 1 TeV and a flat pT spectrum, and decays at equal rates into light-, c-, and b-flavour quark-antiquark pairs. This approach, referred to as the 'hybrid tuning', provides the BDT with consistent exposure to both high- and low-pT jets.
The performance of the baseline 2018 tuning (which uses only tt̄ simulation in the training) and the hybrid tuning is compared in Figure 11. Little difference is observed between the online 2018 baseline and hybrid approaches in a sample dominated by low-pT jets (tt̄). However, for the sample dominated by high-pT jets (Z′ → qq̄) the online hybrid tuning provides better rejection against light-flavour jets.

b-jet trigger evolution during Run 2
Several different types of b-jet triggers were operational throughout Run 2, where the pT thresholds and b-tagging requirements evolved in response to the increasing instantaneous luminosity during this time. Different combinations of jet and b-jet multiplicities, with different pT thresholds, with and without the GSC calibration (described in Section 4.3), and different b-tagging algorithms and working points were used to provide optimal coverage for the different analyses using b-jet triggers within the allocated trigger acceptance rate. Triggers that place requirements on the scalar sum of the pT of hadronic objects in the event (HT) were also provided. This set of b-jet triggers was designed to provide optimal acceptance for processes targeted in current analyses, as well as to be general enough to provide good acceptance for yet-to-be-considered physics analyses.
The parameters defining the b-jet triggers - including the (b-)jet multiplicity, the pT and η requirements, and the b-tagging algorithm and working point(s) - are summarised for single-b-jet triggers in Table 2, di-b-jet triggers in Table 3, jet+di-b-jet triggers with asymmetric pT thresholds in Table 4, di-b-jet+di-jet triggers in Table 5, and di-b-jet+HT triggers in Table 6.

Table 2: Details, by year, of the lowest-threshold unprescaled single-b-jet triggers. The minimum (b-)jet pT, |η|, and b-tagging requirements are specified for each item. HLT b-jets are required to be within |η| < 2.5. pT thresholds denoted * indicate that the GSC was applied and the value quoted is the calibrated pT threshold.
Year  L1 jet  HLT b-jet  1 × pT > 360 GeV*, MV2c10, ε_b = 77%

Table 3: Details, by year, of the lowest-threshold unprescaled di-b-jet triggers. The minimum (b-)jet pT, |η|, and b-tagging requirements are specified for each item. HLT b-jets are required to be within |η| < 2.5. pT thresholds denoted * indicate that the GSC was applied and the value quoted is the calibrated pT threshold.
Year  L1 jet  HLT b-jets

Triggers targeting specific physics processes involving b-jets were also provided. Triggers requiring a di-b-jet plus missing transverse momentum (ETmiss) signature were designed to efficiently select pair-produced bottom squarks [7] and are detailed in Table 7. Higgs bosons produced via VBF and decaying into a pair of b-quarks were also able to be efficiently selected at trigger level through the use of dedicated triggers that require jets with a large invariant mass in the forward region of the detector. Additionally, some triggers required the presence of a photon in the event (where the photon may be radiated either from a charged weak boson or from one of the scattering initial-state quarks that subsequently showers into a jet) [4,5]. The photon requirements significantly reduce the contribution from large multijet backgrounds and allow lower pT requirements at the trigger level to be placed on the b-jets produced by the Higgs boson decay. The VBF plus b-jet (plus photon) triggers are summarised in Table 8.

Table 4: Details, by year, of the lowest-threshold unprescaled triggers requiring a high-pT jet plus two b-jets signature, such as might arise from a process where a particle decaying into two b-jets is accompanied by a jet from initial- or final-state radiation. No b-tagging requirements are applied to this 'additional jet' in the event. The minimum (b-)jet pT, |η|, and b-tagging requirements are specified for each item. HLT b-jets are required to be within |η| < 2.5. pT thresholds denoted * indicate that the GSC was applied and the value quoted is the calibrated pT threshold.
Year  L1  HLT b-jets  Additional jet

Table 5: Details, by year, of the lowest-threshold unprescaled triggers requiring two b-tagged jets plus an additional two jets with no b-tagging requirements. The minimum (b-)jet pT, |η|, and b-tagging requirements are specified for each item. HLT b-jets are required to be within |η| < 2.5. Additional HLT jets are accepted up to |η| < 3.2 but in practice are mostly limited to be within |η| < 2.5 because of the L1 requirements. pT thresholds denoted * indicate that the GSC was applied and the value quoted is the calibrated pT threshold. The b-tagging requirements were tightened to use a 60% efficiency working point for part of the data-taking during 2016.
Year  L1  HLT b-jets  Additional jets

Table 6: Details, by year, of the lowest-threshold unprescaled triggers giving a two-b-jet plus HT signature. The minimum (b-)jet pT, |η|, and b-tagging requirements are specified for each item. HLT b-jets are required to be within |η| < 2.5. pT thresholds denoted * indicate that the GSC was applied and the value quoted is the calibrated pT threshold. The HT is calculated at L1 by summing the pT of the leading five jets with |η| < 2.1, and at the HLT by summing the pT of all jets with pT > 30 GeV and |η| < 3.2.
Year  L1  HLT

Table 8: Details, by year, of the lowest-threshold unprescaled triggers giving a VBF plus b-jet signature. Additionally, some triggers require the presence of a photon in the event, exploiting the unique phenomenology of the VBF process to help reject background processes and allow the use of lower jet pT requirements at the trigger level. In these cases the photon is used to seed the trigger at L1, by requiring that the summed energy of deposits in the electromagnetic calorimeters (denoted 'EM') exceeds some minimum ET and fulfils isolation requirements. The photon identification (ID) and isolation working points used are described in Ref. [59]. The minimum (b-)jet pT, |η|, and b-tagging requirements are also specified for each item. HLT b-jets are required to be within |η| < 2.5. Some triggers additionally require that a pair of jets in the event satisfies a minimum requirement on its invariant mass (mjj). pT thresholds denoted * indicate that the GSC was applied and the value quoted is the calibrated pT threshold. Year

Calibrations
The trigger is a crucial step in the event selection of any physics analysis, so its performance must be understood and calibrated. This section describes the b-jet trigger efficiency measurements made using data collected between 2016 and 2018. In physics analyses, the b-jet trigger is always used in tandem with offline b-tagging, which is calibrated without placing any requirements on the b-jet trigger. A 'conditional' b-jet trigger efficiency is therefore calculated relative to the offline b-tagging efficiency and defined as the fraction of b-jets that are b-tagged offline and match an HLT jet, that also pass the b-tagging requirements in the HLT. This conditional b-jet trigger efficiency is measured in data and evaluated in simulated tt̄ events. Simulation-to-data scale factors (hereinafter referred to simply as scale factors) are derived to correct for any deviation of the b-jet trigger performance in MC simulation from that observed in data. The scale factors are applied only to simulated events and are designed to be applied in addition to the offline b-tagging scale factors [11]. The b-jet trigger efficiency and scale factors are measured for all combinations of offline and online b-tagging working points, and only a few representative points are included here.
Historically, two methods have been used to calibrate the b-jet triggers. A geometrical matching method similar to that described in Ref. [58] was used to provide preliminary calibrations for Run 2 data analysis, but it is now superseded by the likelihood-based method that is described here and has smaller associated uncertainties. The same likelihood-based method is also used to calibrate the offline reconstruction and identification of b-jets in ATLAS and is described fully in Ref. [11]. The results presented here closely follow the analysis selection and method used for the offline b-tagging calibration, and only the most important features of the likelihood-based calibration and its adaptation to the online environment, together with the results, are described.
Scale factors to correct for any MC-simulation mismodelling of the rate for light-flavour jets and c-jets to be misidentified as b-jets are provided for offline b-tagging [60,61]. Measuring the equivalent light-flavour and c-jet scale factors in the trigger is beyond the scope of this paper, but the impact of these scale factors is expected to be small in physics analyses that use b-jet triggers, where background processes are typically estimated using data-driven techniques and the signal processes, which are modelled using simulation, have a negligible fraction of non-b-jets.

Event selection
Top quarks are produced in abundance at the LHC and, since the branching fraction of the top-quark decay into a W boson and a b-quark is nearly 100%, selecting events with pair-produced top quarks can provide a large data sample of b-jets that can be used to study the b-jet trigger efficiency. In order to reduce the contributions from multijet and W/Z+jets backgrounds, and maximise the purity of the selection, the offline selection requires events to have exactly one electron and one muon with opposite-sign charge and satisfying tight identification criteria. Furthermore, the electron and muon provide a signature that can be used to select events at the trigger level without using a b-jet trigger, such that no bias is introduced from online b-tagging. These 'single-lepton b-performance triggers' (detailed in Table 9) were designed and run specifically in order to study the performance of the b-jet triggers, and require the presence of an electron or muon, plus two additional jets. The b-jet trigger software is run on the jets and all associated b-tagging information is kept, but no selection is made on the online b-tagging weight of the jets. The triggers used for these measurements were run unprescaled, but in 2016 they were only run for part of the year and the integrated luminosity of that dataset is 13.1 fb−1.

Table 9: Details of the triggers used to select a data sample to perform the calibrations. Electrons (muons) are required to be isolated and pass a 'Tight' [59] ('Medium' [62]) identification working point. Jet pT thresholds denoted * indicate that the GSC was applied and the value quoted is the calibrated pT threshold. All triggers were run unprescaled, but in 2016 they were only run for part of the year and the integrated luminosity of that dataset is 13.1 fb−1.

Year
Lepton  Jets

Events are required to pass the following selection: • Pass one of the single-lepton b-performance triggers detailed in Table 9.
• Contain an offline muon with pT ≥ 28 GeV and |η| < 2.4, satisfying the 'Tight' identification and isolation requirements [63], and with no jet with three or more associated tracks within ΔR of 0.4.
• Leptons are required to have |d0|/σ(d0) less than 5 (3) for electrons (muons) and |z0 sin θ| less than 0.5 mm. These requirements ensure the selected leptons are prompt and associated with the primary vertex, defined as the collision vertex with the largest sum of squared track pT, as described in Section 5.1. • The triggered lepton must match an offline electron or muon candidate.
- Not within ΔR = 0.2 of an electron.
- Jets with fewer than three associated tracks must not be within ΔR = 0.4 of a muon.
- Jets with pT < 120 GeV are required to pass the 'Medium' working point of the Jet Vertex Tagger (JVT) algorithm [65], which is used to reduce the number of jets with large energy fractions from pile-up collision vertices. The JVT efficiency for jets originating from the hard scattering is 92% in simulation.
After applying these requirements, approximately 90% of the selected events in simulation contain two real b-jets. Light-flavour and c-jet backgrounds are estimated using MC simulation and included in the likelihood fit, following the procedure described in Ref. [11]. Fake-lepton backgrounds are estimated from simulation and are negligible.

Calibration based on likelihood-based matching
Events passing the selection described in Section 8.1 are categorised according to the offline jet pT and the output of the online and offline b-tagging identification algorithms. Simulated events are further categorised by the particle-level label of the jets (as described in Section 3). A maximum-likelihood fit is then performed to extract the b-tagging efficiency from data, as a function of jet pT.
As in the offline measurement [11], a general extended binned log-likelihood approach is used for the extraction of the b-tagging efficiency, adapted to use only one signal region, i.e. where both jets pass the b-tagging requirements. Up to constant terms, this likelihood function can be written as

ln L(Θ) = −ν_tot(Θ) + Σ_{i=1}^{N} n_i ln ν_i(Θ),

where ν_tot is the total number of expected events, Θ = (Θ_1, ..., Θ_k) is the list of parameters to be estimated, including the parameters of interest and the nuisance parameters, and ν_i (n_i) is the expected (observed) number of events in bin i, where N bins are considered in total.
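A minimal numerical sketch of an extended binned log-likelihood of this form (dropping the Σ ln n_i! terms, which are constant in the fit parameters) is given below, under the simplifying assumption that the expected yields ν_i are supplied directly rather than parametrised in terms of efficiencies; the bin contents are illustrative.

```python
import math

def log_likelihood(nu, n):
    """Extended binned log-likelihood, ln L = -nu_tot + sum_i n_i ln nu_i,
    up to terms that do not depend on the fit parameters."""
    nu_tot = sum(nu)
    return -nu_tot + sum(ni * math.log(nui) for ni, nui in zip(n, nu))
```

By construction the likelihood is maximised when the expected yield in every bin equals the observed one, which is what the fit exploits to pull the efficiency parameters towards the data.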
Events are divided into five categories based on offline b-tagging working points. The first category does not apply any offline b-tagging requirements, while the remaining four are based on the offline b-tagging working points corresponding to efficiencies of 85%, 77%, 70% and 60% for true b-jets. For each category, events are divided into bins of offline jet pT in order to account for any pT dependence. The conditional b-jet trigger efficiency, ε_Trig|Off, is defined as the efficiency of a jet to be tagged as a b-jet by the online b-tagging algorithm if it has also passed the offline b-tagging. Here (and elsewhere), 'Off' denotes the offline b-tagging, while 'Trig' denotes the online b-tagging.
In order to evaluate this conditional efficiency, only events in which both jets are already tagged by the offline b-tagging are selected, and the efficiency of the online b-tagging in these events is evaluated. The ratio of the conditional efficiency measured in data to that evaluated in MC simulation is the conditional scale factor, defined as SF_Trig|Off = ε_Trig|Off,data / ε_Trig|Off,MC.
The overall efficiency for a jet to pass both the trigger and offline b-tagging, ε_Trig∧Off, is obtained for physics analysis by multiplying the conditional efficiency, ε_Trig|Off, by the corresponding offline b-tagging efficiency, ε_Off (presented in Ref. [11]). As before, the scale factors are defined as the ratio of the efficiencies measured in data and evaluated in simulation.
Scale factors can also be derived in order to correct for b-jets that have failed either the online or offline b-tagging requirements (or both). The efficiencies of a given jet to satisfy a given combination of passing or failing the online and offline b-tagging can be computed for all regions using the online-only (ε_Trig), offline-only (ε_Off), and conditional (ε_Trig|Off) efficiencies, and employing Bayes' theorem. The efficiencies in each region can therefore be defined in the following way:

(i) A jet that fails the trigger b-tagging requirements and passes the offline b-tagging requirements: (1 − ε_Trig|Off) ε_Off.

(ii) A jet that passes the trigger b-tagging requirements and fails the offline b-tagging requirements: ε_Trig − ε_Trig|Off ε_Off.

(iii) A jet that fails the trigger b-tagging requirements and fails the offline b-tagging requirements: 1 − ε_Trig − ε_Off + ε_Trig|Off ε_Off.

In all cases the scale factors are subsequently defined as the ratio of the efficiencies measured in data and evaluated in simulation.
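This combination of the three measured efficiencies amounts to elementary probability algebra; a minimal sketch (the efficiency values in the usage are illustrative):

```python
def region_efficiencies(eff_trig, eff_off, eff_trig_given_off):
    """Probabilities of the four online/offline b-tag outcomes for a b-jet,
    derived from the online-only, offline-only and conditional efficiencies."""
    pass_both = eff_trig_given_off * eff_off
    fail_trig_pass_off = (1.0 - eff_trig_given_off) * eff_off
    pass_trig_fail_off = eff_trig - pass_both
    fail_both = 1.0 - eff_trig - eff_off + pass_both
    return pass_both, fail_trig_pass_off, pass_trig_fail_off, fail_both
```

Since the four outcomes are exhaustive and mutually exclusive, their probabilities sum to one, which provides a useful consistency check when the scale factors for all regions are applied together.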

Results
The conditional b-tagging efficiencies and the corresponding scale factors as a function of offline jet pT are shown in Figures 12, 13, and 14 for 2016, 2017, and 2018, respectively. Efficiencies and scale factors are derived for all combinations of the MV2 algorithm working points used online (40%, 50%, 60%, 70%, 77%, 85%) and the DL1r algorithm working points used offline (60%, 70%, 77%, 85%). The b-tagging conditional efficiency measurements were carried out separately for each year, and consistent results were observed over time. The results are shown for two representative combinations (60% and 85% efficiency working points for both the online and offline b-tagging algorithms). The conditional efficiency obtained using the equivalent online and offline working points ranges from approximately 85% in the lowest pT bins (33-45 GeV) to approximately 98% for higher-pT jets. The conditional efficiency measured in data falls to ∼80% for jets with pT > 200 GeV that were recorded in 2016 data and are required to pass the 60% efficiency working point both online and offline, as shown in Figure 12(a). It is noted that the efficiency measured in this region in data is lower than the MC prediction. Similar effects are observed for other combinations of working points in 2016 data, with the efficiencies being lowest for the tightest combinations of working points and recovered for the loosest combinations, for example when the 85% efficiency working point is used both online and offline, as in Figure 12(c). The scale factors have values consistent with unity in most other regions of jet pT and in data taken in other years, illustrating the generally good modelling of the online b-tagging performance, although differences in the scale factors of up to ∼10% are observed in some bins.

Uncertainties in the measurements are calculated following the same procedures as described in Ref. [11], and any additional sources of uncertainty specific to the trigger were found to be negligible. The total uncertainty in the measurement ranges from < 1% to about 5% across the full jet pT range. Modelling uncertainties are present in both the numerator and the denominator of the conditional efficiency and so tend to cancel out, leaving the statistical uncertainty to dominate the measurement. Few data events satisfy all of the selection criteria described in Section 8.1 at very high jet pT, and the statistical uncertainties associated with the results are largest in this region. For the online-only efficiencies with the tightest working points, the scale of the systematic uncertainty approaches that of the statistical uncertainty. In these cases, the largest systematic uncertainty comes from the modelling of top-quark events, in particular the impact of using a different parton shower and hadronisation model for simulated tt̄ events. This uncertainty was evaluated by comparing the nominal tt̄ sample with another event sample configured with the same setup to produce the matrix elements, but interfaced with Herwig 7.04 [66,67], using the H7UE set of tuned parameters [67] and the MMHT2014 PDF set [68]. All other systematic uncertainties have a very small impact. A method for reducing the total number of uncertainties while preserving the bin-by-bin correlations is provided for use in physics analyses by performing an eigenvector decomposition. Versions of the scale factors that have been smoothed in jet pT are also provided in order to prevent distortions in the variables of interest induced by the application of the scale factors. Both the eigenvector decomposition and the smoothing procedure are applied using the method described in Ref. [58].
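As a generic sketch of the eigenvector-decomposition technique (not the actual implementation of Ref. [58]): a bin-to-bin covariance matrix of the scale-factor uncertainties can be decomposed into orthogonal variations whose outer products reproduce the covariance, so a set of correlated uncertainties can be propagated as independent up/down shifts.

```python
import numpy as np

def eigenvariations(cov, tol=1e-12):
    """Decompose a symmetric covariance matrix into eigenvariations
    v_k = sqrt(lambda_k) * e_k, keeping only numerically positive modes.
    The sum of the outer products v_k v_k^T reproduces the covariance."""
    vals, vecs = np.linalg.eigh(np.asarray(cov, dtype=float))
    return [np.sqrt(lam) * vecs[:, k] for k, lam in enumerate(vals) if lam > tol]
```

Each variation is applied to the nominal scale factors as an independent shift; discarding modes with negligible eigenvalues is what reduces the total number of uncertainty components while preserving the bin-by-bin correlations.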
Conditional efficiencies and scale factors are also provided for jets b-tagged offline with the MV2c10 algorithm, using the same method, but are not presented in this work, as the MV2c10 algorithm is now superseded by DL1r. As expected, the conditional efficiencies are up to a few percent higher, and the uncertainties are slightly reduced, for the tightest combinations of working points when MV2c10 rather than DL1r is used offline, due to the increased correlation between the online and offline b-tagging algorithms. Any decrease in the degree of correlation between the taggers when moving from MV2c10 to DL1r for offline b-tagging is more than compensated for in analyses by the improved performance that DL1r offers.
The b-jet trigger conditional efficiency as a function of pile-up is shown for data and simulated tt̄ events in

Muon-jet triggers
Approximately 20% of b-jets contain a muon from the decay chain of the b-hadron. These muons are typically soft and produced at small angles relative to the axis of the jet (typically within ΔR = 0.5). The low pT of these leptons, plus the additional hadronic activity around them, means that they cannot be triggered on using the standard ATLAS lepton triggers [15], which include isolation requirements for all but the highest-pT items in order to reject fake-lepton backgrounds. Dedicated triggers are therefore designed to select low-pT muons that are geometrically matched to a jet - a 'muon-jet'. Requiring the presence of a muon-jet in the event increases the rejection power against light-flavour jet backgrounds and allows these semileptonic b-jet triggers to reach lower in jet pT than the standard b-jet triggers.
Muon-jet triggers are used to provide a sample of b-jet-enriched data used to calibrate the b-tagging algorithms used offline, and also have the potential to enhance the acceptance for processes containing a large number of b-jets and/or ones with low pT (described in Section 9.2). They also provide the only way to select events containing b-jets during lead-ion collision runs, where events typically have a large number of jets and a high track multiplicity, and running the standard b-jet triggers becomes unfeasible due to the high rates and the high CPU cost of running tracking on all jets.

Muon-jet triggers for heavy-ion collisions
One of the open questions regarding the quark-gluon plasma (QGP) created in heavy-ion (HI) collisions at the LHC concerns the energy-loss mechanisms that partons experience while traversing the hot and dense QCD medium [69]. Heavy quarks are produced at the early stages of the ion collisions in scattering processes that involve large momentum transfers, Q, so their formation time, of the order of 1/Q < 0.1 fm/c, is much smaller than the lifetime of the QGP, estimated to be 10-11 fm/c at the LHC [70]. The energy loss of heavy quarks in the QGP is predicted to be smaller than that of light-flavour quarks, due to the suppression of gluon radiation at small angles, the so-called 'dead-cone' effect [9].
In 2018, ATLAS collected 1.42 nb−1 of data from collisions of lead ions with a nucleon-nucleon centre-of-mass energy √s_NN = 5.02 TeV. Dedicated triggers were necessary not only to fulfil the specific physics requirements, but also to accommodate the different detector environment during Pb+Pb data-taking, resulting from the intrinsic geometry of the nuclear overlap, which leads to large variations of both track multiplicity and energy density compared with pp runs. During Pb+Pb data-taking it would be prohibitive to run the b-jet triggers developed for pp collisions, owing to the high rates and large CPU cost of triggering in the relevant jet pT range. Muon-jet triggers that require a muon and a jet geometrically matched within ΔR < 0.5 are used instead to provide a sample of data events enriched in semileptonic b-hadron decays.
Several different muon-jet triggers imposing various combinations of muon pT and jet pT thresholds were provided. In most cases these were seeded at L1 by a single muon with pT > 4 or 6 GeV, although in one instance an L1 jet was additionally required. In the HLT, a muon with pT > 4 or 6 GeV within ΔR = 0.5 of a jet with pT > 30, 40, 50, or 60 GeV was required. Jets were reconstructed using the anti-kt algorithm with radius parameter R = 0.2, 0.3 or 0.4, and corrected for the underlying event produced in heavy-ion collisions, as detailed in Ref. [71]. The list of triggers was designed to be optimal within the allocated trigger acceptance rate of approximately 80 Hz and is summarised in Table 10. In order to accommodate the increasing instantaneous luminosity during the data-taking period and ensure that the output rate remained within the rate allocation, the triggers that required a muon with pT > 4 GeV and applied no additional jet requirements at L1 were prescaled for some runs. The prescale factors were applied coherently to all of these triggers, with values ranging from 1.0 (i.e. unprescaled) to 1.307. The average prescale factor across the entire Pb+Pb data-taking period in 2018 was 1.065.
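The effect of prescaling can be made concrete with a small sketch: a prescale factor P keeps on average 1 in P events that pass the trigger, so recorded events are weighted by P, and a run-period average prescale is naturally luminosity-weighted. The function names and the (luminosity, prescale) pair structure below are illustrative assumptions, not the ATLAS bookkeeping format.

```python
def average_prescale(runs):
    """Luminosity-weighted average prescale over a set of runs.

    `runs` is a list of (integrated_luminosity, prescale) pairs
    (illustrative structure, not the real trigger database schema).
    """
    total_lumi = sum(lumi for lumi, _ in runs)
    return sum(lumi * ps for lumi, ps in runs) / total_lumi

def event_weight(prescale):
    """A prescale of P keeps ~1 in P accepted events, so each recorded
    event carries weight P when rates or yields are reconstructed."""
    return prescale
```

For example, two runs of equal integrated luminosity taken unprescaled (1.0) and at the maximum prescale quoted in the text (1.307) would average to about 1.15.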
The HLT conditional muon-jet trigger efficiency is defined as the number of offline muon-jet objects satisfying the muon-jet trigger requirements, divided by the total number of offline muons that fired a single-muon trigger:

    ε_cond = N^{μj}_{Trig+Off} / N^{μ}_{Trig+Off},    (1)

where N^{μj}_{Trig+Off} is the number of muon-jet objects passing the HLT and offline muon-jet selections, and N^{μ}_{Trig+Off} is the number of muons passing the HLT and offline muon requirements.
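As a minimal numerical sketch of Eq. (1), the conditional efficiency is a ratio of the two event counts, and since the numerator is a subset of the denominator a simple binomial uncertainty can be attached. The function name and the uncertainty treatment are illustrative assumptions, not the statistical treatment used in the paper.

```python
import math

def conditional_efficiency(n_mujet_pass, n_muon_pass):
    """Conditional muon-jet efficiency in the spirit of Eq. (1):
    muon-jets passing the HLT and offline muon-jet selections, divided by
    muons passing the HLT and offline muon requirements.

    Returns (efficiency, naive binomial uncertainty); the numerator is
    assumed to be a subset of the denominator.
    """
    eff = n_mujet_pass / n_muon_pass
    err = math.sqrt(eff * (1.0 - eff) / n_muon_pass)
    return eff, err
```

With 90 matched muon-jets out of 100 triggering muons this gives an efficiency of 0.90 with a ±0.03 binomial uncertainty.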
The events passing the muon-jet trigger are an exact subset of events that pass the single-muon trigger with the same pT threshold, so the absolute muon-jet trigger efficiency can be defined as the product of the conditional trigger efficiency given in Eq. (1) and the single-muon trigger efficiency (ε_μ), which was measured using the method described in Ref. [72]:

    ε_abs = ε_cond × ε_μ.    (2)

The performance of the muon-jet trigger is constrained by the limited acceptance of the L1 trigger, which is based on the information received from the calorimeters and the muon trigger chambers. The geometric coverage of the latter is ∼99% in the endcap regions (1.05 < |η| < 2.40) and ∼80% in the barrel region (|η| < 1.05) [62]. The measurements are therefore made separately in the two pseudorapidity ranges. The efficiency is also measured for different categories of collision centrality, in order to account for a possible decrease in performance due to the characteristics of Pb+Pb collisions. The centrality of a collision is assessed on an event-by-event basis using the transverse energy deposited in the forward calorimeters, ΣE_T^FCal, in 3.2 ≤ |η| < 4.9. The Glauber MC model [73] is used to obtain a correspondence between ΣE_T^FCal and the sampling fraction of the total inelastic Pb+Pb cross-section, allowing centrality percentiles to be defined [74]. In this analysis, central collisions are defined as those in the 0-40% centrality interval, where the contribution from underlying-event effects is largest. Peripheral collisions are those within the 40-80% centrality interval.
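Once the Glauber-model mapping has fixed the ΣE_T^FCal values corresponding to centrality-percentile boundaries, classifying an event reduces to a threshold lookup. The sketch below assumes a pre-calibrated table of boundaries; the numerical edge values are placeholders for illustration only and do not come from the ATLAS calibration.

```python
import bisect

# Hypothetical calibrated ΣE_T^FCal boundaries (TeV) between centrality
# percentile classes, ordered from peripheral (low ΣE_T) to central
# (high ΣE_T). Real boundaries come from the Glauber-model mapping;
# these numbers are placeholders.
CENTRALITY_EDGES_TEV = [0.06, 0.3, 0.9, 2.0]
CENTRALITY_LABELS = ["80-100%", "60-80%", "40-60%", "20-40%", "0-20%"]

def centrality_class(fcal_sum_et_tev):
    """Classify an event by its forward-calorimeter ΣE_T: larger ΣE_T
    corresponds to a more central (smaller-percentile) collision."""
    i = bisect.bisect_right(CENTRALITY_EDGES_TEV, fcal_sum_et_tev)
    return CENTRALITY_LABELS[i]

def is_central(fcal_sum_et_tev):
    """'Central' in the sense used in the text: the 0-40% interval."""
    return centrality_class(fcal_sum_et_tev) in ("0-20%", "20-40%")
```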
The performance of muon-jet triggers in which the muon pT threshold is 4 GeV and the muon must be within ΔR = 0.5 of a jet passing a pT threshold of 40, 50 or 60 GeV is presented relative to the single-muon trigger that requires a muon with pT > 4 GeV at L1 and in the HLT. The efficiency of this single-muon trigger was measured in Ref. [72] to be approximately 80% and 85% in the barrel region, for central and peripheral collisions, respectively. This low efficiency is a consequence of the lower acceptance of the L1 trigger. In the endcap region the efficiency is noticeably higher, reaching 97%, and is less sensitive to the centrality of the collision. Figure 16 compares the efficiency of the three muon-jet triggers as a function of the offline jet pT for events passing the single-muon trigger and containing an offline muon with pT > 12 GeV. In peripheral collisions and in the barrel region the efficiency is above 99% for offline jets with pT larger than 46, 59, and 66 GeV (for triggers with 40, 50, and 60 GeV jet pT thresholds, respectively). The efficiency saturates at slightly higher jet pT values in the endcap region. In central collisions the turn-on is slower than in peripheral collisions and full efficiency is reached at higher pT values. This sensitivity to the centrality of the collisions is also observed in inclusive jet trigger efficiency measurements. Figure 17 shows the two-dimensional absolute trigger efficiency, as defined in Eq. (2), for a muon-jet trigger requiring a muon with pT > 4 GeV and a jet with pT > 40 GeV, as a function of the offline muon pT and jet pT. The efficiency of this trigger reaches a maximum for offline jet pT above about 60 GeV but does not reach 100% in most regions. This lower efficiency compared with the conditional efficiency shown in Figure 16, particularly in the barrel region, reflects the inefficiency of the muon trigger.

Muon-jet triggers for proton-proton collisions
Triggers with a similar design and thresholds to those detailed in Table 10 were run prescaled during pp collision data-taking in order to collect a sample of data enriched with bb̄ decays that is used to calibrate the offline flavour-tagging algorithms. In these cases, muon-jet triggers are seeded from either a single-muon or a muon-plus-jet requirement at L1. In the HLT, muons are required to satisfy ΔR(μ, jet) < 0.5 and Δz(μ, jet) < 2 mm (where the z-position of the jet is taken to be the primary-vertex z-position) in order to be considered as 'matched' to a jet.
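For pp running the matching thus combines an angular and a longitudinal criterion. A minimal sketch, again with illustrative function names and a dict-based object representation rather than ATLAS software:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """ΔR = sqrt(Δη² + Δφ²), with Δφ wrapped into [-π, π]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def muon_jet_matched(muon, jet, pv_z_mm, dr_max=0.5, dz_max_mm=2.0):
    """Both pp-collision matching criteria from the text:
    ΔR(μ, jet) < 0.5 and |Δz(μ, jet)| < 2 mm, where the z-position of
    the jet is taken to be the primary-vertex z-position."""
    dr_ok = delta_r(muon["eta"], muon["phi"], jet["eta"], jet["phi"]) < dr_max
    dz_ok = abs(muon["z_mm"] - pv_z_mm) < dz_max_mm
    return dr_ok and dz_ok
```

The Δz requirement rejects muons from pile-up interactions that happen to overlap a jet in ΔR but originate from a different vertex along the beam line.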
Muon-jet triggers to select interesting physics processes were also provided during 2016 data-taking, but were discontinued due to their prohibitively large CPU cost. For these triggers it was desirable to exploit other characteristic features of the process of interest, for example by placing additional requirements on the multiplicity, pT, and b-tagging weight of other jets in the event. In these cases, only jets that failed the matching requirements with the muon were considered for further processing (e.g. b-tagging) by the b-jet trigger software. The muon-jet can therefore form one component of a more complex trigger, for example by requiring that an event contains some combination of muon-jet(s), b-tagged jet(s), untagged (light-flavour) jet(s), or any other object that ATLAS is able to trigger on. These muon-jet triggers have the potential to be beneficial for analyses using pp collision data that have a large b-jet multiplicity (e.g. HH → bb̄bb̄), and/or those with only low-pT b-jets (e.g. H → bb̄ produced via VBF).

Summary
ATLAS has successfully operated b-jet triggers throughout Runs 1 and 2 of the LHC. The b-jet trigger software was completely redesigned during the long shutdown that followed Run 1, was validated during 2015 data-taking, and became fully operational in 2016. The software uses a two-stage approach to improve primary-vertex finding and ensure stability under increasingly harsh pile-up conditions, and deploys state-of-the-art offline b-tagging algorithms in the HLT. These changes, together with improved tracking performance in the trigger and the installation of the insertable B-layer for Run 2, led to significantly improved performance compared with Run 1. Light-flavour jet rejection was improved by an order of magnitude for the same b-jet selection efficiency in 2016 compared with the b-jet triggers used in Run 1. An additional factor of ∼1.5 in light-flavour jet rejection was achieved in 2017 and 2018 by further optimising the use of the MV2 algorithm in the HLT, while simultaneously reoptimising the software to reduce the total CPU processing time by ∼30%. These improvements allowed ATLAS to maintain the pT thresholds and b-tagging working points of the b-jet triggers throughout Run 2, in spite of the increasingly harsh pile-up conditions.
The same likelihood-based method that is used to calibrate the offline b-tagging algorithms in ATLAS was adapted for use with the b-jet triggers for the first time. Conditional efficiencies are measured in data and evaluated in simulation for different combinations of online and offline working points for each year of data-taking (2016-2018). The conditional efficiencies are typically in the range 85%-97%, depending on the combination of working points considered. Good agreement between MC simulation and data is generally observed, and scale factors are provided to correct the simulation to match the data. The use of the likelihood method provides a substantial reduction in uncertainties compared with the geometrical-matching approaches used previously, enabling the conditional efficiencies to be measured with a typical accuracy of a few percent.
Specially designed b-jet triggers were also deployed for the first time during Pb+Pb data-taking in 2018, by adapting the b-jet trigger software to identify semileptonic b-hadron decays by selecting muons geometrically matched to a jet. These triggers reach an efficiency of > 99%, with respect to both the single-muon trigger and the offline requirements, above the jet pT turn-on region, and provide a mechanism to study the flavour dependence of radiative quark energy loss in the quark-gluon plasma, where the busy detector environment made it unfeasible to run the standard b-jet triggers.