Configuration and performance of the ATLAS b-jet triggers in Run 2

Several improvements to the ATLAS triggers used to identify jets containing b-hadrons (b-jets) were implemented for data-taking during Run 2 of the Large Hadron Collider from 2016 to 2018. These changes include reconfiguring the b-jet trigger software to improve primary-vertex finding and allow more stable running in conditions with high pile-up, and implementing the functionality needed to run the sophisticated taggers used by the offline reconstruction in an online environment. These improvements yielded an order of magnitude better light-flavour jet rejection at the same b-jet identification efficiency compared to the performance in Run 1 (2011–2012). The efficiency to identify b-jets in the trigger, and the conditional efficiency for b-jets that satisfy offline b-tagging requirements to also pass the trigger, are measured. Correction factors are derived to calibrate the b-tagging efficiency in simulation to match that observed in data. The associated systematic uncertainties are substantially smaller than in previous measurements. In addition, b-jet triggers were operated for the first time during heavy-ion data-taking, using dedicated triggers developed to identify semileptonic b-hadron decays by selecting events with geometrically overlapping muons and jets.


Introduction
Techniques to identify jets containing b-hadrons (b-jets) are widely used in ATLAS [1], both in searches for new physics and in measurements of Standard Model processes, including properties of the Higgs boson. The ability to select events containing b-jets at the trigger level is crucial when studying or searching for processes containing b-jets, especially those that do not provide any other distinguishing characteristics that are easier to identify, such as high transverse momentum ( p T ) light leptons (electrons or muons) or missing transverse momentum. In particular, for measurements of processes such as H H → bbbb [2,3], H → bb produced via vector-boson fusion (VBF) [4,5], or all-hadronic tt H(H → bb) [6], or for searches for bottom squarks [7] or bφ(φ → bb) [8], efficient b-jet triggers are crucial for the success of the analyses. In heavy-ion collisions, heavy-flavour jets are considered to be an important signature for understanding the flavour dependence of radiative quark energy loss in the quark-gluon plasma [9].
Discriminating a b-jet from charm (c) and light-flavour ((u, d, s)-quark- or gluon-initiated) jets relies on exploiting the distinctive features of b-hadron decays. The identification of b-jets requires precise tracking information in order to accurately reconstruct secondary vertices and measure the impact parameters of tracks relative to the primary vertex. When b-tagging is performed offline, precision tracking information is available for the entire detector, but the CPU requirements of this approach are prohibitively large for the trigger, where the average time for data retrieval and processing per event must not exceed 500 ms. Identifying b-jets in the trigger therefore poses particular challenges, so the software is designed to use the available resources in an optimal way in order to provide the best possible performance.
The b-jet trigger software can be broadly considered to consist of two steps: 1. Identifying the coordinates of the hard-scatter interaction point (primary-vertex finding). 2. Reconstructing secondary vertices and assessing the probability that a given jet originated from a b-hadron decay (b-tagging).
Jets passing the specified transverse energy (E T ) requirements are used as seeds to identify which regions of the detector should be further processed in the trigger. One b-jet trigger can make use of several different jet-E T thresholds, by using all jets with E T > 30 GeV for primary-vertex finding and variable E T thresholds for jets to be evaluated for b-tagging. Jet reconstruction and identification in the trigger is described in Sect. 4.
Two different tracking configurations are used in b-jet triggers and are presented in Sect. 5: a 'Fast Tracking' algorithm for primary-vertex finding, and 'Precision Tracking' for b-tagging. Different track-p T thresholds (e.g. hard tracks for vertexing, softer tracks for b-tagging) are also required.
Offline algorithms are used for primary-vertex finding [10] and b-tagging [11] in order to maximise the correlation between the trigger and the offline reconstruction, since this provides the best overall performance for physics analyses where both components are required. In particular, the use of the same b-tagging algorithms in both the offline and online environments significantly increases the overall efficiency for physics analyses that depend on b-jet triggers because the same events are more likely to be accepted both by the trigger and offline than if different taggers are used. The offline taggers are also the most sophisticated taggers developed by the ATLAS Collaboration and therefore provide the best available signal selection and background rejection. The b-tagging of jets is described in Sect. 6, where the performance of the b-jet triggers is also shown.
ATLAS successfully used b-jet triggers throughout the Run 1 data-taking campaign, and several improvements to the b-jet triggers were implemented during the long shutdown period (2013-2014) to further improve performance for Run 2 (2015-2018) data-taking. The new b-jet triggers were commissioned during 2015, while the Run-1-style b-jet triggers (i.e. those that used the same software and b-tagging algorithms as were used in Run 1 but benefited from other upgrades to the ATLAS detector and trigger system) were the primary triggers for physics analyses using the data taken that year. The new triggers were deployed online as the primary triggers from 2016 onward and these form the focus of this paper. The evolution of the b-jet trigger menu (i.e. triggers that were run online) from 2016 to 2018 is described in Sect. 7.
The efficiency of the b-jet triggers is evaluated in simulation and measured in data using the same likelihood-based method [11] that is used to evaluate the performance of the offline flavour-tagging. This calibration of the b-jet triggers and their performance relative to offline flavour-tagging is described in Sect. 8.
Specially designed b-jet triggers were implemented for running during lead ion (Pb+Pb) collisions provided by the Large Hadron Collider (LHC) [12] in 2018, to preferentially select semileptonic decays of the b-hadrons, characterised by the presence of a low-p T muon matched to a jet. This approach provided a mechanism to study b-jets in Pb+Pb collisions, where the high rates and high CPU cost of running tracking algorithms on all jets meant that it was unfeasible to run the standard b-jet triggers. The muon-jet triggers used during Pb+Pb data-taking are presented in Sect. 9.

ATLAS detector and trigger system
The ATLAS detector at the LHC covers nearly the entire solid angle around the collision point. It consists of an inner tracking detector surrounded by a thin superconducting solenoid, electromagnetic and hadronic calorimeters, and a muon spectrometer incorporating three large superconducting toroidal magnets.
The inner-detector system is immersed in a 2 T axial magnetic field and provides charged-particle tracking in the range |η| < 2.5. The high-granularity silicon pixel detector covers the vertex region and typically provides four measurements per track, the first hit normally being in the insertable B-layer installed before Run 2 [13,14]. It is followed by the silicon microstrip tracker which usually provides eight measurements per track. These silicon detectors are complemented by the transition radiation tracker (TRT), which enables radially extended track reconstruction up to |η| = 2.0. The TRT also provides electron identification information based on the fraction of hits (typically 30 in total) above a higher energy-deposit threshold corresponding to transition radiation.
The calorimeter system covers the pseudorapidity range |η| < 4.9. Within the region |η| < 3.2, electromagnetic calorimetry is provided by barrel and endcap high-granularity lead/liquid-argon (LAr) calorimeters, with an additional thin LAr presampler covering |η| < 1.8 to correct for energy loss in material upstream of the calorimeters. Hadronic calorimetry is provided by the steel/scintillator-tile calorimeter, segmented into three barrel structures within |η| < 1.7, and two copper/LAr hadronic endcap calorimeters. The solid angle coverage is completed with forward copper/LAr and tungsten/LAr calorimeter modules optimised for electromagnetic and hadronic measurements, respectively.
The muon spectrometer comprises separate trigger and high-precision tracking chambers measuring the deflection of muons in a magnetic field generated by the superconducting air-core toroids. The field integral of the toroids ranges between 2.0 and 6.0 Tm across most of the detector. A set of precision chambers covers the region |η| < 2.7 with three layers of monitored drift tubes, complemented by cathode-strip chambers in the forward region, where the background is highest. The muon trigger system covers the range |η| < 2.4 with resistive-plate chambers in the barrel, and thin-gap chambers in the endcap regions.
The trigger and data acquisition system is responsible for selecting, processing, and storing interesting events for offline data analysis. Events are selected using a two-stage trigger system which is described in detail in Refs. [15,16]. The first-level (L1) trigger system uses coarse-granularity signals from the calorimeters and the muon system with a 2.5 μs fixed latency and accepts events from the 40 MHz bunch crossings at a rate below 100 kHz. Regions-of-interest (RoIs) from the L1 trigger are used to define 3D spatial regions of the detector. The L1 trigger decision is formed by the Central Trigger Processor (CTP), which is also responsible for applying preventative deadtime, limiting the time between accepted events to be within the detector read-out latency [17]. The peak inefficiency due to this deadtime was approximately 1% in Run 2. When an event is selected by the L1 trigger, data from the front-end electronics of all detector subsystems are read out. After some initial processing and formatting of the data, events are buffered in the ReadOut System (ROS) before being sent to the second stage of the trigger, the high-level trigger (HLT).
The HLT is a software-based trigger, making use of dedicated reconstruction algorithms to further refine the event selection decision process. Only the RoIs selected by the L1 trigger are processed in the HLT, in order to minimise algorithm execution times and computing costs. Events accepted by the HLT are transferred to local storage and exported to the Tier-0 facility at CERN to be fully reconstructed offline. An extensive software suite [18] is used in the reconstruction and analysis of real and simulated data, in detector operations, and in the trigger and data acquisition systems of the experiment.

Datasets and simulated events
The results presented here use data from proton-proton ( pp) collisions with a centre-of-mass energy √ s = 13 TeV, collected during Run 2 of the LHC, between 2016 and 2018.
The b-jet triggers were monitored during ongoing runs as an early-warning mechanism to spot problems and improve data quality. Monitored variables include the jet and track multiplicities, the primary- and secondary-vertex positions, the variables used as inputs to the b-tagging algorithms, and the output discriminants of the taggers. Histograms of these variables were compared with reference histograms using an automated evaluation system and checked by shift personnel in the ATLAS Control Room. A more in-depth evaluation of the data quality was performed offline, soon after the data were recorded, and used as the input to a per-luminosity-block evaluation of the suitability of the data for use in physics analysis. Data quality monitoring in the ATLAS trigger system is described in Ref. [19].
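The automated comparison of monitored distributions with references can be illustrated with a minimal sketch. The per-bin chi-square test, the flagging threshold, and the function names below are illustrative assumptions, not the actual ATLAS data-quality framework:

```python
def histogram_chi2(observed, reference):
    """Chi-square distance between a monitored histogram and its
    reference, with the reference scaled to the observed total."""
    n_obs, n_ref = sum(observed), sum(reference)
    chi2, ndf = 0.0, 0
    for obs, ref in zip(observed, reference):
        expected = ref * n_obs / n_ref  # reference prediction for this bin
        if expected > 0:
            chi2 += (obs - expected) ** 2 / expected
            ndf += 1
    return chi2, ndf

def flag_distribution(observed, reference, threshold=3.0):
    """Flag a monitored distribution for expert attention if the
    reduced chi-square exceeds an (illustrative) threshold."""
    chi2, ndf = histogram_chi2(observed, reference)
    return ndf > 0 and chi2 / ndf > threshold
```

A distribution identical to its reference is not flagged, while one with a strongly distorted shape is.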
Large discrepancies between data and simulation were observed in b-jet trigger efficiencies (compared with the offline b-tagging) at the start of the 2016 data-taking campaign. The cause was found to be that the performance of the algorithm used to determine the hard-scatter primary-vertex position depended on the nominal online beamspot position (the centre of the region where the two proton bunches cross in the detector). The nominal beamspot position is estimated online by averaging the primary-vertex position over many events [19]. The track reconstruction in the trigger uses the nominal online beamspot position, while the online primary-vertex position is defined relative to the detector origin, (x = 0, y = 0, z = 0). A mismatch in the handling of these two coordinate systems resulted in the online primary-vertex-finding algorithm failing to efficiently reconstruct the vertex position in cases where the mean z-position of the interaction region (z online beamspot ) was far from the nominal z = 0 origin used elsewhere in ATLAS software. The problem was resolved during 2016 data-taking and a b-jet-trigger-aware good run list (GRL) is provided to reject events with |z online beamspot | > 2 mm in the affected data. Further information is available in Ref. [20]. The application of the b-jet-trigger-aware GRL reduces the integrated luminosity of the 2016 dataset from 32.9 to 24.6 fb −1 . A more stringent GRL, provided for use in precision measurements, tightens this requirement to reject events with |z online beamspot | > 1 mm, which reduces the integrated luminosity further to 20.6 fb −1 . In all years, luminosity blocks at the start of each run associated with an out-of-date or invalid beamspot position are discarded.

Table 1 The maximum instantaneous luminosity (L), the peak pile-up ( μ ), the average μ , and the integrated luminosity ( L) per year, after applying the b-jet-trigger-aware GRL, for each year of pp collision data-taking
This additional requirement reduces the integrated luminosity of the 2017 and 2018 datasets by approximately 1.5% compared with the baseline ATLAS GRL. The maximum instantaneous luminosity, and therefore the average number of pp interactions per bunch crossing under constant beam conditions, μ , commonly referred to as 'pile-up', increased by a factor of four during Run 2. This information, together with the integrated luminosity of the datasets after requiring stable beam conditions and the b-jet-trigger-aware GRL described above, is summarised for each year of Run 2 data-taking in Table 1. Uncertainties in the integrated luminosity are obtained using the methods discussed in Ref. [21] and the LUCID-2 detector [22] for the primary luminosity measurements.
Monte Carlo (MC) simulations of top-quark pairs (tt) produced in pp collisions are used throughout this paper to provide a sample of simulated b-, c-, and light-flavour jets. The production of tt events was modelled using the Powheg Box v2 [23][24][25][26] generator at next-to-leading order with the NNPDF3.0nlo [27] parton distribution function (PDF) set and the h damp parameter set to 1.5 m top [28]. The events were interfaced to Pythia 8.230 [29] to model the parton shower, hadronisation, and underlying event, with parameter values set according to the A14 tune [30] and using the NNPDF2.3lo set of PDFs [31]. The decays of bottom and charm hadrons were performed by EvtGen 1.6.0 [32]. The tt sample was normalised to a cross-section of 832 ± 51 pb, corresponding to the prediction at next-to-next-to-leading order in QCD including the resummation of next-to-next-to-leading logarithmic soft-gluon terms calculated using Top++2.0 [33][34][35][36][37][38][39]. At least one top quark was required to decay into a final state with a lepton. Other MC processes used in the b-jet trigger efficiency measurement and calibration described in Sect. 8 are the same as those used in Ref. [11].
For certain studies (for example, the hybrid tuning described in Sect. 6.1), a sample of high-E T simulated b-jets was required. In these cases, simulated Z → qq events are used, where the Z boson has a mass of 1 TeV and has equal branching fractions to light-, c-, and b-flavour quark-antiquark pairs. The samples were generated using Pythia 8.165 with the NNPDF2.3lo PDF set and the A14 set of tuned parameters.
The effect of multiple pp interactions per bunch crossing, as well as the effect on the detector response due to interactions from bunch crossings before or after the one containing the hard interaction, was modelled by overlaying the hard-scatter interactions with events from the Pythia 8.160 generator, using the NNPDF2.3lo PDF set and the A3 parameter tune [40]. Simulated events were then processed through the ATLAS detector simulation [41] based on Geant4 [42].
Jets in simulations are assigned labels based on geometric matching to particle-level information in the MC event record. Jets with radius R = 0.4 that are matched to a weakly decaying b-hadron with p T ≥ 5 GeV within ΔR = 0.3 of the jet axis are labelled as b-jets. If the b-jet labelling requirements are not satisfied then the procedure is repeated for charm hadrons and then τ-leptons. Any remaining jets are labelled as light-flavour.
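The labelling cascade described above can be sketched as follows. The dictionary keys are illustrative, and the application of the same p T and ΔR requirements to c-hadrons and τ-leptons is an assumption made for brevity:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation, with the phi difference wrapped into [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def label_jet(jet, particles, max_dr=0.3, min_pt=5.0):
    """Label a jet by geometric matching to particle-level objects,
    trying b-hadrons first, then c-hadrons, then tau-leptons; any
    unmatched jet is labelled light-flavour (pT in GeV)."""
    for flavour in ("b", "c", "tau"):
        for p in particles:
            if (p["flavour"] == flavour and p["pt"] >= min_pt
                    and delta_r(jet["eta"], jet["phi"], p["eta"], p["phi"]) < max_dr):
                return flavour
    return "light"
```

The ordering of the loop implements the priority of the cascade: a jet matched to both a b-hadron and a c-hadron is labelled as a b-jet.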
The LHC also operates a heavy-ion physics programme, providing lead-lead (Pb+Pb), and proton-lead (p+Pb) collisions. Specially modified b-jet triggers, designed to select semileptonic b-hadron decays characterised by a muon geometrically matched to a jet, were operated during the 2018 Pb+Pb run where 1.7 nb −1 of data with a nucleon-nucleon centre-of-mass energy √ s NN = 5.02 TeV and a peak luminosity of 6.2 × 10 27 cm −2 s −1 were collected.

Trigger jets
The b-tagging of jets online (i.e. at the trigger level) requires that jets must first have been reconstructed by the trigger and required to pass a given transverse energy threshold, initially at L1, and subsequently in the HLT [43]. In general, only calorimeter information is used to identify and measure the properties of jets at the trigger level and they are characterised by their E T . This is in contrast to the offline environment [44], where information from the tracking detectors is available for all jets and they are described in terms of their transverse momentum.

L1 jet reconstruction
Jets are identified by the L1 calorimeter trigger [45,46] in an 8 × 8 trigger-tower cluster that includes a 2 × 2 local maximum that defines the RoI's coordinates. Trigger towers are formed independently for the electromagnetic and hadronic calorimeter layers with a finer granularity of approximately Δη × Δφ = 0.1 × 0.1 in the central |η| < 2.5 part of the detector and a coarser granularity for |η| > 2.5. The summed energy of deposits in both the electromagnetic and hadronic calorimeters is required to pass the minimum E T requirements of a given trigger item. Jets can be identified at L1 out to |η| = 4.9, although usually only jets out to |η| = 3.2 are considered for b-jet trigger chains (and b-tagging is only run on jets out to |η| = 2.5). For the multi-b-jet triggers that have low E T thresholds, jets are required to be within the acceptance of the tracking detectors (i.e. |η| < 2.5) in order to lower the rates at L1. Requirements are placed on the L1 jets to select events for further processing in the HLT, and also to seed HLT jet reconstruction. A new topological trigger (L1Topo) [15] that uses field-programmable gate arrays (FPGAs) was installed and commissioned in 2016. L1Topo provides the functionality to make selections based on geometric or kinematic matching between different L1 objects and refine the selection criteria used at L1.

HLT jet reconstruction
Jets are reconstructed in the HLT using the anti-k t jet clustering algorithm [47,48]. Only jets with radius parameter R = 0.4 were considered for b-tagging during pp data-taking, although jets with radii of 0.2 or 0.3 were also used during the Pb+Pb data-taking in 2018. The calorimeter topoclusters [49] that are used as inputs to the HLT jet algorithm are reconstructed from the full set of calorimeter cell information and calibrated at the electromagnetic scale. The jets are then calibrated using a procedure similar to that used for offline jets [50], by subtracting contributions to the jet energy from pile-up and applying E T - and η-dependent calibration factors derived from simulations. Two sets of jets are used in the b-jet trigger. As a first step, all jets with E T > 30 GeV are used to find the primary vertex of the event, as described in Sect. 4.2.1. In the second step, RoIs are constructed for jets passing the specific E T threshold(s) of that trigger, as described in Sect. 4.2.2.
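The split into the two jet sets can be sketched as a simple selection over calibrated HLT jets. The dictionary representation and the single b-tagging threshold are illustrative assumptions (real triggers may use several per-jet thresholds):

```python
def select_trigger_jets(jets, btag_threshold):
    """Split calibrated HLT jets into the two sets used by the b-jet
    trigger: jets for primary-vertex finding (ET > 30 GeV) and jets to
    be evaluated for b-tagging (trigger-specific threshold; ET in GeV).
    b-tagging is only run on jets within tracker acceptance, |eta| < 2.5."""
    vertex_jets = [j for j in jets if j["et"] > 30.0 and abs(j["eta"]) < 2.5]
    btag_jets = [j for j in jets if j["et"] > btag_threshold and abs(j["eta"]) < 2.5]
    return vertex_jets, btag_jets
```

A soft jet can thus contribute to primary-vertex finding without ever being evaluated for b-tagging.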

Super-RoI approach for primary-vertex finding
While the usual approach of sequentially processing individual RoIs is acceptable in 'quiet' events where only a few RoIs are selected, in events with significant activity, e.g. those with large jet multiplicities and/or higher pile-up, this approach can lead to the same regions of the detector being processed multiple times, as illustrated in Fig. 2a. In addition to the clear downside of wasting CPU resources, this approach has the added disadvantage of potentially biasing the primary-vertex finding (described in Sect. 5.1) by double-counting tracks in overlapping regions. An alternative approach is to consider an amalgamation of the individual RoI constituents, with each corresponding to a single jet, and removing any overlapping regions so that these are only processed once (as illustrated in Fig. 2b). This 'super-RoI' functionality provides a means to perform primary-vertex finding (along the beamline) in a uniform way, regardless of the jet thresholds fulfilled. This approach was used for primary-vertex finding in the b-jet triggers from 2016 onward, by consolidating all HLT jets with E T > 30 GeV and |η| < 2.5 into a super-RoI. The individual jet RoIs that constitute the super-RoI were defined with spatial dimensions of 0.2 for the η and φ half-width (half of the full width) during 2016. In 2017 and 2018 these were reduced to 0.1 in both directions with negligible loss of b-jet trigger performance. No constraint in the z-direction is applied and the RoI covers the full range in z of the detector (±225 mm around z = 0).
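The overlap removal achieved by the super-RoI can be sketched as a union of the per-jet windows: a detector element is processed once if it falls inside any constituent window, no matter how many windows overlap it. The helper names below are illustrative:

```python
import math

def in_window(eta, phi, jet, half_width=0.1):
    """True if the point lies in the square eta-phi window around a jet
    axis (0.1 half-width, as used in 2017-2018)."""
    dphi = (phi - jet["phi"] + math.pi) % (2 * math.pi) - math.pi
    return abs(eta - jet["eta"]) <= half_width and abs(dphi) <= half_width

def super_roi_contains(eta, phi, jets, half_width=0.1):
    """A detector element belongs to the super-RoI if it lies inside ANY
    constituent window, so overlapping windows are processed only once."""
    return any(in_window(eta, phi, j, half_width) for j in jets)
```

Wrapping the φ difference into [-π, π] ensures that windows straddling the ±π boundary are handled correctly.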

RoIs for b-tagging jets
The jets that will be considered for b-tagging are formed from RoIs with |η| < 2.5 and a half-width in the η and φ directions of 0.4 around the jet axis, with the apex centred on the primary-vertex position. A schematic diagram illustrating the RoI defined for a single jet (passing the relevant E T requirements for each step) and used in the trigger is shown in Fig. 3. The width along the z-direction was conservatively constrained to be ±20 mm either side of the primary vertex during 2016, and optimised to ±10 mm in 2017 and 2018 with negligible loss of performance. This requirement dramatically reduces the volume that the tracking must be run on and makes the choice of an RoI η-φ half-width of 0.4 affordable in terms of the CPU processing time of the trigger software. This RoI η-φ half-width of 0.4 is comparable to the radius parameter of 0.4 used for anti-k t jets in the offline reconstruction and ensures that the jet is fully contained within the RoI volume. This provides better tagging performance, particularly for softer jets, than the η-φ half-width of 0.2 that was used for b-jet triggers in Run 1. Jets selected for b-tagging are also required to pass the specific E T thresholds of that particular trigger. If these E T requirements are not satisfied then the b-jet trigger algorithms are terminated and no further processing is performed.
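A minimal sketch of the per-jet RoI selection under the 2017–2018 configuration (η-φ half-width of 0.4 and ±10 mm in z around the primary vertex); representing track parameters as dictionary fields is an illustrative assumption:

```python
import math

def track_in_jet_roi(track, jet, pv_z, half_width=0.4, z_half_width=10.0):
    """Select tracks inside the per-jet RoI used for b-tagging: within
    the eta-phi window around the jet axis and within +-10 mm of the
    primary-vertex z position (2017-2018 configuration; z in mm)."""
    dphi = (track["phi"] - jet["phi"] + math.pi) % (2 * math.pi) - math.pi
    return (abs(track["eta"] - jet["eta"]) <= half_width
            and abs(dphi) <= half_width
            and abs(track["z0"] - pv_z) <= z_half_width)
```

The tight z constraint is what makes the comparatively wide 0.4 η-φ window affordable: most of the beamline is excluded before any pattern recognition is run.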

Global sequential jet calibration
An improved jet energy calibration scheme, the global sequential jet calibration (GSC) [50,51] was introduced for 2017 data-taking in order to improve the jet energy resolution in the HLT. The GSC uses information about the longitudinal shower shapes of jets, and characteristics of associated tracks, to correct the energy scale of jets. The GSC profits from the availability of the primary vertex and precision tracking information already provided by the b-jet trigger (described in Sect. 5). Using the calibrated jet E T measurement from the GSC, a tighter jet selection can subsequently be applied to the jets evaluated for b-tagging in the b-jet trigger, resulting in better efficiency turn-on curves. The GSC is also used to improve the trigger efficiency turn-on curves for inclusive jet triggers.

Tracking and vertex finding
Tracking must be run inside the RoI of HLT jets in order to find the primary and secondary vertices, and extract information about the jet properties, including the likelihood that they originate from a heavy-flavour hadron decay.
The HLT tracking was redesigned for Run 2 in order to fully benefit from the merging of the two stages of the high-level trigger that had been used in Run 1 [15,52,53]. Information about hits in the silicon detectors is extracted for each RoI and a custom fast-tracking stage is used which generates triplets of hits that are then used to seed track candidates. The track candidates are then extended into the rest of the silicon detector using the offline combinatorial track-finding tool [54]. A fast Kalman filter [55] is subsequently used to define track candidates. These steps comprise the 'Fast Tracking' algorithm that is used by the b-jet trigger for primary-vertex finding (described in Sect. 5.1). These tracks typically have a resolution of better than ∼ 100 μm for their z-position along the beamline.
Precision Tracking is also available in the HLT. The Fast Tracking algorithm is run as a first step, and tracks are subsequently passed to the offline ambiguity-solving algorithm [54] that (among other functions) removes duplicate tracks, and are extended into the TRT. This second stage greatly improves the resolution of the track parameters and removes many fake track candidates produced by the Fast Tracking, which is optimised for efficiency rather than purity. In the b-jet trigger, the Precision Tracking is run on all jets that pass the minimum E T thresholds to be further considered for b-tagging (discussed in Sect. 5.2).

Primary-vertex finding
Precisely determining the position of the primary vertex of the event is the crucial first step when evaluating the probability that a jet is a b-jet (the 'b-tagging weight'). Only by knowing the primary-vertex position can secondary vertices then be reconstructed and evaluated to determine the final b-tagging weight.
The Fast Tracking algorithm is run for all regions of the detector encompassed by the super-RoI, described in Sect. 4.2.1, and the found tracks are used as inputs to the primary-vertex-finding algorithm. The same iterative primary-vertex-finding algorithm that is used offline [10] was used in the b-jet trigger from 2016 onward. The algorithm looks for combinations of tracks that have compatible z-positions, and the primary vertex is chosen to be the one with the highest sum of squared transverse momenta ( p T 2 ) of associated tracks. This improves the precision with which the primary vertex is reconstructed by approximately 10% (in each direction) compared with an alternative histogram-based approach used during Run 1 and in 2015 [53]. For the histogramming approach, the z-coordinate positions of all tracks in an event, relative to the centre of the beamspot, were weighted by their p T and used to populate a histogram with a 1 mm bin width. The centre of the most populated bin was taken to be the primary-vertex z coordinate, with the online beamspot position then used to define the x and y coordinates. A comparison of the performance of the histogram-based and iterative primary-vertex-finding algorithms used in the trigger is shown in Fig. 4, which displays the differences between primary-vertex coordinates found online and offline in simulated tt events. The performance of primary-vertex-finding algorithms in the trigger is presented in detail in Ref. [53].
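The histogram-based z-finder used in Run 1 and 2015 can be sketched as follows, assuming simple track dictionaries (mm for positions, GeV for p T ); the function name and the ±225 mm z range (matching the detector coverage quoted in Sect. 4.2.1) are illustrative:

```python
def histogram_vertex_z(tracks, beamspot_z, bin_width=1.0, z_range=225.0):
    """Run-1-style primary-vertex finder: fill a pT-weighted histogram
    of track z positions relative to the beamspot centre (1 mm bins)
    and return the centre of the most populated bin."""
    n_bins = int(2 * z_range / bin_width)
    weights = [0.0] * n_bins
    for t in tracks:
        i = int((t["z0"] - beamspot_z + z_range) / bin_width)
        if 0 <= i < n_bins:
            weights[i] += t["pt"]  # each track weighted by its pT
    i_max = max(range(n_bins), key=weights.__getitem__)
    return beamspot_z + (i_max + 0.5) * bin_width - z_range
```

The p T weighting makes the bin containing the hard-scatter tracks dominate over pile-up vertices populated by softer tracks.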
In Run 1 and 2015-2016, tracks with p T > 1 GeV were considered for primary-vertex finding. In 2017 and 2018 this threshold was raised to 5 GeV, to reduce the CPU cost of primary-vertex finding (and its associated tracking) by a factor of five, with a negligible effect on the primary-vertex-finding efficiency or b-jet trigger efficiencies.

Tracking for secondary-vertex finding and b-tagging
For each trigger, jets are selected for further processing if they pass the lowest E T threshold. Precision Tracking, consisting of the Fast Tracking plus ambiguity-solving steps, is run in the RoIs corresponding to these jets and all tracks with p T > 1 GeV are kept. The tracks in the RoI are used together with information about the jet direction and the primary vertex as inputs to the b-tagging algorithms (described in Sect. 6).

Tracking performance in b-jet triggers
To evaluate the performance of the tracking used in b-jet triggers, offline tracks are selected and matched to online tracks using the procedure described in Ref. [53]. The efficiencies of the Fast and Precision Tracking algorithms used in the b-jet triggers relative to the offline tracking are shown as a function of both the offline track transverse momentum and pseudorapidity in Fig. 5. The d 0 and z 0 resolutions are shown in Fig. 6. Both figures show results for the Fast Tracking within the super-RoI discussed in Sect. 4.2.1 that is used to find the primary vertex, and also results for the Fast and Precision Tracking that is used for secondary-vertex finding and b-tagging within the individual jets. Results are produced by using dedicated 'b-jet performance triggers' that require jet E T thresholds of 55 GeV or 150 GeV and run the full tracking and b-tagging software, but do not place any requirements on the b-tagging weight of the jet. These provide an unbiased estimate of the tracking efficiency. Both triggers were prescaled during the data-taking period (meaning that not every event that satisfied the trigger requirements was recorded for further processing). The 150 GeV threshold trigger was run with a lower prescale factor, and correspondingly improved statistical precision, compared with the 55 GeV trigger, particularly at high transverse momenta. The data used were collected during a single run in 2018. The average p T of tracks in the RoI is correlated with the jet E T threshold of the trigger. The 150 GeV jet trigger therefore has a higher proportion of high-p T tracks compared with the trigger that requires a 55 GeV jet.
These differences in the track p T spectra mean that the track reconstruction efficiency at low track p T appears slightly worse in the 55 GeV trigger than in the 150 GeV trigger, as within a single bin, the former contains relatively more tracks at low p T and the efficiency of some bins is therefore skewed by the steeply falling p T distribution. Tracks selected by the lower E T chain are therefore more sensitive to threshold effects when performing the matching to offline tracks, which also causes the integrated efficiency to be slightly lower. The d 0 and z 0 resolution distributions are largely insensitive to the jet E T threshold of the trigger and so are only shown here for the data collected using the trigger with a 55 GeV threshold.
The Fast Tracking for the primary vertex is configured only to reconstruct tracks with p T above 5 GeV, and so the efficiencies and resolutions are only evaluated for offline tracks that fulfil the same requirement. For the Fast and Precision Tracking used for the b-tagging, the efficiencies and resolutions are calculated relative to offline tracks with transverse momentum above 1 GeV. The requirement of p T > 5 GeV applied during pattern recognition in the Fast Tracking used for primary-vertex reconstruction means that the track-finding efficiency is very sensitive to the track momentum resolution around the offline track p T threshold of 5 GeV, and also slightly reduces the track reconstruction efficiency at higher p T . Partly as a consequence of this track p T threshold, the presence of inactive pixel modules has the potential to affect the reconstruction of a large fraction of tracks in a super-RoI constituent: the individual constituent RoIs are sufficiently narrow in both η and φ that they may often span no more than a single module in the innermost pixel layers. The primary-vertex tracking at all transverse momenta is therefore very sensitive to inactive modules in these inner layers, and a reduction in the efficiency of up to a few percent is observed in some regions of φ. This results in a lower overall tracking efficiency when compared with either the Fast or Precision Tracking when executed in a wider region of interest. Since the purpose of the vertex tracking is only to identify the z-position of the primary vertex for the second-stage Precision Tracking, the reduced track reconstruction efficiency does not lead to any significant performance loss in the trigger.
The efficiency is generally better than 99% at higher p T but is somewhat lower for the Precision Tracking near the 1 GeV track p T threshold. The Precision Tracking efficiency in the first bin, between 1 GeV and 1.2 GeV, drops to 84% because of a tight selection on the transverse momentum of the candidates used by the ambiguity solver, which is needed to reduce the execution time; for that reason, this efficiency point lies outside the range shown in Fig. 5. This reduced efficiency near the threshold is the primary reason for the slightly lower efficiency seen in the Precision Tracking as a function of track pseudorapidity.
The z 0 and d 0 resolutions improve at higher transverse momenta to approximately 70 μm and 20 μm respectively, taking the mean across the full pseudorapidity range, and with a z 0 resolution as low as 40 μm for tracks perpendicular to the beamline. The deterioration of the tracking resolution at large |η| as the tracks traverse more material at large angles can be seen clearly. An improvement of the z 0 resolution by a factor of two at low p T and by nearly 100 μm in the endcap is observed for the Precision Tracking compared with the Fast Tracking. For d 0 the improvement is 10 μm at low p T compared with the Fast Tracking, and is approximately 5 μm at large p T and central pseudorapidities.

HLT b-jet identification
A schematic overview of the complete sequence of algorithms that form the b-jet trigger is shown in Fig. 7. The final stage of the b-jet trigger is to assess the probability that jets that passed the required E T thresholds originated from a b-hadron decay. The output of the b-tagging algorithm is evaluated for each individual jet, and the requirements of the trigger are assessed. If these are satisfied, the event is kept; otherwise it is discarded.

b-tagging algorithms
The probability that a given jet originated from a b-hadron decay is assessed by using low-level algorithms to match tracks to jets, reconstruct secondary vertices, and identify tracks with large impact parameters relative to the primary vertex. The same 'shrinking cone' algorithm that is used offline [11] is employed for matching tracks to jets. The outputs of these low-level b-tagging algorithms are then used as inputs to multivariate algorithms that provide excellent discrimination between b-jets and light-flavour jets or c-jets.
Four low-level algorithms that exploit different features of b-hadron decays are used in ATLAS:

• IP2D: Uses the signed transverse impact parameter significance (defined as d 0 /σ d 0 , where σ d 0 is the uncertainty on the reconstructed d 0 ) of tracks associated with a jet [56]. Reference histograms derived using MC simulations provide probability density functions that are used to calculate the probabilities that a given track originated from a b-jet, c-jet, or light-flavour jet. The ratios of the per-track probabilities for each jet-flavour hypothesis are calculated, and their logarithms summed over all tracks to provide a per-jet probability of the jet's flavour origin. Three separate discriminants are defined, separating b-jets from light-flavour jets, c-jets from light-flavour jets, and b-jets from c-jets.

• IP3D: Uses a log-likelihood-ratio discriminant similar to those in IP2D, but uses both the transverse and longitudinal signed impact parameter significances to construct the track flavour-origin probability density functions [56]. The longitudinal impact parameter significance is defined as z 0 /σ z 0 , where σ z 0 is the uncertainty on the reconstructed z 0 .

• SV1: Attempts to reconstruct a single inclusive secondary vertex from the tracks within the jet. Tracks compatible with decays of long-lived particles (K 0 S or Λ), photon conversions, or hadronic interactions with the detector are rejected. The algorithm iterates over all of the two-track vertices, trying to fit a single secondary vertex. At each iteration the fit is evaluated using a χ 2 test, and the track with the largest χ 2 is removed. The fit continues until the secondary vertex has an acceptable χ 2 and the invariant mass of the track system associated with the vertex is less than 6 GeV. Discriminating variables are used as inputs to the higher-level taggers.

• JetFitter: Exploits the topological structure of weak b- and c-hadron decays inside the jet, attempting to reconstruct the full b-hadron decay chain.
When used as a stand-alone b-tagging algorithm, SV1 uses the secondary-vertex mass, the ratio of the sum of the transverse momenta of tracks associated with the secondary vertex to the sum of the p T of all tracks in the jet ( Σ p T (SV tracks) / Σ p T (all tracks) ), and the number of two-track vertices to determine probability density functions for each jet-flavour hypothesis. The probabilities are used as inputs to log-likelihood-ratio discriminants that separate b-jets from light-flavour jets, c-jets from light-flavour jets, and b-jets from c-jets.

The high-level b-tagging in the trigger uses the MV2 algorithm (described in Ref. [11]) that was developed for offline flavour-tagging in ATLAS, adapted to the online environment. MV2 combines the outputs of the low-level IP2D, IP3D, SV1 and JetFitter algorithms into a boosted decision tree (BDT). The transverse and longitudinal track impact parameters and their corresponding significances are key inputs to all of the b-tagging algorithms described above and are shown in Fig. 8 for light-flavour jets and b-jets, when computed online and offline. Distributions of selected jet-level variables related to the IP3D, SV1 and JetFitter b-tagging algorithms are shown in Fig. 9. The distributions are shown for jets with E T > 55 GeV and |η| < 2.5 in simulated tt events. Good separation between light-flavour jets and b-jets is observed. The differences in the distributions between HLT and offline quantities clearly motivate the need to reoptimise and retrain the multivariate algorithms for the online environment, and substantially improved performance is observed with dedicated reoptimisations.
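The likelihood-ratio construction used by the IP taggers can be sketched as follows. The Gaussian templates below stand in for the MC-derived reference histograms and are entirely hypothetical; only the structure of the calculation (per-track probability ratios whose logarithms are summed over the jet's tracks) follows the description above.

```python
import numpy as np

# Hypothetical PDFs of the signed impact-parameter significance s = d0/sigma(d0).
# b-jet tracks are modelled with a broader, positively shifted component; the
# widths and fractions are invented purely for illustration.
def pdf_b(s):
    core = np.exp(-0.5 * (s / 1.0) ** 2) / (1.0 * np.sqrt(2 * np.pi))
    tail = np.exp(-0.5 * ((s - 1.5) / 2.5) ** 2) / (2.5 * np.sqrt(2 * np.pi))
    return 0.5 * core + 0.5 * tail

def pdf_light(s):
    return np.exp(-0.5 * (s / 1.0) ** 2) / (1.0 * np.sqrt(2 * np.pi))

def ip_llr(track_significances):
    """Per-jet discriminant: sum over tracks of log(P_b / P_light)."""
    s = np.asarray(track_significances, dtype=float)
    return float(np.sum(np.log(pdf_b(s) / pdf_light(s))))

# A jet whose tracks have large positive significances scores more b-like:
print(ip_llr([3.0, 4.5, 2.0]) > ip_llr([0.1, -0.2, 0.3]))  # True
```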
The MV2 algorithms (and the low-level algorithms that form the inputs to MV2) were retrained for the online environment on simulated tt events and using HLT tracks and b-tagging information to provide a discriminant to assess whether an individual jet arises from the hadronisation of a bottom or charm quark, or light-flavour quark or gluon. Tunings were performed using the same procedures adopted for offline flavour-tagging in ATLAS [11], further harmonising the procedures used in the trigger with those used offline. In 2016 a version of this tagger was used that was trained to identify b-jets using a background sample composed of 80% light-flavour jets and 20% c-jets and is denoted 'MV2c20'. In 2017 and 2018 the fraction of c-jets in the background sample was reduced to 10% to mirror the evolution of the offline b-tagging [61] and the algorithm is therefore denoted 'MV2c10'.
Working points for the MV2 algorithms were designed that mirror the offline working points, providing 60%, 70%, 77%, and 85% b-jet tagging efficiencies for b-jets in the simulated tt sample. In addition, working points providing selection efficiencies of 40% and 50% for b-jets were included in order to provide triggers with lower jet E T thresholds. Requiring that jets are b-tagged at the trigger level means that the jet E T thresholds can be lowered significantly. For example, including the requirement that jets pass the MV2c10 tagger at a 40% (70%) working point allows the E T threshold of single-b-jet triggers to be reduced to 225 (300) GeV, from a threshold of 420 GeV when no b-tagging requirements are applied. Requiring more than one b-tagged jet in a trigger allows jet E T thresholds to be lowered even further. Four-jet triggers required E T thresholds of 115 GeV when no b-tagging requirements were applied, but these thresholds could be reduced to as low as 35 GeV when two of the jets are required to be b-tagged (details of these triggers are provided in Sect. 7). The total processing time for the b-jet triggers is dominated by the jet-finding and tracking that are used as inputs to the b-tagging algorithms. The mean time to evaluate the b-tagging weight of a single jet is 16.2 ms (for ⟨μ⟩ = 52), once the jet and tracks have been found.

Fig. 7 A schematic overview of the different components of the b-jet trigger sequence. HLT jets (grey boxes) are used as inputs to the primary-vertex finding (pink boxes) and b-tagging of jets that point towards the primary vertex (blue boxes). The GSC (dashed outline), as provided by the HLT jets (described in Sect. 4.3), can be applied as an optional step, in which case a second requirement is placed on the jet E T using the calibrated value
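The translation of a target b-jet efficiency into a cut on the tagger output can be sketched with synthetic score distributions. The Gaussian shapes below are not the real MV2 output shapes, and the resulting rejection numbers are purely illustrative; in practice the cut values are derived from the simulated tt sample.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical tagger outputs in [-1, 1]: b-jets peak high, light jets low.
mv2_b = np.clip(rng.normal(0.8, 0.3, 100_000), -1, 1)
mv2_light = np.clip(rng.normal(-0.6, 0.4, 100_000), -1, 1)

def working_point_cut(scores_b, eff):
    """Discriminant cut such that a fraction `eff` of b-jets passes."""
    return np.percentile(scores_b, 100.0 * (1.0 - eff))

for eff in (0.60, 0.70, 0.77, 0.85):
    cut = working_point_cut(mv2_b, eff)
    b_eff = np.mean(mv2_b >= cut)
    light_rej = 1.0 / np.mean(mv2_light >= cut)  # rejection = 1 / mistag rate
    print(f"WP {eff:.0%}: cut={cut:+.3f}, b-eff={b_eff:.2f}, light rejection={light_rej:.0f}")
```

As in the text, tightening the working point (lower efficiency) raises the cut and increases the light-flavour rejection.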
Optimising the software throughout Run 2 in order to reduce the CPU cost of the b-jet triggers meant that the rates rather than the CPU processing time were always the determining factor for the E T threshold of triggers used for physics analysis.
MV2 was superseded in 2019 by the DL1r algorithm (described in Ref. [56]), which uses a deep feed-forward neural network to provide a multidimensional output corresponding to the probabilities for a jet to be a b-jet, c-jet, or light-flavour jet, and is now the default for offline physics analyses in ATLAS. This algorithm was not available in time to be used in the online environment, but provides the baseline against which the b-jet trigger performance is measured (as described in Sect. 8).

b-jet trigger performance
The performance of the b-jet triggers is quantified by the probability of tagging a b-jet (b-jet efficiency, ε b ) and by the rejection power against c-jets and light-flavour jets, where the rejection is defined as the inverse of their efficiency to pass the b-tagging requirements. Jets are categorised as b-jets, c-jets or light-flavour jets following the particle-level definitions described in Sect. 3. Figure 10 shows the expected performance of the b-jet trigger in terms of light-flavour jet and c-jet rejection of the MV2c20 tagger, together with the performance of the IP3D+SV1 tagger that was used during Run 1. The tuning is performed on simulated tt events with √s = 13 TeV. Jets are required to have E T > 55 GeV and |η| < 2.5. An order of magnitude improvement in light-flavour jet rejection for the same b-jet selection efficiency was achieved in 2016 compared with 2012 (Run 1). This performance increase is attributed to the installation of the insertable B-layer for Run 2, in conjunction with all of the software and algorithmic improvements described in this work. An additional factor of ∼1.5 improvement in light-flavour jet rejection was attained in 2017 and 2018 by further optimising the use of the MV2 algorithm in the HLT. These improvements made it feasible to operate triggers with lower E T thresholds and/or higher-efficiency working points than would have been affordable otherwise.
The baseline configuration of b-jet triggers in 2018 used the same tuning of MV2c10 that was deployed during the 2017 data-taking period. This was possible due to the general similarity between the running conditions in these two years. However, the b-jet trigger menu included several triggers that used a dedicated tuning of MV2c10 intended to improve the performance of the b-tagging algorithms at high E T (e.g. E T ≳ 250 GeV), where it becomes harder to identify b-jets. Following the same approach as is used for offline b-tagging in ATLAS, the tt sample used for the baseline tuning was interleaved with a Z′ → qq sample, which has a much larger proportion of jets at high E T and therefore increases the weight given to these jets by the BDT during training. The heavy vector boson (Z′) is generated with a mass of 1 TeV and a flat p T spectrum, and decays at equal rates into light-, c-, and b-flavour quark-antiquark pairs. This approach, referred to as the 'hybrid tuning', provides the BDT with consistent exposure to both high- and low-E T jets.
The performance of the baseline 2018 tuning (which uses only tt simulation in the training) and the hybrid tuning is compared in Fig. 11. Little difference is observed between the online 2018 baseline and hybrid approaches in a sample dominated by low-E T jets (tt). However, for the sample dominated by high-E T jets (Z′ → qq) the online hybrid tuning provides better rejection against light-flavour jets.

b-jet trigger evolution during Run 2
Several different types of b-jet triggers were operational throughout Run 2, where the E T thresholds and b-tagging requirements evolved in response to the increasing instantaneous luminosity during this time. Different combinations of jet and b-jet multiplicities, with different E T thresholds, with and without GSC calibrations (described in Sect. 4.3), and different b-tagging algorithms and working points were used to provide optimal coverage for the different analyses using b-jet triggers within the allocated trigger rate. The total rate for the full suite of b-jet triggers was up to 180 Hz at peak luminosity. Triggers that place requirements on the scalar sum of the E T of hadronic objects in the event (H T ) were also provided. This set of b-jet triggers was designed to provide optimal acceptance for processes targeted in current analyses, as well as to be general enough to provide good acceptance for yet-to-be-considered physics analyses. The parameters defining the b-jet triggers, including the (b-)jet multiplicity, E T and η requirements, and the b-tagging algorithm and working point(s), are summarised for single-b-jet triggers in Table 2, di-b-jet triggers in Table 3, jet+di-b-jet triggers with asymmetric E T thresholds in Table 4, di-b-jet+di-jet triggers in Table 5, and di-b-jet+H T triggers in Table 6.
Triggers targeting specific physics processes involving b-jets were also provided. Triggers requiring a di-b-jet plus missing transverse momentum (E miss T ) signature were designed to efficiently select pair-produced bottom squarks [7] and are detailed in Table 7. Higgs bosons produced via VBF and decaying into a pair of b-quarks were also able to be efficiently selected at trigger level through the use of dedicated triggers that require jets with a large invariant mass in the forward region of the detector. Additionally, some triggers required the presence of a photon in the event (where the photon may be radiated either from a charged weak boson or from one of the scattering initial-state quarks that subsequently showers into a jet) [4,5]. The photon requirements significantly reduce the contribution from large multijet backgrounds and allow lower E T requirements at the trigger level to be placed on the b-jets produced by the Higgs boson decay. The VBF plus b-jet (plus photon) triggers are summarised in Table 8.

Calibrations
The trigger is a crucial step in the event selection of any physics analysis, so its performance must be understood and calibrated. This section describes the b-jet trigger efficiency measurements made using pp data collected between 2016 and 2018. In physics analyses, the b-jet trigger is always used in tandem with offline b-tagging, which is calibrated without placing any requirements on the b-jet trigger. A 'conditional' b-jet trigger efficiency is therefore calculated relative to the offline b-tagging efficiency and defined as the fraction of b-jets that are b-tagged offline and match an HLT jet, that also pass the b-tagging requirements in the HLT. This conditional b-jet trigger efficiency is measured in data and evaluated in simulated tt events. Simulation-to-data scale factors (hereinafter referred to simply as scale factors) are derived to correct for any deviation of the b-jet trigger performance in MC simulation from that observed in data. The scale factors are applied only to simulated events and are designed to be applied in addition to the offline b-tagging scale factors [11]. The b-jet trigger efficiency and scale factors are measured for all combinations of offline and online b-tagging working points and only a few representative points are included here.
Historically, two methods have been used to calibrate the b-jet triggers. A geometrical matching method similar to that described in Ref. [61] was used to provide preliminary calibrations for Run 2 data analysis but is now superseded by the likelihood-based method that is described here and has smaller associated uncertainties. The same likelihood-based method is also used to calibrate the offline reconstruction and identification of b-jets in ATLAS and is described fully in Ref. [11]. The results presented here closely follow the analysis selection and method used for the offline b-tagging calibration, and only the most important features of the likelihood-based calibration and its adaptation to the online environment, together with the results, are described. Scale factors to correct for any MC-simulation mismodelling of the rate for light-flavour jets and c-jets to be misidentified as b-jets are provided for offline b-tagging [63,64]. Measuring the equivalent light-flavour and c-jet scale factors in the trigger is beyond the scope of this paper, but the impact of these scale factors is expected to be small in physics analyses that use b-jet triggers, where background processes are typically estimated using data-driven techniques and the signal processes, which are modelled using simulation, have a negligible fraction of non-b-jets.

Table 2 Details, by year, of the lowest-threshold unprescaled single-b-jet triggers. The minimum (b-)jet E T , |η|, and b-tagging requirements are specified for each item. HLT b-jets are required to be within |η| < 2.5. E T thresholds denoted * indicate that the GSC was applied and the value quoted is the calibrated E T threshold

Table 3 Details, by year, of the lowest-threshold unprescaled di-b-jet triggers. The minimum (b-)jet E T , |η|, and b-tagging requirements are specified for each item. HLT b-jets are required to be within |η| < 2.5. E T thresholds denoted * indicate that the GSC was applied and the value quoted is the calibrated E T threshold

Table 4 Details, by year, of the lowest-threshold unprescaled triggers requiring a high-p T jet plus two b-jets, such as might arise from a process where a particle decaying into two b-jets is accompanied by a jet from initial- or final-state radiation. No b-tagging requirements are applied to this 'additional jet' in the event. The minimum (b-)jet E T , |η|, and b-tagging requirements are specified for each item. HLT b-jets are required to be within |η| < 2.5. E T thresholds denoted * indicate that the GSC was applied and the value quoted is the calibrated E T threshold

Table 5 Details, by year, of the lowest-threshold unprescaled triggers requiring two b-tagged jets plus an additional two jets with no b-tagging requirements. The minimum (b-)jet E T , |η|, and b-tagging requirements are specified for each item. HLT b-jets are required to be within |η| < 2.5. Additional HLT jets are accepted up to |η| < 3.2 but in practice are mostly limited to be within |η| < 2.5 because of the L1 requirements. E T thresholds denoted * indicate that the GSC was applied and the value quoted is the calibrated E T threshold. The b-tagging requirements were tightened to use a 60% efficiency working point for part of the data-taking during 2016

Table 7 Details, by year, of the lowest-threshold unprescaled triggers giving a b-jet plus missing transverse momentum (E miss T ) signature. The minimum E miss T and (b-)jet E T , |η|, and b-tagging requirements are specified for each item. HLT b-jets are required to be within |η| < 2.5

Event selection
Top quarks are produced in abundance at the LHC and, since the branching fraction of the top quark decay into a W boson and a b-quark is nearly 100%, selecting events with pair-produced top quarks can provide a large data sample of b-jets that can be used to study the b-jet trigger efficiency. In order to reduce the contributions from multijet and W/Z +jets backgrounds, and maximise the purity of the selection, the offline selection requires events to have exactly one electron and one muon with opposite-sign charge and satisfying tight identification criteria. Furthermore, the electron and muon provide a signature that can be used to select events at the trigger level without using a b-jet trigger, such that no bias is introduced from online b-tagging. These 'single-lepton b-performance triggers' (detailed in Table 9) were designed and run specifically in order to study the performance of the b-jet triggers, and require the presence of an electron or muon, plus two additional jets. The b-jet trigger software is run on the jets and all associated b-tagging information is kept, but no selection is made on the online b-tagging weight of the jets. The triggers used for these measurements were run unprescaled, but in 2016 they were only run for part of the year and the integrated luminosity of that dataset is 13.1 fb −1 .
Events are required to pass the following selection:

• Pass one of the single-lepton b-performance triggers detailed in Table 9.

• Contain an offline muon with p T ≥ 28 GeV and |η| < 2.4, satisfying the 'Tight' identification and isolation requirements [66], and with no jet with three or more associated tracks within ΔR of 0.4.

• Contain an offline electron with E T ≥ 28 GeV and |η| < 2.47, excluding 1.37 ≤ |η| < 1.52, satisfying the 'Tight' identification and isolation requirements [67].

• Leptons are required to have |d 0 |/σ d 0 less than 5 (3) for electrons (muons) and |z 0 sin θ| less than 0.5 mm. These requirements ensure the selected leptons are prompt and associated with the primary vertex, defined as the collision vertex with the largest sum of p 2 T of tracks, as described in Sect. 5.1.
• The triggered lepton must match an offline electron or muon candidate.
- Matched to an HLT jet, within ΔR(online, offline) < 0.2.
- Not within ΔR = 0.2 of an electron.
- Jets with fewer than three associated tracks must not be within ΔR = 0.4 of a muon.
- Jets with p T < 120 GeV are required to pass the 'Medium' working point of the Jet Vertex Tagger (JVT) algorithm [68], which is used to reduce the number of jets with large energy fractions from pile-up collision vertices. The JVT efficiency for jets originating from the hard pp scattering is 92% in the simulation.
• Each lepton is paired with a jet, where the pairing is assigned by minimising m²(ℓi, j1) + m²(ℓj, j2), where j1 (j2) is the leading (sub-leading) jet, ℓi and ℓj are the two leptons, and m(ℓ, j1) (m(ℓ, j2)) is the invariant mass of the leading (sub-leading) jet and its associated lepton. Events where either m(ℓ, j1) or m(ℓ, j2) is more than 175 GeV are rejected.
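The pairing criterion can be sketched as follows, assuming massless four-vectors in an (E, px, py, pz) convention; the helper names are illustrative.

```python
from itertools import permutations

def mass2(p1, p2):
    """Squared invariant mass of the sum of two four-vectors (E, px, py, pz)."""
    e, px, py, pz = (p1[i] + p2[i] for i in range(4))
    return e * e - px * px - py * py - pz * pz

def pair_leptons_to_jets(leptons, jets):
    """Assign the two leptons to the leading (j1) and sub-leading (j2) jets by
    minimising m(l, j1)^2 + m(l', j2)^2 over the two possible assignments."""
    j1, j2 = jets
    best = min(permutations(leptons),
               key=lambda lp: mass2(lp[0], j1) + mass2(lp[1], j2))
    return list(zip(best, (j1, j2)))

# Massless test vectors: each lepton is collinear with one jet, so the correct
# pairing gives zero invariant mass for both lepton-jet pairs.
j1, j2 = (100.0, 0.0, 0.0, 100.0), (100.0, 100.0, 0.0, 0.0)
l1, l2 = (50.0, 0.0, 0.0, 50.0), (50.0, 50.0, 0.0, 0.0)
print(pair_leptons_to_jets([l2, l1], [j1, j2]) == [(l1, j1), (l2, j2)])  # True
```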
After applying these requirements, approximately 90% of the selected events in simulation contain two real b-jets. Light-flavour and c-jet backgrounds are estimated using MC simulation and included in the likelihood fit, following the procedure described in Ref. [11]. Fake-lepton backgrounds are estimated from simulation and are negligible.
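The geometric matching used in the selection above (e.g. ΔR(online, offline) < 0.2) relies on an angular distance in which the azimuthal difference must be wrapped into [−π, π]. A minimal sketch, with jets represented as hypothetical (η, φ) tuples:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance dR = sqrt(deta^2 + dphi^2), with dphi wrapped to [-pi, pi]."""
    deta = eta1 - eta2
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(deta, dphi)

def match_online(offline_jet, hlt_jets, max_dr=0.2):
    """Return the closest HLT jet within max_dr of the offline jet, or None."""
    if not hlt_jets:
        return None
    best = min(hlt_jets,
               key=lambda j: delta_r(offline_jet[0], offline_jet[1], j[0], j[1]))
    if delta_r(offline_jet[0], offline_jet[1], best[0], best[1]) < max_dr:
        return best
    return None

# An offline jet at (eta, phi) = (0.5, 3.1) matches an HLT jet at (0.55, -3.1)
# thanks to the phi wrapping (the true dphi is ~0.08, not ~6.2):
print(match_online((0.5, 3.1), [(0.55, -3.1), (1.8, 0.0)]))  # (0.55, -3.1)
```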

Calibration based on likelihood-based matching
Events passing the selection described in Sect. 8.1 are categorised according to the offline jet p T and the output of the online and offline b-tagging identification algorithms. Simulated events are further categorised by the particle-level label of the jets (as described in Sect. 3). A maximum-likelihood fit is then performed to extract the b-tagging efficiency from data, as a function of jet p T .
As in the offline measurement [11], a general extended binned log-likelihood approach is used for the extraction of the b-tagging efficiency, adapted to use only one signal region, i.e. where both jets pass b-tagging requirements. This likelihood function can be written as

L(θ̂) = e^(−ν tot) ∏ (i=1 to N bins) ν i (θ̂)^(n i) / n i ! ,

where ν tot is the total number of expected events, θ̂ = (θ 1 , . . . , θ m ) is the list of parameters to be estimated, including the parameters of interest and the nuisance parameters, and ν i (n i ) is the expected (observed) number of events in bin i, where N bins bins are considered in total.
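A toy version of such an extended binned-likelihood extraction is sketched below. The bin yields and the single free parameter (an overall b-tagging efficiency scaling a fixed signal template) are invented for illustration; the actual fit has several parameters of interest and nuisance parameters.

```python
import numpy as np

def neg_log_likelihood(n_obs, nu_exp):
    """Extended binned negative log-likelihood, dropping constant n_i! terms:
    -ln L = nu_tot - sum_i n_i * ln(nu_i), with nu_tot = sum_i nu_i."""
    n_obs = np.asarray(n_obs, dtype=float)
    nu_exp = np.asarray(nu_exp, dtype=float)
    return float(np.sum(nu_exp) - np.sum(n_obs * np.log(nu_exp)))

# Hypothetical yields in four jet-pT bins.
template = np.array([120.0, 300.0, 180.0, 60.0])  # tagged b-jets expected at eps = 1
background = np.array([10.0, 20.0, 15.0, 5.0])    # fixed non-b contamination
observed = np.array([95.0, 230.0, 140.0, 47.0])

# One-parameter scan over the b-tagging efficiency.
eps_scan = np.linspace(0.5, 1.0, 501)
nll = [neg_log_likelihood(observed, e * template + background) for e in eps_scan]
eps_hat = eps_scan[int(np.argmin(nll))]
print(f"fitted efficiency: {eps_hat:.3f}")  # the scan recovers eps close to 0.70
```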
Events are divided into five categories based on offline b-tagging working points. The first category does not apply any offline b-tagging requirements, while the remaining four are based on the offline b-tagging working points corresponding to efficiencies of 85%, 77%, 70% and 60% for true b-jets. For each category, events are divided into bins of offline jet p T in order to account for any p T -dependence of the scale factors. The p T binning used is [35–45, 45–…] GeV. The conditional b-jet trigger efficiency, ε^{trig|off}_b, is defined as the efficiency of a jet to be tagged as a b-jet by the online b-tagging algorithm if it has also passed the offline b-tagging. Here (and elsewhere), 'off' denotes the offline b-tagging, while 'trig' denotes the online b-tagging.
In order to evaluate this conditional efficiency, only events in which both jets are already tagged by the offline b-tagging are selected, and the efficiency of the online b-tagging in these events is evaluated. The ratio of the conditional efficiency measured in data to that evaluated in MC simulation is the conditional scale factor, defined as

SF^{trig|off}_b = ε^{trig|off}_b (data) / ε^{trig|off}_b (MC).

The overall efficiency for a jet to pass both the trigger and offline b-tagging, ε^{trig∧off}_b, is then given by ε^{trig∧off}_b = ε^{trig|off}_b · ε^{off}_b. Scale factors can also be derived in order to correct for b-jets that have failed either the online or offline b-tagging requirements (or both). The efficiencies of a given jet to satisfy a given combination of passing or failing the online and offline b-tagging can be computed for all regions using the online-only (ε^{trig}_b), offline (ε^{off}_b) and conditional (ε^{trig|off}_b) efficiencies, and employing Bayes' theorem. The efficiencies in each region can therefore be defined in the following way:

(i) A jet that fails the trigger b-tagging requirements and passes the offline b-tagging requirements: ε^{!trig∧off}_b = (1 − ε^{trig|off}_b) · ε^{off}_b.

(ii) A jet that passes the trigger b-tagging requirements and fails the offline b-tagging requirements: ε^{trig∧!off}_b = ε^{trig}_b − ε^{trig|off}_b · ε^{off}_b.

(iii) A jet that fails the trigger b-tagging requirements and fails the offline b-tagging requirements: ε^{!trig∧!off}_b = 1 − ε^{trig}_b − ε^{off}_b + ε^{trig|off}_b · ε^{off}_b.

In all cases the scale factors are subsequently defined as the ratio of the efficiencies measured in data to those evaluated in simulation.
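The pass/fail region efficiencies follow from the definition of conditional probability. A minimal numerical check, with hypothetical efficiency values, can be written as:

```python
def region_efficiencies(eff_trig, eff_off, eff_cond):
    """Probabilities of the four online/offline tagging outcomes for a b-jet,
    given the online-only (eff_trig), offline (eff_off) and conditional
    (eff_cond = P(trig | off)) efficiencies.
    P(trig and off) = eff_cond * eff_off by the definition of conditional probability."""
    p_both = eff_cond * eff_off
    return {
        "pass_trig_pass_off": p_both,
        "fail_trig_pass_off": eff_off - p_both,
        "pass_trig_fail_off": eff_trig - p_both,
        "fail_trig_fail_off": 1.0 - eff_trig - eff_off + p_both,
    }

# Hypothetical efficiencies for one jet-pT bin:
regions = region_efficiencies(eff_trig=0.68, eff_off=0.70, eff_cond=0.92)
assert abs(sum(regions.values()) - 1.0) < 1e-12  # the four outcomes are exhaustive
print(regions)
```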

Results
The conditional b-tagging efficiencies and the corresponding scale factors as a function of offline jet p T are shown in Figs. 12, 13, and 14 for 2016, 2017, and 2018, respectively. Efficiencies and scale factors are derived for all combinations of the MV2 algorithm working points used online (40%, 50%, 60%, 70%, 77%, 85%) and the DL1r algorithm working points used offline (60%, 70%, 77%, 85%). The b-tagging conditional efficiency measurements were carried out separately for each year and consistent results were observed over time. The results are shown for two representative combinations (60% and 85% efficiency working points for both the online and offline b-tagging algorithms), for triggers used in 2016 (Fig. 12), 2017 (Fig. 13), and 2018 (Fig. 14). Similar behaviour is observed for all combinations of online and offline working points. The conditional efficiency obtained using the equivalent online and offline working points ranges from approximately 85% in the lowest p T bins (35–45 GeV) to approximately 98% for higher-p T jets. The conditional efficiency measured in data falls to ∼80% for jets with p T > 200 GeV that were recorded in 2016 data and are required to pass the 60% efficiency working point both online and offline, as shown in Fig. 12a. It is noted that the efficiency measured in this region in data is lower than the MC prediction. Similar effects are observed for other combinations of working points in 2016 data, with the efficiencies being lowest for the tightest combinations of working points and recovered for the loosest combinations, for example when the 85% efficiency working point is used both online and offline in Fig. 12c. The scale factors are consistent with unity in most other regions of jet p T and in data taken in other years, illustrating the generally good modelling of the online b-tagging performance, although differences in the scale factors of up to ∼10% are observed in some bins.
Uncertainties in the measurements are calculated following the same procedures as described in Ref.
[11] and any additional sources of uncertainty specific to the trigger were found to be negligible. The total uncertainty in the measurement ranges from < 1% to about 5% across the full jet p T range. Modelling uncertainties are present in both the numerator and the denominator of the conditional efficiency and so tend to cancel out, leaving the statistical uncertainty to dominate the measurement. Due to this cancellation effect, uncertainties in the offline efficiency scale factors tend to dominate over the uncertainties in the online conditional efficiency scale factors in physics analyses. Furthermore, uncertainties in the offline efficiency scale factors tend to be dominated by uncertainties in the light-flavour jet backgrounds, which have a smaller impact on the conditional efficiency scale factors, where the tighter requirements on the denominator mean that the contamination from light-flavour jets is correspondingly smaller. Since the online conditional efficiency measurements use a subset of the dataset used for the offline efficiency measurements, and the same systematic variations are considered for both, uncertainties in the two sets of scale factors are correlated. Few data events satisfy all of the selection criteria described in Sect. 8.1 at very high jet p T , and the statistical uncertainties associated with the results are largest in this region.

Fig. 12 Conditional b-jet trigger efficiencies (a, c) and scale factors (b, d) for the online MV2c20 algorithm for the 60% efficiency working point (a, b) and the 85% efficiency working point (c, d) as measured in 2016 data. Offline jets are required to pass the same efficiency working points as the online jets, but using the DL1r b-tagging algorithm. Vertical error bars include data statistical uncertainties only, whereas the full band corresponds to the sum in quadrature of all uncertainties
For the online-only efficiencies with the tightest working points, the scale of the systematic uncertainty approaches that of the statistical uncertainty. In these cases the total uncertainty on the efficiency measurements reaches 10% in some bins, while the total uncertainty on the scale factors can be up to 20% in some bins. The total uncertainties on the efficiencies and scale factors for looser working points are typically less than 5%. The largest source of systematic uncertainty comes from the modelling of top-quark events, in particular the impact of using a different parton shower and hadronisation model for simulated tt events. This uncertainty was evaluated by comparing the nominal tt sample with another event sample configured with the same setup to produce the matrix elements, but interfaced with Herwig 7.04 [69,70], using the H7UE set of tuned parameters [70] and the MMHT2014lo PDF set [71]. All other systematic uncertainties have a very small impact.
A method for reducing the total number of uncertainties while preserving the bin-by-bin correlations is provided for use in physics analyses by performing an eigenvector decomposition. Versions of the scale factors that have been smoothed in jet p T are also provided in order to prevent distortions in the variables of interest induced by the application of the scale factors. Both the eigenvector decomposition and the smoothing procedure are applied using the method described in Ref. [61].
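The eigenvector decomposition can be illustrated on a small, hypothetical covariance matrix of scale-factor uncertainties in three jet-p T bins; the numbers below are invented for illustration only.

```python
import numpy as np

# Hypothetical covariance matrix of scale-factor uncertainties in three
# correlated jet-pT bins. An eigen-decomposition turns the correlated
# bin-by-bin uncertainties into independent "eigen-variations".
cov = np.array([[0.0009, 0.0004, 0.0001],
                [0.0004, 0.0016, 0.0006],
                [0.0001, 0.0006, 0.0025]])

eigvals, eigvecs = np.linalg.eigh(cov)
# Each eigenvector scaled by the square root of its eigenvalue is one
# independent up/down variation of the scale factors in all bins at once.
variations = eigvecs * np.sqrt(eigvals)

# The sum of outer products of the variations reconstructs the covariance,
# so the reduced set preserves the bin-by-bin correlations:
print(np.allclose(variations @ variations.T, cov))  # True
```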
Conditional efficiencies and scale factors are also provided for jets b-tagged offline with the MV2c10 algorithm, using the same method, but are not presented in this work as the MV2c10 algorithm is now superseded by DL1r. As expected, the conditional efficiencies are up to a few percent higher and the uncertainties are slightly reduced for the tightest combinations of working points when MV2c10 rather than DL1r is used offline, due to increased correlation between the online and offline b-tagging algorithms. Any decrease in the degree of correlation between the taggers when moving from MV2c10 to DL1r for offline b-tagging is more than compensated for in analyses by the improved performance that DL1r offers.

Fig. 13 Conditional b-jet trigger efficiencies (a, c) and scale factors (b, d) for the online MV2c10 algorithm for the 60% efficiency working point (a, b) and the 85% efficiency working point (c, d) as measured in 2017 data. Offline jets are required to pass the same efficiency working points as the online jets, but using the DL1r b-tagging algorithm. Vertical error bars include data statistical uncertainties only, whereas the full band corresponds to the sum in quadrature of all uncertainties
The b-jet trigger conditional efficiency as a function of pile-up is shown for data and simulated tt events in Fig. 15. Offline jets are required to pass the 70% efficiency working point of the DL1r algorithm. Only the statistical uncertainties (which dominate the measurement) are included. The MV2 and DL1r b-tagging algorithms are tuned to provide constant efficiency under conditions of increasing pile-up. Consequently, the conditional efficiencies typically fall by less than 5% over the full range of pile-up conditions for all combinations of online and offline working points, although drops of up to 10% are observed in events with the least pile-up in 2016 data.

Muon-jet triggers
Approximately 20% of b-jets contain a muon from the decay chain of the b-hadron. These muons are typically soft and produced at small angles relative to the axis of the jet (typically within ΔR = 0.5). The low p T of these leptons, together with the additional hadronic activity around them, means that they cannot be selected by the standard ATLAS lepton triggers [15], which include isolation requirements for all but the highest-E T items in order to reject fake-lepton backgrounds. Dedicated triggers are therefore designed to select low-p T muons that are geometrically matched to a jet, a 'muon-jet'. Requiring the presence of a muon-jet in the event increases the rejection power against light-flavour jet backgrounds and allows these semileptonic b-jet triggers to reach lower in jet E T than the standard b-jet triggers.
Muon-jet triggers are used to provide a sample of b-jet-enriched data used to calibrate the b-tagging algorithms used offline, and also have the potential to enhance the acceptance for processes containing a large number of b-jets and/or b-jets with low p T (described in Sect. 9.2). They also provide the only way to select events containing b-jets during lead-ion collision runs, where events typically have a large number of jets and high track multiplicity, and running the standard b-jet triggers is not feasible (described below).

Muon-jet triggers for heavy-ion collisions
One of the open questions regarding the quark-gluon plasma (QGP) created in heavy-ion (HI) collisions at the LHC concerns the energy-loss mechanisms that partons experience while traversing the hot and dense QCD medium [72]. Heavy quarks are produced at the early stages of the ion collisions in scattering processes that involve large momentum transfers, Q, so their formation time, of the order of 1/Q < 0.1 fm/c, is much smaller than the lifetime of the QGP, estimated to be 10–11 fm/c at the LHC [73]. The energy loss of heavy quarks in the QGP is predicted to be smaller than that of light-flavour quarks, due to the suppression of gluon radiation at small angles, the so-called 'dead-cone' effect [9].
In 2018, ATLAS collected 1.42 nb−1 of data from collisions of lead ions with a nucleon-nucleon centre-of-mass energy √s NN = 5.02 TeV. Dedicated triggers were necessary not only to fulfil the specific physics requirements, but also to accommodate the different detector environment during Pb+Pb data-taking, resulting from the intrinsic geometry of the nuclear overlap, which leads to large variations of both track multiplicity and energy density compared with pp runs. During Pb+Pb data-taking it would be prohibitive to run the b-jet triggers developed for pp collisions, owing to the high rates and large CPU cost of triggering in the relevant jet E T range. Muon-jet triggers that require a muon and a jet that are geometrically matched within ΔR < 0.5 are used instead to provide a sample of data events that are enriched in semileptonic b-hadron decays. Several different muon-jet triggers imposing various combinations of muon p T and jet E T thresholds were provided. In most cases these were seeded at L1 by a single muon with p T > 4 or 6 GeV, although in one instance a L1 jet was additionally required. In the HLT, a muon with p T > 4 or 6 GeV within ΔR = 0.5 of a jet with E T > 30, 40, 50, or 60 GeV was required. Jets were reconstructed using the anti-k t algorithm with radius parameter R = 0.2, 0.3 or 0.4, and corrected for the underlying event produced in heavy-ion collisions, as detailed in Ref. [74]. The list of triggers was designed to be optimal within the allocated trigger acceptance rate of approximately 80 Hz and is summarised in Table 10. In order to accommodate the increasing instantaneous luminosity during the data-taking period and ensure that the output rate remained within the rate allocation, the set of triggers that required a muon with p T > 4 GeV and applied no additional jet requirements at L1 was prescaled for some runs. The prescale factors were applied coherently to all of these triggers, with values ranging from 1.0 (i.e. unprescaled) to 1.307.
The average prescale factor across the entire Pb+Pb data-taking period in 2018 was 1.065.
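As a small illustration of how such an average can be formed, the sketch below computes a luminosity-weighted mean prescale and the corresponding prescale-corrected ("effective") luminosity. The per-run luminosities and prescale values are invented for the example; only the endpoints 1.0 and 1.307 come from the text.

```python
# Illustrative only: luminosity-weighted average prescale over runs.
# The run luminosities and intermediate prescale value are hypothetical.
runs = [
    {"lumi_nb": 0.50, "prescale": 1.0},    # unprescaled runs
    {"lumi_nb": 0.60, "prescale": 1.1},    # hypothetical intermediate value
    {"lumi_nb": 0.32, "prescale": 1.307},  # highest prescale applied
]

total_lumi = sum(r["lumi_nb"] for r in runs)
avg_prescale = sum(r["lumi_nb"] * r["prescale"] for r in runs) / total_lumi

# Effective luminosity actually recorded by the prescaled trigger:
# each run contributes its luminosity divided by its prescale.
effective_lumi = sum(r["lumi_nb"] / r["prescale"] for r in runs)
```

A prescale of N means the trigger records on average one out of every N events passing its selection, so the effective luminosity is always at most the delivered luminosity.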
The HLT conditional muon-jet trigger efficiency is defined as the number of offline muon-jet objects satisfying the muon-jet trigger requirements, divided by the total number of offline muon-jets that fired a single-muon trigger with the same p T threshold as the muon-jet trigger used in the numerator. The offline muon-jets are constructed from muons passing a 'Tight' identification working point (corresponding to an efficiency of approximately 90% for true muons in MC events) [66], matched to a jet within a distance of ΔR < 0.5. Both the muon and the jet are required to have |η| < 2.4. The conditional muon-jet trigger efficiency for a given HLT muon-jet trigger is therefore defined as

ε^{cond}_{μj} = N^{Trig+Off}_{μj} / N^{Trig+Off}_{μ} ,   (1)

where N^{Trig+Off}_{μj} is the number of muon-jet objects passing the HLT and offline muon-jet selections, and N^{Trig+Off}_{μ} is the number of muon-jets passing the HLT and offline muon requirements.
The events passing the muon-jet trigger are an exact subset of the events that pass the single-muon trigger with the same p T threshold, and so the absolute muon-jet trigger efficiency can be defined as the product of the conditional trigger efficiency given in Eq. (1) and the single-muon trigger efficiency (ε μ ), which was measured using the method described in Ref. [75]:

ε_{μj} = ε^{cond}_{μj} × ε_{μ} .   (2)

The performance of the muon-jet trigger is constrained by the limited acceptance of the L1 trigger, based on the information received from the calorimeters and muon trigger chambers. The geometric coverage of the latter is ∼99% in the endcap regions (1.05 < |η| < 2.40) and ∼80% in the barrel region (|η| < 1.05) [65]. The measurements are therefore made separately in the two pseudorapidity ranges. The efficiency is also measured for different categories of collision centrality, in order to account for a possible decrease in performance due to the characteristics of Pb+Pb collisions. The centrality of a collision is assessed on an event-by-event basis using the E T deposited in the forward calorimeters, E T FCal , in 3.2 ≤ |η| < 4.9. The Glauber MC model [76] is used to obtain a correspondence between E T FCal and the sampling fraction of the total inelastic Pb+Pb cross-section, allowing centrality percentiles to be defined [77]. In this analysis, central collisions are defined as those in the 0–40% centrality interval, where the contribution from underlying-event effects is largest. Peripheral collisions are those within the 40–80% centrality interval.
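The percentile-based classification can be sketched as follows. This is not the Glauber-calibrated ATLAS mapping; the E T FCal spectrum below is a toy distribution, and the example shows only the generic step of turning a spectrum into centrality classes via quantile boundaries.

```python
import numpy as np

# Toy E_T^FCal spectrum (arbitrary units); in the real analysis the
# boundaries come from a Glauber-model calibration, not raw quantiles.
rng = np.random.default_rng(seed=1)
fcal_et = rng.exponential(scale=1.0, size=100_000)

# Higher E_T^FCal corresponds to more central collisions, so the 0-40%
# most central events lie above the 60th percentile of the spectrum,
# and the 40-80% interval spans the 20th-60th percentiles.
central_cut = np.quantile(fcal_et, 0.60)
peripheral_cut = np.quantile(fcal_et, 0.20)

def centrality_class(et):
    """Classify a single event by its forward-calorimeter E_T."""
    if et >= central_cut:
        return "central"       # 0-40% centrality interval
    if et >= peripheral_cut:
        return "peripheral"    # 40-80% centrality interval
    return "other"             # 80-100%, not used in this measurement

frac_central = np.mean(fcal_et >= central_cut)
```

By construction, about 40% of events fall into the "central" class, matching the 0-40% definition.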
The performance of muon-jet triggers where the muon p T threshold is 4 GeV and the muon must be within ΔR = 0.5 of a jet passing an E T threshold of 40, 50 or 60 GeV is presented relative to the single-muon trigger that requires a muon with p T > 4 GeV at L1 and in the HLT. The efficiency of this single-muon trigger was measured in Ref. [75] to be approximately 80% and 85% in the barrel region, for central and peripheral collisions, respectively. This low efficiency is a consequence of the lower acceptance of the L1 trigger in the barrel. In the endcap region the efficiency is noticeably higher, reaching 97%, and is less sensitive to the centrality of the collision. Figure 16 compares the efficiency of the three muon-jet triggers as a function of the offline jet p T for events passing the single-muon trigger and containing an offline muon with p T > 12 GeV. In peripheral collisions and in the barrel region the efficiency is above 99% for offline jets with p T larger than 46, 59, and 66 GeV (for triggers with 40, 50, and 60 GeV jet E T thresholds, respectively). The efficiency saturates at slightly higher jet p T values in the endcap region. In central collisions the turn-on is slower than in peripheral collisions and the plateau of full efficiency starts at higher p T values. This sensitivity to the centrality of the collisions is also observed in inclusive jet trigger efficiency measurements. Figure 17 shows the two-dimensional absolute trigger efficiency, as defined in Eq. (2), for a muon-jet trigger requiring a muon with p T > 4 GeV and a jet with E T > 40 GeV, as a function of the offline muon p T and jet p T . The efficiency of this trigger reaches a maximum for offline jet p T ≳ 60 GeV but does not reach 100% in most regions. This lower efficiency, particularly in the barrel region, compared with the conditional efficiency shown in Fig. 16, reflects the inefficiency of the muon trigger.
The fraction of selected jets that originate from a b-hadron decay is determined using a template fit to the p rel T distribution, defined as the transverse momentum of the muon with respect to the combined muon-jet axis. Muons from a b-hadron decay typically have a harder p rel T spectrum than those originating from light-flavour or c-jets. A template for the p rel T distribution of light-flavour jets is derived from a light-flavour-enriched data sample, obtained by selecting pairs of tracks and jets from data collected with a single-jet trigger with an E T threshold of 50 GeV in the HLT, using the method described in Ref. [78]. Templates for the b-jet and c-jet distributions are obtained from simulated di-jet events, generated with Pythia 8.230 [29] using the NNPDF2.3lo PDF set and the A14 [30] set of tuned parameters. The samples are filtered to require one muon with p T > 3 GeV at generator level. The templates for b-, c-, and light-flavour jets are then fitted to a data sample selected using a muon-jet trigger that requires a muon with p T > 4 GeV and a jet with R = 0.4 and E T > 40 GeV that is within ΔR = 0.5 of the muon in the HLT. Offline muons are required to have p T > 4 GeV and |η| < 2.4, to pass the 'Tight' identification working point [66], and to be geometrically matched to the trigger muon within ΔR < 0.01. Offline jets are reconstructed with radius parameter R = 0.2, and are required to have |η| < 2.1, be within ΔR < 0.2 of the muon, and have p T > 58 GeV (where this minimum p T requirement is determined by the point at which the trigger is 99% efficient). In this way, the relative fraction of each jet flavour is determined, and a b-jet fraction of 30%-35% is measured, with no centrality dependence observed.
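The mechanics of such a binned template fit can be sketched with numpy. The template shapes and pseudo-data below are toy inputs (not the measured distributions); the example only demonstrates the generic step of fitting flavour yields as a linear combination of normalised templates and extracting the b-jet fraction.

```python
import numpy as np

# Toy p_T^rel binning [GeV] and bin centres.
bins = np.linspace(0.0, 5.0, 11)
centres = 0.5 * (bins[:-1] + bins[1:])

def norm(h):
    """Normalise a histogram to unit area (template shape)."""
    return h / h.sum()

# Invented shapes: b-jets give a harder p_T^rel spectrum than c/light jets.
t_b = norm(centres * np.exp(-centres / 1.5))
t_c = norm(np.exp(-centres / 1.0))
t_l = norm(np.exp(-centres / 0.6))

# Build a pseudo-dataset from known flavour yields, then fit them back.
true_yields = np.array([3300.0, 2200.0, 4500.0])   # b, c, light
data = true_yields @ np.vstack([t_b, t_c, t_l])

# Least-squares fit of the three yields to the data histogram.
templates = np.vstack([t_b, t_c, t_l]).T           # shape (n_bins, 3)
fitted_yields, *_ = np.linalg.lstsq(templates, data, rcond=None)
b_fraction = fitted_yields[0] / fitted_yields.sum()
```

In a real measurement the fit would use a likelihood with per-bin statistical uncertainties, but the extracted quantity, the b-jet fraction of the selected sample, is the same.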
Comparable b-jet fractions were measured when repeating the analysis on 48.8 pb−1 of pp collision data at √s = 5.02 TeV collected during a special run in 2017, using offline jets with radii of 0.2 and 0.4. The muon and jet are estimated to come from different hard-scatter interactions in 1% of events in central Pb+Pb collisions, which produce the most demanding environments. This fake rate is smaller in peripheral collisions.

Muon-jet triggers for proton-proton collisions
Triggers with similar design and thresholds to those detailed in Table 10 were run prescaled during pp collision data-taking in order to collect a sample of data enriched in bb decays, which is used to calibrate the offline flavour-tagging algorithms. In these cases, muon-jet triggers are seeded from either a single-muon or a muon-plus-jet requirement at L1. In the HLT, muons are required to satisfy ΔR(μ, jet) < 0.5 and Δz(μ, jet) < 2 mm (where the z-position of the jet is taken to be the primary-vertex z-position) in order to be considered as 'matched' to a jet.
Muon-jet triggers to select interesting physics processes were also provided during 2016 data-taking, but were discontinued due to their prohibitively large CPU cost. For these triggers it was desirable to exploit other characteristic features of the process of interest, for example by placing additional requirements on the multiplicity, E T , and b-tagging weight of other jets in the event. In these cases, only jets that failed the matching requirements with the muon were considered for further processing (e.g. b-tagging) by the b-jet trigger software. The muon-jet can therefore form one component of a more complex trigger, for example by requiring that an event contains some combination of muon-jet(s), b-tagged jet(s), untagged (light-flavour) jet(s), or any other object that ATLAS is able to trigger on. These muon-jet triggers have the potential to be beneficial for analyses using pp collision data that have large b-jet multiplicity (e.g. H H → bbbb), and/or for those that only have low-p T b-jets, e.g. bφ(φ → bb).

[Displaced caption of Fig. 17: The two-dimensional absolute trigger efficiency, ε μj , as defined in Eq. (2), of the muon-jet trigger that requires a muon with p T > 4 GeV and a jet with E T > 40 GeV, as a function of the offline muon p T and jet p T . The measurements are performed inclusively across the full collision centrality range (0-80%) for the (a) barrel and (b) endcap regions. The last bins of both the x- and y-axes contain overflow.]

Summary
ATLAS has successfully operated b-jet triggers throughout Runs 1 and 2 of the LHC. The b-jet trigger software was completely redesigned during the long shutdown period that followed Run 1, was validated during 2015 data-taking, and became fully operational in 2016. The software uses a two-stage approach to improve primary-vertex finding and ensure stability under increasingly harsh pile-up conditions, and deploys state-of-the-art offline b-tagging algorithms in the HLT. These changes, together with improved tracking performance in the trigger and the installation of the insertable B-layer for Run 2, led to significantly improved performance compared to Run 1. Light-flavour jet rejection was improved by an order of magnitude for the same b-jet selection efficiency in 2016 compared with the b-jet triggers used in Run 1. An additional factor of ∼1.5 in light-flavour jet rejection was achieved in 2017 and 2018 by further optimising the use of the MV2 algorithm in the HLT, while simultaneously reoptimising the software to reduce the total CPU processing time by ∼30%. These improvements allowed ATLAS to maintain the E T thresholds and b-tagging working points of the b-jet triggers throughout Run 2, in spite of the increasingly harsh pile-up conditions. The same likelihood-based method that is used to calibrate the offline b-tagging algorithms in ATLAS was adapted for use with the b-jet triggers for the first time. Conditional efficiencies are measured in data and evaluated in simulation for different combinations of online and offline working points for each year of data-taking (2016-2018). The conditional efficiencies are typically in the range 85%-97%, depending on the combination of working points considered. Good agreement of MC simulation with data is generally observed, and scale factors are provided to correct the simulation to match the data.
The use of the likelihood method provides a substantial reduction in uncertainties compared to the geometrical matching approaches used previously, enabling the conditional efficiencies to be measured with a typical accuracy of a few percent.
Specially designed b-jet triggers were also deployed for the first time during Pb+Pb data-taking in 2018, by adapting the b-jet trigger software to identify semileptonic b-hadron decays through the selection of muons geometrically matched to a jet. These triggers reach an efficiency of >99%, with respect to both the single-muon trigger and the offline requirements, above the jet E T turn-on region, and provide a mechanism to study the flavour dependence of radiative quark energy loss in the quark-gluon plasma, where the busy detector environment made it unfeasible to run the standard b-jet triggers [79].

Data Availability Statement
This manuscript has no associated data or the data will not be deposited. [Authors' comment: "All ATLAS scientific output is published in journals, and preliminary results are made available in Conference Notes. All are openly available, without restriction on use by external parties beyond copyright law and the standard conditions agreed by CERN. Data associated with journal publications are also made available: tables and data from plots (e.g. cross section values, likelihood profiles, selection efficiencies, cross section limits, ...) are stored in appropriate repositories such as HEPDATA (http://hepdata.cedar.ac.uk/). ATLAS also strives to make additional material related to the paper available that allows a reinterpretation of the data in the context of new theoretical models. For example, an extended encapsulation of the analysis is often provided for measurements in the framework of RIVET (http://rivet.hepforge.org/)." This information is taken from the ATLAS Data Access Policy, which is a public document that can be downloaded from http://opendata.cern.ch/record/413 [opendata.cern.ch].] Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Funded by SCOAP3.