Performance of the ATLAS Trigger System in 2010

Proton–proton collisions at √s = 7 TeV and heavy ion collisions at √s_NN = 2.76 TeV were produced by the LHC and recorded using the ATLAS experiment’s trigger system in 2010. The LHC is designed with a maximum bunch crossing rate of 40 MHz and the ATLAS trigger system is designed to record approximately 200 of these per second. The trigger system selects events by rapidly identifying signatures of muon, electron, photon, tau lepton, jet, and B meson candidates, as well as using global event signatures, such as missing transverse energy. An overview of the ATLAS trigger system, the evolution of the system during 2010 and the performance of the trigger system components and selections based on the 2010 collision data are shown. A brief outline of plans for the trigger system in 2011 is presented.


Introduction
ATLAS [1] is one of two general-purpose experiments recording LHC [2] collisions to study the Standard Model (SM) and search for physics beyond the SM. The LHC is designed to operate at a centre of mass energy of √s = 14 TeV in proton-proton (pp) collision mode with an instantaneous luminosity L = 10^34 cm^−2 s^−1 and at √s_NN = 2.76 TeV in heavy-ion (PbPb) collision mode with L = 10^31 cm^−2 s^−1. The LHC started single-beam operation in 2008 and achieved first collisions in 2009. During a prolonged period of pp collision operation in 2010 at √s = 7 TeV, ATLAS collected 45 pb^−1 of data with luminosities ranging from 10^27 cm^−2 s^−1 to 2 × 10^32 cm^−2 s^−1. The pp running was followed by a short period of heavy ion running at √s_NN = 2.76 TeV in which ATLAS collected 9.2 µb^−1 of PbPb collisions.
Focusing mainly on the pp running, the performance of the ATLAS trigger system during 2010 LHC operation is presented in this paper. The ATLAS trigger system is designed to record events at approximately 200 Hz from the LHC's 40 MHz bunch crossing rate. The system has three levels: the first level (L1) is a hardware-based system using information from the calorimeter and muon subdetectors; the second (L2) and third (Event Filter, EF) levels are software-based systems using information from all subdetectors. Together, L2 and EF are called the High Level Trigger (HLT).
For each bunch crossing, the trigger system verifies if at least one of hundreds of conditions (triggers) is satisfied. The triggers are based on identifying combinations of candidate physics objects (signatures) such as electrons, photons, muons, jets, jets with b-flavour tagging (b-jets) or specific B-physics decay modes. In addition, there are triggers for inelastic pp collisions (minbias) and triggers based on global event properties such as missing transverse energy (E_T^miss) and summed transverse energy (ΣE_T). In Sect. 2, following a brief introduction to the ATLAS detector, an overview of the ATLAS trigger system is given and the terminology used in the remainder of the paper is explained. Section 3 presents a description of the trigger system commissioning with cosmic rays, single-beams, and collisions. Section 4 provides a brief description of the L1 trigger system. Section 5 introduces the reconstruction algorithms used in the HLT to process information from the calorimeters, muon spectrometer, and inner detector tracking detectors. The performance of the trigger signatures, including rates and efficiencies, is described in Sect. 6. Section 7 describes the overall performance of the trigger system. The plans for the trigger system operation in 2011 are described in Sect. 8.

Overview
The ATLAS detector [1], shown in Fig. 1, has a cylindrical geometry which covers almost the entire solid angle around the nominal interaction point. The coordinate system has its z-axis along the beam pipe, such that pseudorapidity η ≡ −ln(tan(θ/2)); the positive x-axis is defined as pointing from the interaction point towards the centre of the LHC ring and the positive y-axis is defined as pointing upwards. The azimuthal degree of freedom is denoted φ. Owing to its cylindrical geometry, detector components are described as being part of the barrel if they are in the central region of pseudorapidity or part of the end-caps if they are in the forward regions. The ATLAS detector is composed of the following sub-detectors:

Inner detector: The Inner Detector tracker (ID) consists of a silicon pixel detector nearest the beam-pipe, surrounded by a SemiConductor Tracker (SCT) and a Transition Radiation Tracker (TRT). Both the Pixel and SCT cover the region |η| < 2.5, while the TRT covers |η| < 2. The ID is contained in a 2 Tesla solenoidal magnetic field. Although not used in the L1 trigger system, tracking information is a key ingredient of the HLT.

Calorimeter: The calorimeters cover the region |η| < 4.9 and consist of electromagnetic (EM) and hadronic (HCAL) calorimeters. The EM, Hadronic End-Cap (HEC) and Forward Calorimeters (FCal) use a Liquid Argon and absorber technology (LAr). The central hadronic calorimeter is based on steel absorber interleaved with plastic scintillator (Tile). A presampler is installed in front of the EM calorimeter for |η| < 1.8. There are two separate readout paths: one with coarse granularity (trigger towers) used by L1, and one with fine granularity used by the HLT and offline reconstruction.

Muon spectrometer: The Muon Spectrometer (MS) detectors are mounted in and around air core toroids that generate an average field of 0.5 T in the barrel and 1 T in the end-cap regions.
Precision tracking information is provided by Monitored Drift Tubes (MDT) over the region |η| < 2.7 (|η| < 2.0 for the innermost layer) and by Cathode Strip Chambers (CSC) in the region 2 < |η| < 2.7. Information is provided to the L1 trigger system by the Resistive Plate Chambers (RPC) in the barrel (|η| < 1.05) and the Thin Gap Chambers (TGC) in the end-caps (1.05 < |η| < 2.4).

Specialized detectors: Electrostatic beam pick-up devices (BPTX) are located at z = ±175 m. The Beam Conditions Monitor (BCM) consists of two stations containing diamond sensors located at z = ±1.84 m, corresponding to |η| ≈ 4.2. There are two forward detectors, the LUCID Cherenkov counter covering 5.4 < |η| < 5.9 and the Zero Degree Calorimeter (ZDC) covering |η| > 8.3. The Minimum Bias Trigger Scintillators (MBTS), consisting of two scintillator wheels with 32 counters mounted in front of the calorimeter end-caps, cover 2.1 < |η| < 3.8.
When operating at the design luminosity of 10^34 cm^−2 s^−1 the LHC will have a 40 MHz bunch crossing rate, with an average of 25 interactions per bunch crossing. The purpose of the trigger system is to reduce this input rate to an output rate of about 200 Hz for recording and offline processing. This limit, corresponding to an average data rate of ∼300 MB/s, is determined by the computing resources for offline storage and processing of the data. It is possible to record data at significantly higher rates for short periods of time. For example, during 2010 running there were physics benefits from running the trigger system with output rates of up to ∼600 Hz. During runs with instantaneous luminosity ∼10^32 cm^−2 s^−1, the average event size was ∼1.3 MB.
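The bandwidth arithmetic above is simple enough to check directly. A minimal sketch, assuming only the rates and event sizes quoted in the text (the function name is ours, not part of any ATLAS software):

```python
def data_rate_mb_per_s(output_rate_hz: float, event_size_mb: float) -> float:
    """Average offline data rate implied by a trigger output rate and event size."""
    return output_rate_hz * event_size_mb

# With the ~1.3 MB events seen at L ~ 10^32 cm^-2 s^-1, a 200 Hz output
# rate corresponds to roughly 260 MB/s; the quoted ~300 MB/s budget matches
# 200 Hz at a nominal ~1.5 MB per event.
rate_2010 = data_rate_mb_per_s(200, 1.3)
```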
A schematic diagram of the ATLAS trigger system is shown in Fig. 2. Detector signals are stored in front-end pipelines pending a decision from the L1 trigger system. In order to achieve a latency of less than 2.5 µs, the L1 trigger system is implemented in fast custom electronics. The L1 trigger system is designed to reduce the rate to a maximum of 75 kHz. In 2010 running, the maximum L1 rate did not exceed 30 kHz. In addition to performing the first selection step, the L1 triggers identify Regions of Interest (RoIs) within the detector to be investigated by the HLT.
The HLT consists of farms of commodity processors connected by fast dedicated networks (Gigabit and 10 Gigabit Ethernet). During 2010 running, the HLT processing farm consisted of about 800 nodes configurable as either L2 or EF and 300 dedicated EF nodes. Each node consisted of eight processor cores, the majority with a 2.4 GHz clock speed. The system is designed to expand to about 500 L2 nodes and 1800 EF nodes for running at LHC design luminosity. When an event is accepted by the L1 trigger (referred to as an L1 accept), data from each detector are transferred to the detector-specific Readout Buffers (ROB), which store the event in fragments pending the L2 decision. One or more ROBs are grouped into Readout Systems (ROS) which are connected to the HLT networks. The L2 selection is based on fast custom algorithms processing partial event data within the RoIs identified by L1. The L2 processors request data from the ROS corresponding to detector elements inside each RoI, reducing the amount of data to be transferred and processed in L2 to 2–6% of the total data volume. The L2 triggers reduce the rate to ∼3 kHz with an average processing time of ∼40 ms/event. Any event with an L2 processing time exceeding 5 s is recorded as a timeout event. During runs with instantaneous luminosity ∼10^32 cm^−2 s^−1, the average processing time of L2 was ∼50 ms/event (Sect. 7).
The Event Builder assembles all event fragments from the ROBs for events accepted by L2, providing full event information to the EF. The EF is mostly based on offline algorithms invoked from custom interfaces for running in the trigger system. The EF is designed to reduce the rate to ∼200 Hz with an average processing time of ∼4 s/event. Any event with an EF processing time exceeding 180 s is recorded as a timeout event. During runs with instantaneous luminosity ∼10^32 cm^−2 s^−1, the average processing time of EF was ∼0.4 s/event (Sect. 7). Data for events selected by the trigger system are written to inclusive data streams based on the trigger type. There are four primary physics streams, Egamma, Muons, JetTauEtmiss, MinBias, plus several additional calibration streams. Overlaps and rates for these streams are shown in Sect. 7. About 10% of events are written to an express stream where prompt offline reconstruction provides calibration and Data Quality (DQ) information prior to the reconstruction of the physics streams. In addition to writing complete events to a stream, it is also possible to write partial information from one or more sub-detectors into a stream. Such events, used for detector calibration, are written to the calibration streams.
The trigger system is configured via a trigger menu which defines trigger chains that start from a L1 trigger and specify a sequence of reconstruction and selection steps for the specific trigger signatures required in the trigger chain. A trigger chain is often referred to simply as a trigger. Figure 3 shows an illustration of a trigger chain to select electrons. Each chain is composed of Feature Extraction (FEX) algorithms, which create the objects (like calorimeter clusters), and Hypothesis (HYPO) algorithms, which apply selection criteria to the objects (e.g. transverse momentum greater than 20 GeV). Caching in the trigger system allows features extracted from one chain to be re-used in another chain, reducing both the data access and processing time of the trigger system. Approximately 500 triggers are defined in the current trigger menus. Table 1 shows the key physics objects identified by the trigger system, gives the shortened names used to represent them in the trigger menu at L1 and the HLT, and lists the L1 thresholds used for each trigger signature in the menu at L = 10^32 cm^−2 s^−1; thresholds are applied to transverse energy (E_T) for calorimeter triggers and transverse momentum (p_T) for muon triggers. The menu is composed of a number of different classes of trigger:

Single object triggers: used for final states with at least one characteristic object. For example, a single muon trigger with a nominal 6 GeV threshold is referred to in the trigger menu as mu6.

Multiple object triggers: used for final states with two or more characteristic objects of the same type. For example, di-muon triggers for selecting J/ψ → μμ decays. Triggers requiring a multiplicity of two or more are indicated in the trigger menu by prepending the multiplicity to the trigger name, as in 2mu6.
Combined triggers: used for final states with two or more characteristic objects of different types. For example, a 13 GeV muon plus 20 GeV missing transverse energy (E_T^miss) trigger for selecting W → μν decays would be denoted mu13_xe20.
Topological triggers: used for final states that require selections based on information from two or more RoIs. For example, the J/ψ → μμ trigger combines tracks from two muon RoIs.

When referring to a particular level of a trigger, the level (L1, L2 or EF) appears as a prefix, so L1_MU6 refers to the L1 trigger item with a 6 GeV threshold and L2_mu6 refers to the L2 trigger item with a 6 GeV threshold. A name without a level prefix refers to the whole trigger chain. Trigger rates can be controlled by changing thresholds or applying different sets of selection cuts. The selectivity of a set of cuts applied to a given trigger object in the menu is represented by the terms loose, medium, and tight. The selection criterion is suffixed to the trigger name, for example e10_medium. Additional requirements, such as isolation, can also be imposed to reduce the rate of some triggers. Isolation is a measure of the amount of energy or number of particles near a signature. For example, the amount of transverse energy (E_T) deposited in the calorimeter within ΔR ≡ √((Δη)² + (Δφ)²) < 0.2 of a muon is a measure of the muon isolation. Isolation is indicated in the trigger menu by an i appended to the trigger name (capital I for L1), for example L1_EM20I or e20i_tight. Isolation was not used in any primary triggers in 2010 (see below).
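The isolation variable just described can be sketched in a few lines. This is an illustrative reimplementation, not ATLAS code; the deposit-list format is our assumption:

```python
import math

def delta_r(eta1: float, phi1: float, eta2: float, phi2: float) -> float:
    """Angular separation ΔR = sqrt(Δη² + Δφ²), with Δφ wrapped into [−π, π)."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def muon_isolation_et(mu_eta, mu_phi, calo_deposits, cone=0.2):
    """Sum the calorimeter E_T (GeV) inside ΔR < 0.2 around a muon.
    calo_deposits is a hypothetical list of (eta, phi, et) tuples."""
    return sum(et for eta, phi, et in calo_deposits
               if delta_r(mu_eta, mu_phi, eta, phi) < cone)
```

A well-isolated muon accumulates little E_T in the cone; a cut on this sum is what the trailing `i` in a trigger name implies.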
Prescale factors can be applied to each L1 trigger and each HLT chain, such that only 1 in N events passing the trigger causes an event to be accepted at that trigger level. Prescales can also be set so as to disable specific chains. Prescales control the rate and composition of the express stream. A series of L1 and HLT prescale sets, covering a range of luminosities, are defined to accompany each menu. These prescales are auto-generated based on a set of rules that take into account the priority for each trigger within the following categories: Primary triggers: principal physics triggers, which should not be prescaled. Supporting triggers: triggers important to support the primary triggers, e.g. orthogonal triggers for efficiency measurements or lower E T threshold, prescaled versions of primary triggers. Monitoring and calibration triggers: to collect data to ensure the correct operation of the trigger and detector, including detector calibrations.
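The 1-in-N prescale behaviour can be modelled with a simple counter. This is a sketch under the assumption of a deterministic counter; it is not the actual CTP or HLT implementation:

```python
class Prescaler:
    """Accept 1 in N events passing a trigger; N <= 0 disables the chain.
    Illustrative counter-based model of the prescale scheme described above."""

    def __init__(self, n: int):
        self.n = n
        self.count = 0

    def accept(self) -> bool:
        if self.n <= 0:            # prescale set so as to disable the chain
            return False
        self.count += 1
        if self.count >= self.n:   # every N-th passing event is kept
            self.count = 0
            return True
        return False
```

With N = 1 every passing event is accepted (an unprescaled trigger); raising N reduces the recorded rate by a factor N without changing the selection itself.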
Prescale changes are applied as luminosity drops during an LHC fill, in order to maximize the bandwidth for physics, while ensuring a constant rate for monitoring and calibration triggers. Prescale changes can be applied at any point during a run, taking effect at the beginning of a new luminosity block (LB). A luminosity block is the fundamental unit of time for the luminosity measurement and was approximately 120 seconds long in 2010 data-taking.
Further flexibility is provided by defining bunch groups, which allow triggers to include specific requirements on the LHC bunches colliding in ATLAS. These requirements include paired (colliding) bunches for physics triggers and empty bunches for cosmic ray, random noise and pedestal triggers. More complex schemes are possible, such as requiring unpaired bunches separated by at least 75 ns from any bunch in the other beam.

Datasets used for performance measurements
During 2010 the LHC delivered a total integrated luminosity of 48.1 pb^−1 to ATLAS during stable beams in √s = 7 TeV pp collisions, of which 45 pb^−1 was recorded. Unless otherwise stated, the analyses presented in this publication are based on the full 2010 dataset. To ensure the quality of data, events are required to pass data quality (DQ) conditions that include stable beams and good status for the relevant detectors and triggers. The cumulative luminosities delivered by the LHC and recorded by ATLAS are shown as a function of time in Fig. 4.
In order to compare trigger performance between data and MC simulation, a number of MC samples were generated. The MC samples used were produced using the PYTHIA [3] event generator with a parameter set [4] tuned to describe the underlying event and minimum bias data from Tevatron measurements at 0.63 TeV and 1.8 TeV. The generated events were processed through a GEANT4 [5] based simulation of the ATLAS detector [6].
In some cases, where explicitly mentioned, performance results are shown for a subset of the data corresponding to a specific period of time. The 2010 run was split into data-taking periods, a new period being defined when there was a significant change in the detector conditions or instantaneous luminosity. The data-taking periods are summarized in Table 2. The rise in luminosity during the year was accompanied by an increase in the number of proton bunches injected into the LHC ring. From the end of September (Period G onwards) the protons were injected in bunch trains, each consisting of a number of proton bunches separated by 150 ns.

Commissioning
In this section, the steps followed to commission the trigger are outlined and the trigger menus employed during the commissioning phase are described. The physics trigger menu, deployed in July 2010, is also presented and the evolution of the menu during the subsequent 2010 data-taking period is described.

Early commissioning
The commissioning of the ATLAS trigger system started before the first LHC beam using cosmic ray events and, to commission L1, test pulses injected into the detector frontend electronics. To exercise the data acquisition system and HLT, simulated collision data were inserted into the ROS and processed through the whole online chain. This procedure provided the first full-scale test of the HLT selection software running on the online system. The L1 trigger system was exercised for the first time with beam during single beam commissioning runs in 2008. Some of these runs included so-called splash events for which the proton beam was intentionally brought into collision with the collimators upstream from the experiment in order to generate very large particle multiplicities that could be used for detector commissioning. During this short period of single-beam data-taking, the HLT algorithms were tested offline.
Following the single beam data-taking in 2008, there was a period of cosmic ray data-taking, during which the HLT algorithms ran online. In addition to testing the selection algorithms used for collision data-taking, triggers specifically developed for cosmic ray data-taking were included. The latter were used to select and record a very large sample of cosmic ray events, which were invaluable for the commissioning and alignment of the detector sub-systems such as the inner detector and the muon spectrometer [7].

Commissioning with colliding beams
Specialized commissioning trigger menus were developed for the early collision running in 2009 and 2010. These menus consisted mainly of L1-based triggers since the initial low interaction rate, of the order of a few Hz, allowed all events passing L1 to be recorded. Initially, the L1 MBTS trigger (Sect. 6.1) was unprescaled and acted as the primary physics trigger, recording all interactions. Once the luminosity exceeded ∼2 × 10^27 cm^−2 s^−1, the L1 MBTS trigger was prescaled and the lowest threshold muon and calorimeter triggers became the primary physics triggers. With further luminosity increase, these triggers were also prescaled and higher threshold triggers, which were included in the commissioning menus in readiness, became the primary physics triggers. A coincidence with filled bunch crossing was required for the physics triggers. In addition, the menus contained non-collision triggers which required a coincidence with an empty or unpaired bunch crossing. For most of the lowest threshold physics triggers, a corresponding non-collision trigger was included in the menus to be used for background studies. The menus also contained a large number of supporting triggers needed for commissioning the L1 trigger system.
In the commissioning menus, event streaming was based on the L1 trigger categories. Three main inclusive physics streams were recorded: L1Calo for calorimeter-based triggers, L1Muon for triggers coming from the muon system and L1MinBias for events triggered by minimum bias detectors such as MBTS, LUCID and ZDC. In addition to these L1-based physics streams, the express stream was also recorded. Its content evolved significantly during the first weeks of data-taking. In the early data-taking, it comprised a random 10-20% of all triggered events in order to exercise the offline express stream processing system. Subsequently, the content was changed to enhance the proportion of electron, muon, and jet triggers. Finally, a small set of triggers of each trigger type was sent to the express stream. For each individual trigger, the fraction contributing to the express stream was adjustable by means of dedicated prescale values. The use of the express stream for data quality assessment and for calibration prior to offline reconstruction of the physics streams was commissioned during this period.

HLT commissioning
The HLT commissioning proceeded in several steps. During the very first collision data-taking at √s = 900 GeV in 2009, no HLT algorithms were run online. Instead they were exercised offline on collision events recorded in the express stream. Results were carefully checked to confirm that the trigger algorithms were functioning correctly and the algorithm execution times were evaluated to verify that timeouts would not occur during online running.
After a few days of running offline, and having verified that the algorithms behaved as expected, the HLT algorithms were deployed online in monitoring mode. In this mode, the HLT algorithms ran online, producing trigger objects (e.g. calorimeter clusters and tracks) and a trigger decision at the HLT; however events were selected based solely on their L1 decision. Operating first in monitoring mode allowed each trigger to be validated before the trigger was put into active rejection mode. Recording the HLT objects and decision in each event allowed the efficiency of each trigger chain to be measured with respect to offline reconstruction. In addition a rejection factor, defined as input rate over output rate, could be evaluated for each trigger chain at L2 and EF. Running the HLT algorithms online also allowed the online trigger monitoring system to be exercised and commissioned under real circumstances.
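The rejection factor defined above is a simple ratio. For instance, using the design rates quoted in Sect. 2 (the function name is ours):

```python
def rejection_factor(input_rate_hz: float, output_rate_hz: float) -> float:
    """Rejection factor of a trigger level: input rate over output rate."""
    if output_rate_hz <= 0:
        raise ValueError("output rate must be positive")
    return input_rate_hz / output_rate_hz

# Design rates from Sect. 2: L1 (max 75 kHz) -> L2 (~3 kHz) -> EF (~200 Hz)
l2_rejection = rejection_factor(75_000, 3_000)  # factor of 25
ef_rejection = rejection_factor(3_000, 200)     # factor of 15
```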
Triggers can be set in monitoring or active rejection mode individually. This important feature allowed individual triggers to be put into active rejection mode as luminosity increased and trigger rates exceeded allocated maximum values. The first HLT trigger to be enabled for active rejection was a minimum bias trigger chain (mbSpTrk) based on a random bunch crossing trigger at L1 and an ID-based selection on track multiplicity at the HLT (Sect. 6.1). This trigger was already in active rejection mode in 2009. Figure 5 illustrates the enabling of active HLT rejection during the first √s = 7 TeV collision run, in March 2010. Since the HLT algorithms were disabled at the start of the run, the L1 and EF trigger rates were initially the same. The HLT algorithms were turned on, following rapid validation from offline processing, approximately two hours after the start of collisions, at about 15:00. All trigger chains were in monitoring mode apart from the mbSpTrk chain, which was in active rejection mode. However the random L1 trigger that forms the input to the mbSpTrk chain was disabled for the first part of the run and so the L1 and EF trigger rates remained the same until around 15:30 when this random L1 trigger was enabled. At this time there was a significant increase in the L1 rate, but the EF trigger rate stayed approximately constant due to the rejection by the mbSpTrk chain.
During the first months of 2010 data-taking, the LHC peak luminosity increased from 10^27 cm^−2 s^−1 to 10^29 cm^−2 s^−1. This luminosity was sufficiently low to allow the HLT to continue to run in monitoring mode, and trigger rates were controlled by applying prescale factors at L1. Once the peak luminosity delivered by the LHC reached 1.2 × 10^29 cm^−2 s^−1, it was necessary to enable HLT rejection for the highest rate L1 triggers. As luminosity progressively increased, more triggers were put into active rejection mode.
In addition to physics and commissioning triggers, a set of HLT-based calibration chains were also activated to produce dedicated data streams for detector calibration. Table 3 lists the main calibration streams. These contain partial event data, in most cases data fragments from one sub-detector, in contrast to the physics streams, which contain information from the whole detector.

Physics trigger menu
The end of July 2010 marked a change in emphasis from commissioning to physics. A physics trigger menu was deployed for the first time, designed for luminosities from 10^30 cm^−2 s^−1 to 10^32 cm^−2 s^−1. The physics trigger menu continued to evolve during 2010 to adapt to the LHC conditions. In its final form, it consisted of more than 470 triggers, the majority of which were primary and supporting physics triggers.
In the physics menu, L1 commissioning items were removed, allowing for the addition of higher threshold physics triggers in preparation for increased luminosity. At the same time, combined triggers based on a logical "and" between two L1 items were introduced into the menu. Streaming based on the HLT decision was introduced and the corresponding L1-based streaming was disabled. In addition to calibration and express streams, data were recorded in the physics streams presented in Sect. 2. At the same time, preliminary bandwidth allocations were defined as guidelines for all trigger groups, as listed in Table 4.
The maximum instantaneous luminosity per day is shown in Fig. 6(a). As luminosity increased and the trigger rates approached the limits imposed by offline processing, primary and supporting triggers continued to evolve by progressively tightening the HLT selection cuts and by prescaling the lower E_T threshold triggers. Table 5 shows the lowest unprescaled threshold of various trigger signatures for three luminosity values.
In order to prepare for higher luminosities, tools to optimize prescale factors became very important. For example, the rate prediction tool uses enhanced bias data (data recorded with a very loose L1 trigger selection and no HLT selection) as input. Initially, these data were collected in dedicated enhanced bias runs using the lowest trigger thresholds, which were unprescaled at L1, and no HLT selection. Subsequently, enhanced bias triggers were added to the physics menu to collect the data sample during normal physics data-taking. Figure 7 shows a comparison between online rates at 10^32 cm^−2 s^−1 and predictions based on extrapolation from enhanced bias data collected at lower luminosity. In general online rates agreed with predictions within 10%. The biggest discrepancy was seen in rates from the JetTauEtmiss stream, as a result of the non-linear scaling of E_T^miss and ΣE_T trigger rates with luminosity, as shown later in Fig. 13. This non-linearity is due to in-time pile-up, defined as the effect of multiple pp interactions in a bunch crossing. The maximum mean number of interactions per bunch crossing, which reached 3.5 in 2010, is shown as a function of day in Fig. 6(b). In-time pile-up had the most significant effects on the E_T^miss, ΣE_T (Sect. 6.6), and minimum bias (Sect. 6.1) signatures. Out-of-time pile-up is defined as the effect of an earlier bunch crossing on the detector signals for the current bunch crossing. Out-of-time pile-up did not have a significant effect in the 2010 pp data-taking because the bunch spacing was 150 ns or larger.

Fig. 6: Profiles with respect to time of (a) the maximum instantaneous luminosity per day and (b) the peak mean number of interactions per bunch crossing (assuming a total inelastic cross section of 71.5 mb) recorded by ATLAS during stable beams in √s = 7 TeV pp collisions. Both plots use the online luminosity measurement.
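The non-linear luminosity scaling attributed to in-time pile-up can be illustrated with a toy rate model; the coefficients below are invented for illustration and are not measured ATLAS values:

```python
def predicted_rate(lumi: float, a: float, b: float) -> float:
    """Toy trigger-rate model: a linear single-interaction term plus a
    quadratic term from two interactions piling up in one bunch crossing."""
    return a * lumi + b * lumi ** 2

# Once the quadratic (pile-up) term matters, doubling the luminosity more
# than doubles the rate, which is why a purely linear extrapolation of
# enhanced bias data underestimates E_T^miss-based rates.
linear_only = predicted_rate(2.0, 1.0, 0.0)
with_pileup = predicted_rate(2.0, 1.0, 0.5)
```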

Level 1
The Level 1 (L1) trigger decision is formed by the Central Trigger Processor (CTP) based on information from the calorimeter trigger towers and dedicated triggering layers in the muon system. An overview of the CTP, L1 calorimeter, and L1 muon systems and their performance follows. The CTP also takes input from the MBTS, LUCID and ZDC systems, described in Sect. 6.1.

Fig. 7: Comparison of online rates (solid) with offline rate predictions (hashed) at luminosity 10^32 cm^−2 s^−1 for L1, L2, EF and main physics streams.

Central trigger processor
The CTP [1,8] forms the L1 trigger decision by applying the multiplicity requirements and prescale factors specified in the trigger menu to the inputs from the L1 trigger systems. The CTP also provides random triggers and can apply specific LHC bunch crossing requirements. The L1 trigger decision is distributed, together with timing and control signals, to all ATLAS sub-detector readout systems.
The timing signals are defined with respect to the LHC bunch crossings. A bunch crossing is defined as a 25 ns time-window centred on the instant at which a proton bunch may traverse the ATLAS interaction point. Not all bunch crossings contain protons; those that do are called filled bunches. In 2010, the minimum spacing between filled bunches was 150 ns. In the nominal LHC configuration, there are a maximum of 3564 bunch crossings per LHC revolution. Each bunch crossing is given a bunch crossing identifier (BCID) from 0 to 3563. A bunch group consists of a numbered list of BCIDs during which the CTP generates an internal trigger signal. The bunch groups are used to apply specific requirements to triggers such as paired (colliding) bunches for physics triggers, single (one-beam) bunches for background triggers, and empty bunches for cosmic ray, noise and pedestal triggers.
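The bunch-group mechanism described above amounts to a lookup from BCID to a set of allowed crossings. A minimal sketch with invented group contents (the BCID lists are not a real LHC fill pattern):

```python
N_BCID = 3564  # bunch crossings per LHC revolution, BCIDs 0..3563

# Illustrative bunch groups; real groups are configured per LHC fill.
BUNCH_GROUPS = {
    "paired": {0, 150, 300},  # colliding bunches -> physics triggers
    "empty": {1000, 2000},    # no protons -> cosmic ray, noise, pedestal triggers
}

def ctp_signal(group: str, bcid: int) -> bool:
    """Would the CTP generate an internal trigger signal for this group at this BCID?"""
    if not 0 <= bcid < N_BCID:
        raise ValueError("BCIDs run from 0 to 3563")
    return bcid in BUNCH_GROUPS[group]
```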

Dead-time
Following an L1 accept the CTP introduces dead-time, by vetoing subsequent triggers, to protect front-end readout buffers from overflowing. This preventive dead-time mechanism limits the minimum time between two consecutive L1 accepts (simple dead-time), and restricts the number of L1 accepts allowed in a given period (complex dead-time). In 2010 running, the simple dead-time was set to 125 ns and the complex dead-time to 8 triggers in 80 µs. This preventive dead-time is in addition to busy dead-time, which can be introduced by ATLAS sub-detectors to temporarily throttle the trigger rate.
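The two vetoes can be modelled together. The sketch below approximates the complex dead-time with a sliding window over recent accepts; this is a simplified model of the behaviour described above, not the CTP hardware logic:

```python
from collections import deque

class PreventiveDeadTime:
    """Model of the 2010 settings: at least 125 ns between consecutive L1
    accepts (simple dead-time) and at most 8 accepts in any 80 µs period
    (complex dead-time, approximated here by a sliding window)."""

    def __init__(self, simple_ns=125.0, max_accepts=8, window_ns=80_000.0):
        self.simple_ns = simple_ns
        self.max_accepts = max_accepts
        self.window_ns = window_ns
        self.accepts = deque()  # timestamps (ns) of recent L1 accepts

    def try_accept(self, t_ns: float) -> bool:
        # drop accepts that have left the complex dead-time window
        while self.accepts and t_ns - self.accepts[0] >= self.window_ns:
            self.accepts.popleft()
        if self.accepts and t_ns - self.accepts[-1] < self.simple_ns:
            return False  # vetoed: too close to the previous accept
        if len(self.accepts) >= self.max_accepts:
            return False  # vetoed: too many accepts in the window
        self.accepts.append(t_ns)
        return True
```

A burst of triggers every 150 ns clears the simple dead-time but saturates the complex dead-time after eight accepts, mimicking the per-train behaviour seen in the 50 ns test described below.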
The CTP monitors the total L1 trigger rate and the rates of individual L1 triggers. These rates are monitored before and after prescales and after dead-time related vetoes have been applied. One use of this information is to provide a measure of the L1 dead-time, which needs to be accounted for when determining the luminosity. The L1 dead-time correction is determined from the live fraction, defined as the ratio of trigger rates after CTP vetoes to the corresponding trigger rates before vetoes. Figure 8 shows the live fraction based on the L1_MBTS_2 trigger (Sect. 6.1), the primary trigger used for these corrections in 2010. The bulk of the data were recorded with live fractions in excess of 98%. As a result of the relatively low L1 trigger rates and a bunch spacing that was relatively large (≥150 ns) compared to the nominal LHC spacing (25 ns), the preventive dead-time was typically below 10^−4 and no bunch-to-bunch variations in dead-time existed.
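The live fraction used for the luminosity correction is the ratio just defined; a one-line sketch (function name ours, rates invented for illustration):

```python
def live_fraction(rate_after_veto_hz: float, rate_before_veto_hz: float) -> float:
    """Live fraction: reference-trigger rate after CTP vetoes over the rate
    before vetoes. The dead-time correction to luminosity is 1 minus this."""
    return rate_after_veto_hz / rate_before_veto_hz

# A reference trigger counting 980 Hz after vetoes against 1000 Hz before
# corresponds to a 98% live fraction, i.e. a 2% dead-time correction.
lf = live_fraction(980.0, 1000.0)
```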
Towards the end of the 2010 data-taking a test was performed with a fill of bunch trains with 50 ns spacing, the running mode expected for the bulk of 2011 data-taking. The dead-time measured during this test is shown as a function of BCID in Fig. 9, taking a single bunch train as an example. The first bunch of the train (BCID 945) is only subject to sub-detector dead-time of ∼0.1%, while the following bunches in the train (BCIDs 947 to 967) are subject to up to 4% dead-time as a result of the preventive dead-time generated by the CTP. The variation in dead-time between bunch crossings will be taken into account when calculating the dead-time corrections to luminosity in 2011 running. Figure 10 shows the trigger rate for the whole data-taking period of 2010, compared to the luminosity evolution of the LHC. The individual rate points are the average L1 trigger rates in ATLAS runs with stable beams, and the luminosity points correspond to peak values for the run. The increasing selectivity of the trigger during the course of 2010 is illustrated by the fact that the L1 trigger rate increased by one order of magnitude, whereas the peak instantaneous luminosity increased by five orders of magnitude. The L1 trigger system was operated at a maximum trigger rate of just above 30 kHz, leaving more than a factor of two margin to the design rate of 75 kHz.

Rates and timing
The excellent level of synchronization of L1 trigger signals in time is shown in Fig. 11 for a selection of L1 triggers. The plot represents a snapshot taken at the end of the 2010 proton-proton data-taking period.

L1 calorimeter trigger
The L1 calorimeter trigger [9] is based on inputs from the electromagnetic and hadronic calorimeters covering the region |η| < 4.9. It provides triggers for localized objects (e.g. electron/photon, tau and jet) and global transverse energy triggers. The pipelined processing and logic is performed in a series of custom-built hardware modules with a latency of less than 1 µs. The architecture, calibration and performance of this hardware trigger are described in the following subsections.

L1 calorimeter trigger architecture
The L1 calorimeter trigger decision is based on dedicated analogue trigger signals provided by the ATLAS calorimeters independently from the signals read out and used at the HLT and offline. Rather than using the full granularity of the calorimeter, the L1 decision is based on the information from analogue sums of calorimeter elements within projective regions, called trigger towers. The trigger towers have a size of approximately η × φ = 0.1 × 0.1 in the central part of the calorimeter, |η| < 2.5, and are larger in the more forward regions. The 7168 analogue inputs must first be digitized and then associated with a particular LHC bunch crossing. Much of the tuning of the timing and transverse energy calibration was performed during the 2010 data-taking period, since the final adjustments could only be determined with colliding-beam events. Once digital transverse energies per LHC bunch crossing are formed, two separate processor systems, working in parallel, run the trigger algorithms. One system, the cluster processor, uses the full L1 trigger granularity information in the central region to look for small localized clusters typical of electron, photon or tau particles. The other, the jet and energy-sum processor, uses 2 × 2 sums of trigger towers, called jet elements, to identify jet candidates and form global transverse energy sums: missing transverse energy, total transverse energy and jet-sum transverse energy. The magnitudes of the objects and sums are compared to programmable thresholds to form the trigger decision. The thresholds used in 2010 are shown in Table 1 in Sect. 2.
The details of the algorithms can be found elsewhere [9] and only the basic elements are described here. Figure 12 illustrates the electron/photon and tau triggers as an example. The electron/photon trigger algorithm identifies a Region of Interest (RoI) as a 2 × 2 trigger tower cluster in the electromagnetic calorimeter for which the transverse energy sum from at least one of the four possible pairs of nearest-neighbour towers (1 × 2 or 2 × 1) exceeds a pre-defined threshold. Isolation-veto thresholds can be set for the 12-tower surrounding ring in the electromagnetic calorimeter, as well as for hadronic tower sums in a central 2 × 2 core behind the cluster and the 12-tower hadronic ring around it. Isolation requirements were not applied in 2010 running. Jet RoIs are defined as 4 × 4, 6 × 6 or 8 × 8 trigger tower windows for which the summed electromagnetic and hadronic transverse energy exceeds pre-defined thresholds and which surround a 2 × 2 trigger tower core that is a local maximum. The location of this local maximum also defines the coordinates of the jet RoI.
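The core condition of the electron/photon algorithm can be sketched in a few lines. This is an illustration of the pair-sum logic only (the function name is invented, and the isolation vetoes, not applied in 2010, are omitted):

```python
def em_cluster_passes(towers, threshold):
    """Sketch of the L1 electron/photon core condition: given a 2x2
    cluster of EM trigger-tower E_T values [[a, b], [c, d]] (GeV), the
    RoI fires if any of the four nearest-neighbour pairs (two 1x2
    horizontal, two 2x1 vertical) sums above `threshold`."""
    (a, b), (c, d) = towers
    pair_sums = [a + b, c + d, a + c, b + d]  # the four neighbour pairs
    return max(pair_sums) > threshold
```

For instance, a 2 × 2 cluster with towers of 3 and 4 GeV side by side passes a 5 GeV threshold through their 7 GeV pair sum, even though no single tower exceeds the threshold.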
The real-time output to the CTP consists of more than 100 bits per bunch crossing, comprising the coordinates and threshold bits for each of the RoIs and the counts of the number of objects (saturating at seven) that satisfy each of the electron/photon, tau and jet criteria.

L1 calorimeter trigger commissioning and rates
After commissioning with cosmic ray and collision data, including event-by-event checking of L1 trigger results against offline emulation of the L1 trigger logic, the calorimeter trigger processor ran stably and without any algorithmic errors. Bit-error rates in digital links were less than 1 in 10²⁰. Eight out of 7168 trigger towers were non-operational in 2010 due to failures in inaccessible analogue electronics on the detector. Problems with detector high and low voltage led to an additional ∼1% of trigger towers with low or no response. After calibration adjustments, L1 calorimeter trigger conditions remained essentially unchanged for 99% of the 2010 proton-proton integrated luminosity.
The scaling of the L1 trigger rates with luminosity is shown in Fig. 13 for some of the low-threshold calorimeter trigger items. The rates of localised objects, such as electrons and jets, scale linearly with luminosity. Global quantities such as the missing transverse energy and total transverse energy triggers also scale in a smooth way, but are not linear as they are strongly affected by in-time pile-up which was present in the later running periods.

L1 calorimeter trigger calibration
In order to assign the calorimeter tower signals to the correct bunch crossing, a task performed by the bunch crossing identification logic, the signals must be synchronized to the LHC clock phase with nanosecond precision. The timing synchronization was first established with calorimeter pulser systems and cosmic ray data and then refined using the first beam delivered to the detector in the splash events (Sect. 3). During the earliest data-taking in 2010 the correct bunch crossing was determined for events with transverse energy above about 5 GeV. Timing was incrementally improved, and for the majority of the 2010 data the timing of most towers was better than ±2 ns, providing close to ideal performance.
In order to remove the majority of fake triggers due to small energy deposits, signals are processed by an optimized filter and a noise cut of around 1.2 GeV is applied to the trigger tower energy. The efficiency for an electromagnetic tower energy to be associated to the correct bunch crossing and pass this noise cut is shown in Fig. 14 as a function of the sum of raw cell E T within that tower, for different regions of the electromagnetic calorimeter. The efficiency turn-on is consistent with the optimal performance expected from a simulation of the signals and the full efficiency in the plateau region indicates the successful association of these small energy deposits to the correct bunch crossing.
Special treatment, using additional bunch crossing identification logic, is needed for saturated pulses with E T above  about 250 GeV. It was shown that BCID logic performance was more than adequate for 2010 LHC energies, working for most trigger towers up to transverse energies of 3.5 TeV and beyond. Further tuning of timing and algorithm parameters will ensure that the full LHC energy range is covered.
In order to obtain the most precise transverse energy measurements, a transverse energy calibration must be applied to all trigger towers. The initial transverse energy calibration was produced by calibration pulser runs. In these runs signals of a controlled size are injected into the calorimeters. Subsequently, with sufficient data, the gains were recalibrated by comparing the transverse energies from the trigger with those calculated offline from the full calorimeter information. By the end of the 2010 data-taking this analysis had been extended to provide a more precise calibration on a tower-by-tower basis. In most cases, the transverse energies derived from the updated calibration differed by less than 3% from those obtained from the original pulser-run based calibration. Examples of correlation plots between trigger and offline calorimeter transverse energies can be seen in Fig. 15. In the future, with even larger datasets, the tower-by-tower calibration will be further refined based on physics objects with precisely known energies, for example, electrons from Z boson decays.

L1 muon trigger
The L1 muon trigger system [1,10] is a hardware-based system to process input data from fast muon trigger detectors. The system's main task is to select muon candidates and identify the bunch crossing in which they were produced. The primary performance requirement is to be efficient for muon p T thresholds above 6 GeV. A brief overview of the L1 muon trigger is given here; the performance of the muon trigger is presented in Sect. 6.3.

L1 muon trigger architecture
Muons are triggered at L1 using the RPC system in the barrel region (|η| < 1.05) and the TGC system in the end-cap regions (1.05 < |η| < 2.4), as shown in Fig. 16. The RPC and TGC systems provide rough measurements of muon candidate p T , η, and φ. The trigger chambers are arranged in three planes in the barrel and three in each end-cap (TGC I shown in Fig. 16 did not participate in the 2010 trigger). Each plane is composed of two to four layers. Muon candidates are identified by forming coincidences between the muon planes. The geometrical coverage of the trigger in the end-caps is ≈99%. In the barrel the coverage is reduced to ≈80% due to a crack around η = 0, the feet and rib support structures for the ATLAS detector and two small elevators in the bottom part of the spectrometer.
The L1 muon trigger logic is implemented in similar ways for both the RPC and TGC systems, but with the following differences:
− The planes of the RPC system each consist of a doublet of independent detector layers, each read out in the η (z) and φ coordinates. A low-p T trigger is generated by requiring a coincidence of hits in at least 3 of the 4 layers of the inner two planes, labelled as RPC1 and RPC2 in Fig. 16. The high-p T logic starts from a low-p T trigger, then looks for hits in one of the two layers of the high-p T confirmation plane (RPC3).
− The two outermost planes of the TGC system (TGC2 and TGC3) each consist of a doublet of independent detectors read out by strips to measure the φ coordinate and wires to measure the η coordinate. A low-p T trigger is generated by a coincidence of hits in at least 3 of the 4 layers of the outer two planes. The inner plane (TGC1) contains 3 detector layers; the wires are read out from all three, but the strips from only 2 of the layers. The high-p T trigger requires at least one of the two φ-strip layers and 2 out of 3 wire layers from the innermost plane in coincidence with the low-p T trigger.
In both the RPC and TGC systems, coincidences are generated separately for η and φ and can then be combined with programmable logic to form the final trigger result. The configuration for the 2010 data-taking period required a logical AND between the η and φ coincidences in order to have a muon trigger. In order to form coincidences, hits are required to lie within parametrized geometrical muon roads. A road represents an envelope containing the trajectories, from the nominal interaction point, of muons of either charge with a p T above a given threshold. Example roads are shown in Fig. 16. There are six programmable p T thresholds at L1 (see Table 1), divided into two sets: three low-p T thresholds to cover values up to 10 GeV, and three high-p T thresholds to cover p T greater than 10 GeV.
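The majority logic and the 2010 η–φ AND can be sketched as follows. This is an illustration only (the function names are invented); hits are assumed to have already passed the geometrical-road requirement:

```python
def low_pt_coincidence(layer_hits):
    """Sketch of the low-p_T majority logic: require hits in at least
    3 of the 4 detector layers of the relevant pair of planes
    (RPC1+RPC2 in the barrel, TGC2+TGC3 in the end-caps).
    `layer_hits` is a list of 4 booleans, one per layer."""
    return sum(layer_hits) >= 3

def muon_trigger_2010(eta_layer_hits, phi_layer_hits):
    """2010 configuration: a muon trigger requires a logical AND of the
    coincidences formed separately in the eta and phi projections."""
    return low_pt_coincidence(eta_layer_hits) and low_pt_coincidence(phi_layer_hits)
```

So a candidate with 4-of-4 hits in η but only 2-of-4 in φ does not fire the trigger, while 3-of-4 in both projections does.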
To enable the commissioning and validation of the performance of the system for 2010 running, two triggers were defined which did not require coincidences within roads and thus gave maximum acceptance and minimum trigger bias: one (MU0) was based on the low-p T logic and the other (MU0_COMM) on the high-p T logic. For these triggers the only requirement was that hits were in the same trigger tower (η × φ ∼ 0.1 × 0.1).

L1 muon trigger timing calibration
In order to assign the hit information to the correct bunch crossing, a precise alignment of RPC and TGC signals, or timing calibration, was performed to take into account signal delays in all components of the read out and trigger chain. Test pulses were used to calibrate the TGC timing to within 25 ns (one bunch crossing) before the start of data-taking. Tracks from cosmic ray and collision data were used to calibrate the timing of the RPC system. This calibration required a sizable data sample to be collected before a time alignment of better than 25 ns was reached. As described in Sect. 4.1, the CTP imposes a 25 ns window about the nominal bunch crossing time during which signals must arrive in order to contribute to the trigger decision. In the first phase of the data-taking, while the timing calibration of the RPC system was on-going, a special CTP configuration was used to increase the window for muon triggers to 75 ns. The majority of 2010 data were collected with both systems aligned to within one bunch crossing for both high-p T and low-p T triggers. In Fig. 17 the timing alignment of the RPC and TGC systems is shown with respect to the LHC bunch clock in units of the 25 ns bunch crossings (BC).

High level trigger reconstruction
The HLT has additional information available, compared to L1, including inner detector hits, full information from the calorimeter and data from the precision muon detectors. The HLT trigger selection is based on features reconstructed in these systems. The reconstruction is performed, for the most part, inside RoIs in order to minimize execution times and reduce data requests across the network at L2. The sections below give a brief description of the algorithms for inner detector tracking, beamspot measurement, calorimeter clustering and muon reconstruction. The performance of the algorithms is presented, including measurements of execution times which meet the timing constraints outlined in Sect. 2.

Inner detector tracking
The track reconstruction in the Inner Detector is an essential component of the trigger decision in the HLT. A robust and efficient reconstruction of particle trajectories is a prerequisite for triggering on electrons, muons, B-physics, taus, and b-jets. It is also used for triggering on inclusive pp interactions and for the online determination of the beamspot (Sect. 5.2), where the reconstructed tracks provide the input to reconstruction of vertices. This section gives a short description of the reconstruction algorithms and an overview of the performance of the track reconstruction with a focus on tracking efficiencies in the ATLAS trigger system.

Inner detector tracking algorithms
The L2 reconstruction algorithms are specifically designed to meet the strict timing requirements for event processing at L2. The track reconstruction at the EF is less time constrained and can use, to a large extent, software components from the offline reconstruction. In both L2 and EF the track finding is preceded by a data preparation step in which detector data are decoded and transformed to a set of hit positions in the ATLAS coordinate system. Clusters are first formed from adjacent signals on the SCT strips or in the Pixel detector. Two-dimensional Pixel clusters and pairs of one-dimensional SCT clusters (from back-to-back detectors rotated by a small stereo angle with respect to one another) are combined with geometrical information to provide three-dimensional hit information, called space-points. Clusters and space-points provide the input to the HLT pattern recognition algorithms.
The primary track reconstruction strategy is inside-out tracking which starts with pattern recognition in the SCT and Pixel detectors; track candidates are then extended to the TRT volume. In addition, the L2 has an algorithm that reconstructs tracks in the TRT only and the EF has an additional track reconstruction strategy that is outside-in, starting from the TRT and extending the tracks to the SCT and Pixel detectors.
Track reconstruction at both L2 and EF is run in an RoIbased mode for electron, muon, tau and b-jet signatures. B-physics signatures are based either on a FullScan (FS) mode (using the entire volume of the Inner Detector) or a large RoI. The tracking algorithms can be configured differently for each signature in order to provide the best performance.
L2 uses two different pattern recognition strategies:
− A three-step histogramming technique, called IdScan. First, the z-position of the primary vertex, z v , is determined as follows. The RoI is divided into φ-slices, and z-intercept values are calculated and histogrammed for lines through all possible pairs of space-points in each φ-slice; z v is determined from peaks in this histogram. The second step is to fill an (η, φ) histogram with values calculated with respect to z v for each space-point in the RoI; groups of space-points to be passed on to the third step are identified from histogram bins containing at least four space-points from different detector layers. In the third step, a (1/p T , φ) histogram is filled from values calculated for all possible triplets of space-points from different detector layers; track candidates are formed from bins containing at least four space-points from different layers. IdScan is the approach used for the electron, muon and B-physics triggers due to its slightly higher efficiency relative to SiTrack.
− A combinatorial technique, called SiTrack. First, pairs of hits consistent with a beamline constraint are found within a subset of the inner detector layers. Next, triplets are formed by associating additional hits in the remaining detector layers consistent with a track from the beamline. In the final step, triplets consistent with the same track trajectory are merged, duplicate or outlying hits are removed and the remaining hits are passed to the track fitter. SiTrack is the approach used for the tau and jet triggers as well as the beamspot measurement, as it has a slightly lower fake-track fraction.
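The first IdScan step can be sketched as follows. This is a simplified illustration under stated assumptions (invented function name, one φ-slice, a plain dictionary histogram), not the L2 implementation:

```python
from itertools import combinations

def zvertex_from_spacepoints(points, bin_mm=1.0):
    """Sketch of the first IdScan step: extrapolate the line through
    every pair of space-points (r, z) in a phi-slice to r = 0,
    histogram the z-intercepts, and take the most populated bin as
    the vertex position z_v. `points` is a list of (r, z) in mm."""
    hist = {}
    for (r1, z1), (r2, z2) in combinations(points, 2):
        if r1 == r2:
            continue                              # no unique intercept
        z0 = z1 - r1 * (z2 - z1) / (r2 - r1)      # z at r = 0
        b = int(z0 // bin_mm)
        hist[b] = hist.get(b, 0) + 1
    peak = max(hist, key=hist.get)
    return (peak + 0.5) * bin_mm                  # bin centre
```

Pairs of space-points from the same track all point back to the true vertex and pile up in one bin, while pairs mixing hits from different tracks scatter over many bins, so the peak is robust.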
In both cases, track candidates are further processed by a common Kalman [11] filter track fitter and extended to the TRT for an improved p T resolution and to benefit from the electron identification capability of the TRT. The EF track reconstruction is based on software shared with the offline reconstruction [12]. The offline software was extended to run in the trigger environment by adding support for reconstruction in an RoI-based mode. The pattern recognition in the EF starts from seeds built from triplets of space-points in the Pixel and SCT detectors. Triplets consist of space-points from different layers, all in the pixel detector, all in the SCT or two space-points in the pixel detector and one in the SCT. Seeds are preselected by imposing a minimum requirement on the momentum and a maximum requirement on the impact parameters. The seeds define a road in which a track candidate can be formed by adding additional clusters using a combinatorial Kalman filter technique. In a subsequent step, the quality of the track candidates is evaluated and low quality candidates are rejected. The tracks are then extended into the TRT and a final fit is performed to extract the track parameters.

Inner detector tracking algorithms performance
The efficiency of the tracking algorithms is studied using specific monitoring triggers, which do not require a track to be present for the event to be accepted, and are thus unbiased for track efficiency measurements. The efficiency is defined as the fraction of offline reference tracks that are matched to a trigger track (with matching requirement ΔR = √(Δφ² + Δη²) < 0.1). Offline reference tracks are required to have |η| < 2.5, |d 0 | < 1.5 mm, |z 0 | < 200 mm and |(z 0 − z V ) sin θ | < 1.5 mm, where d 0 and z 0 are the transverse and longitudinal impact parameters, and z V is the position of the primary vertex along the beamline as reconstructed offline. The reference tracks are also required to have one Pixel hit and at least six SCT clusters. For tau and jet RoIs, the reference tracks are additionally required to have χ 2 probability of the track fit higher than 1%, two Pixel hits, one in the innermost layer, and a total of at least seven SCT clusters.
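The matching and efficiency definition above amounts to a small computation. This is a sketch with invented function names; tracks are reduced to (φ, η) pairs and the reference-quality cuts are assumed to have been applied already:

```python
from math import hypot, pi

def delta_r(phi1, eta1, phi2, eta2):
    """Delta R between two tracks, with the phi difference wrapped
    into (-pi, pi]."""
    dphi = (phi1 - phi2 + pi) % (2 * pi) - pi
    return hypot(dphi, eta1 - eta2)

def matching_efficiency(offline_tracks, trigger_tracks, max_dr=0.1):
    """Fraction of offline reference tracks matched to at least one
    trigger track within Delta R < max_dr. Tracks are (phi, eta)."""
    matched = sum(
        1 for ref in offline_tracks
        if any(delta_r(*ref, *trg) < max_dr for trg in trigger_tracks)
    )
    return matched / len(offline_tracks)
```

The explicit φ wrapping matters: two tracks at φ = 3.1 and φ = −3.1 are only ≈0.08 apart in φ, not 6.2, and must count as matched.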
The L2 and EF tracking efficiencies are shown as a function of p T for offline muon candidates in Fig. 18(a) and for offline electron candidates in Fig. 18(b). Tracking efficiencies in tau and jet RoIs are shown in Fig. 19, determined with respect to all offline reference tracks lying within the RoI. In all cases, the efficiency is close to 100% in the p T range important for triggering. Residuals of the trigger track parameters with respect to offline are shown in Fig. 20. Both L2 and EF show good agreement with offline, although the residuals between L2 and offline are larger, particularly at high |η|, as a consequence of the speed-optimizations made at L2. Figure 21 shows the residuals in d 0 , φ and η. Since it uses offline software, EF tracking performance is close to that of the offline reconstruction. Performance is not identical, however, due to an online-specific configuration of the offline software, designed to increase speed and be more robust to compensate for the more limited calibration and detector status information available in the online environment.

Inner detector tracking algorithms timing
Distributions of the algorithm execution time at L2 and EF are shown in Fig. 22. The total time for L2 reconstruction is shown in Fig. 22(a) for a muon algorithm in RoI and FullScan mode. The times of the different reconstruction steps at the EF are shown in Fig. 22(b) for muon RoIs and in Fig. 22(c) for FullScan mode. The execution times are shown for all instances of the algorithm execution, whether the trigger was passed or not. The execution times are well within the online constraints.

Beamspot
The online beamspot measurement uses L2 ID tracks from the SiTrack algorithm (Sect. 5.1) to reconstruct primary vertices on an event-by-event basis [13]. The vertex position distributions collected over short time intervals are used to measure the position and shape of the luminous region (beamspot), parametrized by a three-dimensional Gaussian. The coordinates of the centroids of reconstructed vertices determine the average position of the collision point in the ATLAS coordinate system as well as the size and orientation of the ellipsoid representing the luminous region in the horizontal (x-z) and vertical (y-z) planes.
These observables are continuously reconstructed and monitored online in the HLT, and communicated, for each luminosity block, to displays in the control room. In addition, the instantaneous rate of reconstructed vertices can be used online as a luminosity monitor. Following these online measurements, a system for applying real-time configuration changes to the HLT farm distributes updates for use by trigger algorithms that depend on the precise knowledge of the luminous region, such as b-jet tagging (Sect. 6.7). Figure 23 shows the variation of the collision point centroid around the nominal beam position in the transverse plane (y nominal ) over a period of a few weeks. The nominal beam position, which is typically up to several hundred microns from the centre of the ATLAS coordinate system, is defined by a time average of previously measured centroid positions. The figure shows that updates distributed to the online system as a part of the feedback mechanism track the measured beam position within a narrow band of only a few microns. The large deviations on Oct 4 and Sept 22 are from beam-separation scans.
During 2010 data-taking, beamspot measurements were averaged over the entire period of stable beam during a run and updates applied, for subsequent runs, in the case of significant shifts. For 2011 running, when triggers that are sensitive to the beamspot position, such as the b-jet trigger (Sect. 6.7), are activated, updates will be made more frequently.

Beamspot algorithm
The online beamspot algorithm employs a fast vertex fitter able to efficiently fit the L2 tracks emerging from the interaction region to common vertices within a fraction of the L2 time budget. The tracks used for the vertex fits are required to have at least one Pixel space-point and three SCT space-points and a transverse impact parameter with respect to the nominal beamline of |d 0 | < 1 cm. Clusters of tracks with similar impact parameter (z 0 ) along the nominal beamline form the input to the vertex fits. The tracks are ordered in p T and the highest-p T track above 0.7 GeV is taken as a seed. The seed track is grouped with all other tracks with p T > 0.5 GeV lying within 1 cm in z 0 of the seed. The average z 0 value of the tracks in the group provides the initial estimate of the vertex position in the longitudinal direction, used as a starting point for the vertex fitter. In order to find additional vertices in the event, the process is repeated taking the next highest p T track above 0.7 GeV as the seed.
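The seeding and grouping step can be sketched as follows. This is an illustration with invented names, not the HLT code; tracks are reduced to (p T , z 0 ) pairs assumed to have already passed the impact-parameter and hit requirements:

```python
def cluster_tracks_for_vertexing(tracks, seed_pt=0.7, track_pt=0.5, dz=10.0):
    """Sketch of the track grouping that feeds the L2 vertex fit.
    `tracks` is a list of (pt_gev, z0_mm) pairs. The highest-pT track
    above seed_pt seeds a group containing every track with
    pT > track_pt and |z0 - z0_seed| < dz (10 mm); the procedure then
    repeats on the remaining tracks until no seed is left."""
    remaining = sorted(tracks, key=lambda t: -t[0])  # descending pT
    groups = []
    while remaining and remaining[0][0] > seed_pt:
        seed = remaining[0]
        group = [t for t in remaining
                 if t[0] > track_pt and abs(t[1] - seed[1]) < dz]
        groups.append(group)
        remaining = [t for t in remaining if t not in group]
    return groups
```

Each returned group would then be handed to the vertex fitter, with the mean z 0 of the group as the starting estimate of the vertex position.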

Beamspot algorithm performance
Using the event-by-event vertex distribution computed in real-time by the HLT and accumulated in intervals of typically two minutes, the position, size and tilt angles of the luminous region within the ATLAS coordinate system are measured. A view of the transverse distribution of vertices reconstructed by the HLT is shown in Fig. 24 along with the transverse (x and y) and longitudinal (z) profiles.
The measurement of the true size of the beam relies on an unfolding of the intrinsic resolution of the vertex position measurement. A correction for the intrinsic resolution is determined, in real time, by measuring the distance between two daughter vertices constructed from a primary vertex when its tracks are split into two random sets for re-fitting.
This correction method has the benefit that it allows the determination of the beam width to be relatively independent of variations in detector resolution, by explicitly taking the variation into account.

Figure 25 shows the measured beam width, in x, as a function of the number of tracks per vertex: the raw measured width, the measured intrinsic resolution, and the corrected width after unfolding the intrinsic resolution of the vertex position measurement. The intrinsic resolution is overestimated, and hence the corrected width is underestimated, for vertices with a small number of tracks. The true beam width (50 µm) is, therefore, given by the asymptotic value of the corrected width. For this reason vertices used for the beam width measurement are required to have more than a minimum number of tracks. The value of this cut depends on the beamspot size. Data and MC studies have shown that the intrinsic resolution must be less than about two times the beamspot size for the width to be measured. For the example fill shown in Fig. 25, this requirement corresponds to 10 tracks per vertex. To resolve smaller beam sizes, the multiplicity requirement can be raised accordingly.
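The unfolding amounts to a quadrature subtraction, with the resolution estimated from the split-vertex distances. The sketch below uses invented function names and assumes the two half-vertices carry equal numbers of tracks, in which case the full-vertex resolution is half the RMS of the half-vertex distances:

```python
from math import sqrt

def corrected_beam_width(raw_width, intrinsic_resolution):
    """The raw width of the vertex x-distribution is the true beam width
    convolved with the vertex-position resolution, so the true width
    follows by quadrature subtraction. Returns None when the resolution
    exceeds the raw width (unphysical, i.e. beam too small to resolve)."""
    if intrinsic_resolution >= raw_width:
        return None
    return sqrt(raw_width**2 - intrinsic_resolution**2)

def intrinsic_resolution_from_split(split_distances):
    """Estimate the full-vertex resolution from split vertices: each
    vertex's tracks are divided into two random halves and re-fitted,
    and the spread of the distance between the two half-vertices
    measures the resolution. For equal halves, the difference of two
    independent half-vertices has twice the full-vertex resolution,
    so return half the RMS of the distances."""
    n = len(split_distances)
    mean = sum(split_distances) / n
    rms = sqrt(sum((d - mean)**2 for d in split_distances) / n)
    return rms / 2.0
```

This makes the condition quoted above concrete: once the intrinsic resolution approaches the raw width, the subtraction becomes unstable, which is why a minimum track multiplicity per vertex is required.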

Calorimeter
The calorimeter reconstruction algorithms are designed to reconstruct clusters of energy from electrons, photons, taus and jet objects using calorimeter cell information. At the EF, global E miss T is also calculated. Calorimeter information also provides input to the muon isolation algorithms.
At L2, custom algorithms are used to confirm the results of the L1 trigger and provide cluster information as input to the signature-specific selection algorithms. The detailed calorimeter cell information available at the HLT allows the position and transverse energy of clusters to be calculated with higher precision than at L1. In addition, shower shape variables useful for particle identification are calculated. At the EF, offline algorithms with custom interfaces for online running are used to reproduce offline clustering performance as closely as possible, using similar calibration procedures. More details on the HLT and offline clustering algorithms can be found in Refs. [10,14].

Calorimeter algorithms
While the clustering tools used in the trigger are customized for the different signatures, they take their input from a common data preparation software layer. This layer, which is common to L2 and the EF, requests data using the general trigger framework tools and drives sub-detector specific code to convert the digital information into the input objects (calorimeter cells with energy and geometry) used by the algorithms. This code is optimized to guarantee fast unpacking of detector data. The data is organized so as to allow efficient access by the algorithms. At the EF the calorimeter cell information is arranged using projective regions called towers, of size η × φ = 0.025 × 0.025 for EM clustering and η × φ = 0.1 × 0.1 for jet algorithms.
The L2 electron and photon (e/γ) algorithm performs clustering within an RoI of dimension η × φ = 0.4 × 0.4. The algorithm relies on the fact that most of the energy from an electron or photon is deposited in the second layer of the electromagnetic (EM) calorimeter. The cell with the most energy in this layer provides the seed to the clustering process. This cell defines the centre of a η × φ = 0.075 × 0.125 window within this layer. The cluster position is calculated by taking an energy-weighted average of cell positions within this window and the cluster transverse energy is calculated by summing the cell transverse energies within equivalent windows in all layers. Subsequently, a correction for the upstream energy loss and for lateral and longitudinal leakage is applied.
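The seeding and energy-weighted averaging can be sketched as follows. This is a simplified single-layer illustration with invented names: the multi-layer E T sum and the leakage corrections described above are omitted:

```python
def l2_cluster(cells, window_eta=0.075, window_phi=0.125):
    """Sketch of the L2 e/gamma clustering. `cells` is a list of
    (eta, phi, et) for second-layer EM cells in the RoI. The most
    energetic cell seeds a window_eta x window_phi window centred on
    it, and the cluster position is the E_T-weighted average of the
    cell positions inside that window."""
    seed = max(cells, key=lambda c: c[2])
    inside = [c for c in cells
              if abs(c[0] - seed[0]) <= window_eta / 2
              and abs(c[1] - seed[1]) <= window_phi / 2]
    et_sum = sum(c[2] for c in inside)
    eta = sum(c[0] * c[2] for c in inside) / et_sum
    phi = sum(c[1] * c[2] for c in inside) / et_sum
    return eta, phi, et_sum
```

The energy weighting means the returned position sits between cells, with finer granularity than the cell size itself.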
At the EF a clustering algorithm similar to the offline algorithm is used. Cluster finding is performed using a sliding window algorithm acting on the towers formed in the data preparation step. Fixed window clusters in regions of η × φ = 0.075 × 0.175 (0.125 × 0.125) are built in the barrel (end-caps). The cluster transverse energy and position are calculated in the same way as at L2. Distributions of E T residuals, defined as the fractional difference between online and offline E T values, are shown in Fig. 26 for L2 and EF. The broader L2 distribution is a consequence of the specialized fast algorithm used at L2.
The L2 tau clustering algorithm searches for a seed in all EM and hadronic calorimeter layers and within an RoI of η × φ = 0.6 × 0.6. At the EF the calorimeter cells within a η × φ = 0.8 × 0.8 region are used directly as input to a topological clustering algorithm that builds clusters of any shape by adding neighbouring cells that have energy above a given number (0-4) of standard deviations of the noise distribution. The large RoI size is motivated by the cluster size (Fig. 27).
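The growth step of a topological clustering can be sketched on a simple cell grid. This is an illustration of the principle only (invented names, a single uniform noise value, and illustrative 4/0 sigma thresholds), not the offline implementation:

```python
def topo_cluster(cells, noise_sigma, seed_thr=4.0, grow_thr=0.0):
    """Sketch of topological clustering: start from cells above seed_thr
    standard deviations of the noise, then iteratively add neighbouring
    cells above grow_thr sigma, so the cluster can take any shape.
    `cells` maps (ieta, iphi) -> energy; returns the set of clustered
    cell indices grown from all seeds."""
    def neighbours(c):
        i, j = c
        return [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)]

    seeds = [c for c, e in cells.items() if e > seed_thr * noise_sigma]
    cluster, frontier = set(seeds), list(seeds)
    while frontier:
        c = frontier.pop()
        for n in neighbours(c):
            if n in cells and n not in cluster and cells[n] > grow_thr * noise_sigma:
                cluster.add(n)
                frontier.append(n)
    return cluster
```

Cells below the growth threshold, and cells not connected to a seed, are left out, which is what suppresses isolated noise while following an irregularly shaped shower.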
The L2 jet reconstruction uses a cone algorithm iterating over cells in a relatively large RoI ( η × φ = 1.0 × 1.0). Figure 28 shows L2 φ and η residuals with respect to offline, showing reasonable agreement with simulation. The asymmetry, which is reproduced by the simulation, is due to the fact that L2 jet reconstruction, unlike offline, is performed within an RoI whose position is defined with the granularity of the L1 jet element size (Sect. 4.2). The L2 jet E T reconstruction and jet energy scale are discussed further in Sect. 6.4. During 2010, EF jet trigger algorithms ran online in monitoring mode, i.e. without rejection. In 2011, the EF jet selection will be activated based on EF clustering within all layers of the calorimeter using the offline anti-k T jet algorithm [15].
Recalculation of E_T^miss at the HLT requires data from the whole calorimeter, and so was only performed at the EF where data from the whole event is available. Corrections to account for muons were calculated at L2, but these corrections were not applied during 2010 data-taking. Future improvements will allow E_T^miss to be recalculated at L2 based on transverse energy sums calculated in the calorimeter front-end boards. The E_T^miss reconstruction, which uses the common calorimeter data preparation tools, is described in Sect. 6.6.

Calorimeter algorithms timing
Figure 29(a) shows the processing time per RoI for the L2 e/γ, tau and jet clustering algorithms, including data preparation. The processing time increases with the RoI size. The tau algorithm has a longer processing time than the e/γ algorithm due to the larger RoI size as well as the seed search in all layers. The distributions have multiple peaks due to caching of results in the HLT, which leads to shorter times when overlap of RoIs allows cached information to be used. Caching of L2 results occurs in two places: first, at the level of data requests from the readout buffers; second, in the data preparation step, where raw data is unpacked into calorimeter cell information. Most of the L2 time is consumed in requesting data from the detector buffers. Figure 29(b) shows the processing time per RoI for the EF e/γ, tau, jet and E_T^miss clustering algorithms. Since more complex offline algorithms are used at the EF, the processing times are longer and the distributions have more features than for L2. The mean execution times do not show the same dependence on RoI size as at L2, since algorithm differences are more significant than RoI size at the EF. The multiple peaks due to caching of data preparation results are clearly visible. The measured L2 and EF algorithm times are well within the requirements given in Sect. 2.

Fig. 29 Execution times per RoI for calorimeter clustering algorithms at (a) L2 and (b) EF. The mean execution time for each algorithm is given in the legend

Muon tracking
Muons are triggered in the ATLAS experiment within a pseudorapidity range of |η| < 2.4 [1]. In addition to the L1 trigger chambers (RPC and TGC), the HLT makes use of information from the MDT chambers, which provide precision hits in the η coordinate. The CSCs, which form the innermost muon layer in the region 2 < |η| < 2.7, were not used in the HLT during the 2010 data-taking period, but will be used in 2011.

Muon tracking algorithms
The HLT includes L2 muon algorithms that are specifically designed to be fast and EF algorithms that rely on offline muon reconstruction software [10].
At L2, each L1 muon candidate is refined by including the precision data from the MDTs in the RoI defined by the L1 candidate. There are three algorithms used sequentially at L2, each building on the results of the previous step.

L2 MS-only: The MS-only algorithm uses only the muon spectrometer information. The algorithm uses L1 trigger chamber hits to define the trajectory of the L1 muon and opens a narrow road around this to select MDT hits. A track fit is then performed using the MDT drift times and positions and a p T measurement is assigned using look-up tables.

L2 muon combined: This algorithm combines the MS-only tracks with tracks reconstructed in the inner detector (Sect. 5.1) to form a muon candidate with refined track parameter resolution.

L2 isolated muon: The isolated muon algorithm starts from the result of the combined algorithm and incorporates tracking and calorimetric information to find isolated muon candidates. The algorithm sums the p T of inner detector tracks and evaluates the electromagnetic and hadronic energy deposits, as measured by the calorimeters, in cones centred around the muon direction. For the calorimeter, two different concentric cones are defined: an internal cone, chosen to contain the energy deposited by the muon itself; and an external cone, containing energy from detector noise and other particles.
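The cone sums used by the isolated muon algorithm can be sketched as follows; the cone radii and the input tuple formats are illustrative assumptions, not the values or interfaces used online:

```python
import math

def muon_isolation(muon_eta, muon_phi, tracks, calo_cells,
                   track_cone=0.2, inner_cone=0.1, outer_cone=0.4):
    """Toy sketch of the L2 isolated-muon idea: sum track pT in a cone and
    calorimeter ET in the annulus between an internal and external cone."""

    def dr(eta, phi):
        dphi = abs(phi - muon_phi)
        dphi = min(dphi, 2 * math.pi - dphi)  # wrap the phi difference
        return math.hypot(eta - muon_eta, dphi)

    # Scalar sum of inner-detector track pT around the muon direction.
    track_sum = sum(pt for eta, phi, pt in tracks if dr(eta, phi) < track_cone)
    # Calorimeter ET in the external cone, excluding the internal cone,
    # which is chosen to contain the muon's own energy deposit.
    calo_sum = sum(et for eta, phi, et in calo_cells
                   if inner_cone < dr(eta, phi) < outer_cone)
    return track_sum, calo_sum
```

Low values of both sums indicate an isolated muon candidate.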
At the EF, the muon reconstruction starts from the RoI identified by L1 and L2, reconstructing segments and tracks using information from the trigger and precision chambers. Three different reconstruction strategies are used in the EF.

Muon tracking performance
Comparisons between online and offline muon track parameters are presented in this section; muon trigger efficiencies are presented in Sect. 6.3. Distributions of the residuals between online and offline track parameters (1/p T , η and φ) were constructed in bins of p T and Gaussian fits were performed to extract the widths, σ, of the residual distributions as a function of p T . The inverse-p T residual widths are shown in Fig. 30 as a function of p T ; the η and φ residual widths are shown in Fig. 31(a) and Fig. 31(b) respectively. These figures show the residual widths for L2 and EF combined reconstruction and illustrate the good agreement between track parameters calculated online and offline.
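The residual-width extraction can be sketched in a few lines; for brevity the sample standard deviation stands in for the Gaussian fit used in the paper, and the binning is arbitrary:

```python
import numpy as np

def residual_widths(pt, online, offline, pt_bins):
    """Sketch of the residual-width extraction: bin muons in pT and measure
    the spread of (1/pT_online - 1/pT_offline) in each bin. The sample
    standard deviation is an illustrative stand-in for a Gaussian fit."""
    res = 1.0 / np.asarray(online) - 1.0 / np.asarray(offline)
    pt = np.asarray(pt)
    widths = []
    for lo, hi in zip(pt_bins[:-1], pt_bins[1:]):
        sel = (pt >= lo) & (pt < hi)
        # None marks empty bins rather than reporting a meaningless width.
        widths.append(float(np.std(res[sel])) if sel.any() else None)
    return widths
```

For well-measured tracks the 1/p T residual width shrinks with increasing momentum resolution, which is what the binned widths make visible.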

Muon tracking timing
The processing times for the L2 muon reconstruction algorithms are shown in Fig. 32(a) for the MS-only algorithm and for the combined reconstruction chain, which includes the ID track reconstruction time. Figure 32(b) shows the corresponding times for the EF algorithms. The execution times are measured for each invocation of the algorithm, and are well within the time restrictions for both L2 and EF given in Sect. 2.

Trigger signature performance
In this section the different trigger signature selection criteria are described. The principal triggers used in 2010 are listed, and their performance is presented and compared with simulation. Trigger efficiencies were determined using the following methods:

Tag and probe method, where the event contains a pair of related objects reconstructed offline, such as electrons from a Z → ee decay, one that triggered the event and the other that can be used to measure the trigger efficiency;

Orthogonal triggers method, where the event is triggered by a trigger different from and independent of the one for which the efficiency is being determined;

Bootstrap method, where the efficiency of a higher threshold is determined using a lower threshold to trigger the event.
An example of the tag and probe method is the determination of low-p T muon trigger efficiencies using J/ψ → μμ events. In this method, μμ pairs are selected from J/ψ → μμ decays reconstructed offline in events triggered by a single muon trigger. The tag is selected by matching (in ΔR) one of the offline muons with a trigger muon that passed the trigger selection. The other muon in the μμ pair is defined as the probe. The efficiency is then defined as the fraction of probe muons that match (in ΔR) a trigger muon that passes the trigger selection. An efficiency determined in this way must be corrected for background due to fake J/ψ → μμ decays reconstructed offline. The background subtraction uses a variable that discriminates the signal from the background, in this case the invariant mass of the μμ candidates. By fitting this variable with an exponential background shape in the side bands and with a Gaussian signal shape in the J/ψ mass region, the background content in the J/ψ mass region can be determined and subtracted. The subtracted distribution is then used to determine the trigger efficiency. Biases due to, for example, topological correlations are determined using MC simulation.
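A minimal sketch of this tag-and-probe measurement with background subtraction is given below; a flat-background sideband extrapolation replaces the exponential-plus-Gaussian fit described above, and all window boundaries except the J/ψ signal region are illustrative:

```python
import numpy as np

def tnp_efficiency(mass, probe_passed, sig_win=(2.86, 3.34),
                   sidebands=((2.4, 2.7), (3.5, 3.8))):
    """Toy tag-and-probe efficiency with a crude sideband subtraction.
    mass: invariant mass (GeV) of each mu-mu candidate; probe_passed:
    whether the probe matched a trigger muon. Assumes a flat background."""
    mass = np.asarray(mass)
    passed = np.asarray(probe_passed, bool)
    in_sig = (mass > sig_win[0]) & (mass < sig_win[1])
    in_sb = np.zeros_like(in_sig)
    sb_width = 0.0
    for lo, hi in sidebands:
        in_sb |= (mass > lo) & (mass < hi)
        sb_width += hi - lo
    # Scale the sideband yield to the signal-window width (flat background).
    scale = (sig_win[1] - sig_win[0]) / sb_width
    n_all = in_sig.sum() - scale * in_sb.sum()
    n_pass = (in_sig & passed).sum() - scale * (in_sb & passed).sum()
    return n_pass / n_all
```

Subtracting the scaled sideband counts from both numerator and denominator mimics, in a simplified way, using the background-subtracted distribution to determine the efficiency.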

Minimum bias, high multiplicity and luminosity triggers
Triggers were designed for inclusive inelastic event selection with minimal bias, for use in inclusive physics studies as well as luminosity measurements. Events selected by the minimum bias (minbias) trigger are used directly for physics analyses of inelastic pp interactions [16,17] and PbPb interactions [18], as well as indirectly as control samples for other physics analyses. A high multiplicity trigger is also implemented for studies of two-particle correlations in high-multiplicity events.

Reconstruction and selection criteria
The minbias and luminosity triggers are primarily hardware-based L1 triggers, defined using signals from the Minimum Bias Trigger Scintillators (MBTS), a Cherenkov light detector (LUCID), the Zero Degree Calorimeter (ZDC), and the random clock from the CTP. In addition to these L1 triggers, HLT algorithms are defined using inner detector and MBTS information (Sect. 2). In 2010, inelastic pp events were primarily selected with the L1_MBTS_1 trigger requirement, defined as having at least one of the 32 MBTS counters on either side of the detector above threshold. Several supporting MBTS requirements were also defined in case of higher beam-induced backgrounds and for online luminosity measurements. For some of these triggers (e.g. L1_MBTS_1_1) a coincidence was required between the signals from the counters on either side of the detector. In all cases, a coincidence with colliding bunches was required. During the PbPb running the beam backgrounds were found to be significantly higher and selections requiring more MBTS counters above threshold on both sides of the detector were used.
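The single-arm versus coincidence logic of these MBTS items can be expressed as a toy decision function; the colliding-bunch coincidence and the detailed 32-counter geometry are omitted:

```python
def l1_mbts(side_a_counts, side_c_counts, require_coincidence=False):
    """Toy L1 MBTS decision. side_a_counts / side_c_counts are the numbers
    of MBTS counters above threshold on each side of the detector.
    L1_MBTS_1-style: at least one counter anywhere above threshold.
    L1_MBTS_1_1-style: at least one counter on each side (coincidence).
    The required coincidence with colliding bunches is not modelled."""
    if require_coincidence:
        return side_a_counts >= 1 and side_c_counts >= 1
    return side_a_counts + side_c_counts >= 1
```

The coincidence variant suppresses beam-induced backgrounds, which is why stronger two-sided requirements were used during PbPb running.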
The mbSpTrk trigger [19], used for minbias trigger efficiency measurements, selects events using the random clock of the CTP at L1 and inner detector tracker silicon spacepoints (Sect. 5.1) at the HLT.
The LUCID triggers were used to select events for comparison with real-time luminosity measurements. LUCID trigger items required a LUCID signal above threshold on one side, either side, or both sides of the detector (the ±z sides of the ATLAS detector are named "A" and "C"). In all cases a coincidence with colliding proton bunches was required.
The ZDC detector was included in the ATLAS experiment primarily for selection of PbPb interactions with minimal bias. Due to the ejection of neutrons from colliding ions, the ZDC covers most of the inelastic PbPb cross-section, but not the inelastic pp cross-section. Like the LUCID triggers, the ZDC triggers included a one-sided, either-side, and two-sided trigger.
The high multiplicity trigger was based on a L1 total energy trigger and includes requirements on the number of L2 SCT space-points and the number of EF inner detector tracks associated to a single vertex.
The Beam Conditions Monitor (BCM) detectors were used to trigger on events with higher than nominal beam background conditions and were also used to monitor the luminosity.

Menu and rates
The main minbias, high multiplicity and luminosity triggers used in the 2010 run are shown in Table 6. These triggers were prescaled for the majority of the 2010 data-taking to keep the rates around a few Hz.

Minimum bias trigger efficiency
The efficiency of the L1_MBTS_1 trigger was studied in the context of the charged particle multiplicity analysis [17], which used the L1_MBTS_1 trigger to select its dataset. The efficiency of the L1_MBTS_1 trigger was determined using the mbSpTrk trigger as an orthogonal trigger. The efficiency was defined as the fraction of events triggered by mbSpTrk and passing the offline selection of an inelastic pp interaction that also passed the L1_MBTS_1 trigger. This efficiency was determined with respect to offline-selected events containing at least two good tracks with p T > 100 MeV, |η| < 2.5, and transverse impact parameter with respect to the beamspot satisfying d 0 BS < 1.8 mm. Events with more than one interaction were vetoed. Figure 33 shows the L1_MBTS_1 efficiency as a function of the number of selected offline tracks per event, N Track , in the data sample. The inefficiency in the low N Track region is small but visible.
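This orthogonal-trigger efficiency definition reduces to a simple counting measure; the boolean per-event inputs below are an illustrative simplification:

```python
import numpy as np

def orthogonal_efficiency(passed_control, passed_offline, passed_test):
    """Sketch of the orthogonal-trigger method: among events taken by the
    control trigger (e.g. mbSpTrk) that also pass the offline inelastic
    selection, count the fraction that fired the trigger under study
    (e.g. L1_MBTS_1). Inputs are per-event booleans (illustrative)."""
    control = np.asarray(passed_control) & np.asarray(passed_offline)
    test = np.asarray(passed_test)
    return (control & test).sum() / control.sum()
```

The method is unbiased only if the control trigger is independent of the trigger under study, which is exactly the correlation checked as a systematic uncertainty below.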
One source of systematic uncertainty in the measured efficiency is a possible correlation between the control trigger (mbSpTrk) and L1_MBTS_1. The trigger efficiency of L1_MBTS_1 in the MC inelastic sample was calculated with and without the control trigger. The difference was found to be negligible. A second source investigated was the use of impact parameter requirements different from those in the offline selection. The trigger efficiency was studied with various sets of these requirements and the largest difference among these sets in each bin was taken as the systematic uncertainty for that bin. This variation provides a very conservative estimate of the effect of beam-induced background and secondary tracks on the trigger efficiency.

Fig. 33 The L1_MBTS_1 trigger efficiency for inelastic pp collisions at √s = 7 TeV. The shaded areas represent the statistical and systematic uncertainties added in quadrature. The statistical uncertainty is negligible compared to the systematic uncertainty
The efficiency of the ZDC trigger was measured in PbPb collisions using a procedure similar to that used for the initial L1_MBTS_1 efficiency measurement. The efficiency is shown as a function of the number of tracks in the event in Fig. 34.

Electrons and photons
Events with electrons and photons (e/γ) in the final state are important signatures for many ATLAS physics analyses, from SM precision physics, such as top quark or W boson mass measurements, to searches for new physics. Various triggers cover the energy range between a few GeV and several TeV. In the low-E T range (5-15 GeV), the data collected are used for measuring the cross sections and properties of standard candle processes, such as J/ψ → ee, di-photon, low mass Drell-Yan, and Z → τ τ production. The data collected in the higher E T range (>15 GeV) are used to measure the production cross-sections for top quark pairs, direct photons and for the Z → ee and W → eν channels [20][21][22][23], as well as searches for new physics such as Higgs bosons, SUSY and exotic particles as in extra-dimension models [24,25]. Some of these channels, such as J/ψ → ee, Z → ee, W → eν and γ + jet, are valuable benchmarks to extract the calibration and alignment constants, and to establish the detector performance.

Electron and photon reconstruction and selection criteria
Electrons and photons are reconstructed in the trigger system in the region |η| < 2.5. At L1, photons and electrons are selected using calorimeter information with reduced granularity. For each identified electromagnetic object, RoIs are formed containing the η and φ directions and the transverse energy thresholds that have been passed, e.g. EM5, EM10, as specified by the L1 trigger menu (Table 1). Seeded by the position of the L1 cluster, the L2 photon and electron selections employ a fast calorimeter reconstruction algorithm (Sect. 5.3), and in the case of electrons also fast track reconstruction (Sect. 5.1). The EF also performs calorimeter cluster and track reconstruction, but uses the offline reconstruction algorithms [10]. At L2 and the EF a calorimeter-based selection is made, for both electrons and photons, based on cluster E T and cluster shape parameters. Distributions of two important parameters are shown in Fig. 35. The hadronic leakage parameter, R had = E T had /E T EM , is the ratio of the cluster transverse energy in the hadronic calorimeter to that in the electromagnetic calorimeter; the distribution for offline reconstructed electrons is shown in Fig. 35(a) for L2. Figure 35(b) shows the distribution, at the EF, of the parameter E ratio = (E T (1) − E T (2) )/(E T (1) + E T (2) ), where E T (1) and E T (2) are the transverse energies of the two most energetic cells in the first layer of the electromagnetic calorimeter in a region of Δη × Δφ = 0.125 × 0.2. The distribution of this parameter peaks at one for showers with no substructure and so distinguishes clusters due to single electrons and photons from hadrons and π 0 → γ γ decays. Another important parameter, R η , is based on the cluster shape in the second layer of the electromagnetic calorimeter; it is defined as the ratio of transverse energy in a core region of 3 × 7 cells in η × φ to that in a 7 × 7 region, expanded in η from the 3 × 7 core.
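The three shape parameters defined above can be computed from simplified inputs as follows; the argument names and input format are assumptions for illustration, not the trigger's interfaces:

```python
def shower_shapes(e_had_t, e_em_t, strip_cells, core_37=None, core_77=None):
    """Toy e/gamma shower-shape variables from the definitions in the text.

    e_had_t / e_em_t : cluster ET in the hadronic / EM calorimeter.
    strip_cells      : ET values of first-layer (strip) cells in the window.
    core_37, core_77 : second-layer ET sums in the 3x7 core and 7x7 region.
    """
    # R_had: hadronic leakage, small for genuine electrons and photons.
    r_had = e_had_t / e_em_t
    # E_ratio: contrast of the two hottest strips; near 1 means a single
    # maximum (electron/photon), lower values suggest pi0 -> gamma gamma.
    e1, e2 = sorted(strip_cells, reverse=True)[:2]
    e_ratio = (e1 - e2) / (e1 + e2)
    # R_eta: fraction of second-layer ET contained in the 3x7 core.
    r_eta = None if core_37 is None else core_37 / core_77
    return r_had, e_ratio, r_eta
```

Cuts on these variables, tightened from loose to tight, provide the increasing background rejection described below.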
In addition, the electron selection requires that a track be matched to the calorimeter cluster.
For electrons, three sets of reference cuts are defined with increasing power to reject background: loose, medium, and tight. All selections include the same cuts on the shower shape parameter, R η , and hadronic leakage parameter, R had . The medium selection adds cuts on the shower shape in the first calorimeter layer, E ratio , track quality requirements and stricter cluster-track matching. The tight selection adds, on top of the medium selection, requirements on the ratio, E T /p T , of calorimeter cluster E T to inner detector track p T , a requirement for a hit on the innermost tracking layer, and particle identification by the TRT. For photons, two reference sets of cuts, loose and tight, are defined. Only the loose selections were used for triggering in 2010. The loose photon selection is the same as the calorimeter-based part of the loose electron selection. The tight selection, in addition, applies cuts on cluster shape in the first calorimeter layer, E ratio , and further requirements on cluster shape in the second calorimeter layer. For more detailed information on e/γ triggers in 2010, see Ref. [26].

Table 7 gives an overview of the rates of the main e/γ triggers used in the 2010 menu for instantaneous luminosities around 10 32 cm −2 s −1 . The E T thresholds of the electron and photon triggers range from 5 GeV to 40 GeV. In addition, supporting triggers were deployed, which were used for efficiency extraction, monitoring, commissioning and cross-checks. The L1 and HLT trigger rates of e/γ triggers are shown in Fig. 36 as a function of luminosity. No significant deviation from linearity was observed during 2010 running. It should be noted that during the course of 2010, no deterioration in performance of e/γ triggers or effect on rates was observed due to in-time or out-of-time pile-up.

Electron and photon trigger efficiencies
Trigger efficiencies are presented for electrons and photons identified by the offline reconstruction. More details are given in Ref. [26], including a full study of the systematic uncertainties in the plateau efficiencies which amount to ∼0.4% for the electron trigger and ∼1% for the photon trigger. The EF selection of electrons and photons is very similar to the offline identification: the same criteria are used for loose, medium and tight selections in offline reconstruction as detailed in Sect. 6.2.1.
The determination of the efficiencies of electron and photon triggers share the following common selection criteria. Collision event candidates are selected by requiring a primary vertex with at least three tracks. Rare events that contain very localised high-energy calorimeter deposits not originating from proton-proton collisions, for example from sporadic discharges in the calorimeter or cosmic ray muons undergoing a hard bremsstrahlung, are removed, resulting in predicted losses of less than 0.1% of minimum-bias events and 0.004% of W → eν events [27]. In addition, events are rejected if the candidate electromagnetic cluster is located in a problematic region of the EM calorimeter, for example where the effect of inactive cells could be significant. Due to hardware problems [28], the signal could not be read out from ∼2% of the EM calorimeter cells in 2010. Offline electrons are selected if they are within the region |η| < 2.47 and outside the transition between the barrel and end-caps of the EM calorimeter, 1.37 < |η| < 1.52. The acceptance region for photons is limited to |η| < 2.37 due to the geometrical acceptance of the first layer of the EM calorimeter (fine strips in the η direction), which is crucial for the rejection of background photons originating from π 0 decay. The decays Z → ee and W → eν provide samples to measure the electron trigger efficiency in the higher-E T range (>15 GeV). The Z → ee decays provide a sample of electrons to use with the tag-and-probe method. In the case of W → eν decays, the orthogonal trigger method is employed, using the E_T^miss triggers with thresholds between 20 and 40 GeV to collect the data sample.
Figure 37 compares the efficiencies of the e15_medium and e20_loose triggers at the EF, measured in W boson events, with those measured in Z boson events. The dominant contribution (0.4%) to the systematic uncertainty in the plateau efficiency comes from an analysis of the spread of differences in efficiency between data and simulation as a function of E T and η. Figure 37(b) shows that the response in η is flat except at the outer edges of the end-caps. Above 20 GeV the e15_medium trigger efficiency for W → eν and Z → ee events is greater than 99%.
In contrast to electrons, there is no suitable decay channel that would allow the trigger efficiency to be measured for prompt photons in the ∼10-50 GeV energy range using tag and probe or orthogonal triggers. Therefore, the bootstrap method is used, where the HLT efficiency is measured for events that pass a lower L1 E T threshold. For example, the g20_loose efficiency is measured using a sample of events passing the 14 GeV E T L1 threshold (EM14). In most physics analyses, the photons are selected offline with tight identification requirements. Thus, the trigger efficiency is shown with respect to photons identified with the tight offline requirements. The bootstrap method relies on measuring the HLT efficiency in a p T region where the L1 trigger is fully efficient with respect to offline photons. It has been verified that L1_EM14 is fully efficient for photon clusters with E T > 20 GeV using a sample of events selected by the L1_EM5 trigger. The bootstrap method suffers from a large contamination of fake photons, such as hadronic jet clusters mis-reconstructed as photons. The bias on the measured efficiency has been estimated to be less than ∼0.25% for photons with E T > 25 GeV by comparing the efficiencies from data with those from a signal-only simulation. Figure 38 shows the L2 and EF efficiencies for the g20_loose trigger, as functions of offline tight photon E T and η. For the η distribution, photons were selected with E T > 25 GeV in the plateau region of the turn-on curve. The L2 and EF g20_loose triggers reach the efficiency plateau at about E T = 25 GeV, with efficiencies above this threshold of greater than 99% for both L2 and EF. The efficiency remains flat, at the plateau value, as far as can be tested in the 2010 data, up to ∼500 GeV. The agreement between the efficiencies measured in data and simulated events is better than 1%.
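The bootstrap measurement described above reduces to counting HLT passes among events collected by the lower L1 threshold in its efficiency plateau; the plateau cut and inputs below are illustrative:

```python
import numpy as np

def bootstrap_hlt_efficiency(et, passed_l1, passed_hlt, et_min=20.0):
    """Sketch of the bootstrap method: measure the HLT efficiency on events
    collected by a lower L1 threshold (e.g. EM14 for g20_loose),
    restricted to an ET region where that L1 item is fully efficient.
    et_min is an illustrative plateau cut, not an official value."""
    et = np.asarray(et)
    # Denominator: events taken by the lower L1 item, in the L1 plateau.
    sel = np.asarray(passed_l1) & (et > et_min)
    # Numerator: the subset that also passes the HLT selection under study.
    return (sel & np.asarray(passed_hlt)).sum() / sel.sum()
```

The restriction to the L1 plateau is what lets the ratio be interpreted as the HLT efficiency alone; contamination from fake photons in the denominator is the bias quantified in the text.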

Muons
Muons are produced in many final states of interest to the broad physics programme being conducted at the LHC, from SM precision physics, such as top quark and W boson mass measurements, to searches for new physics. Muons are identified with high purity compared to other signatures and cover a wide momentum range between a few GeV and several TeV. Trigger thresholds in the p T range 4-10 GeV are used to collect data for measurements of processes such as J/ψ → μμ, low-p T di-muons, and Z → τ τ . Higher p T thresholds are used to collect data used to measure the properties of SM particles such as W and Z bosons, and top quarks [20,21,23], as well as to search for new physics, like the Higgs boson, SUSY [25] and extra-dimension models. Some of these channels, such as J/ψ → μμ, Z → μμ, and W → μν decays, are valuable benchmarks to extract calibration and alignment constants, and to establish the detector performance.

Muon reconstruction and selection criteria
The trigger reconstruction algorithms for muons at L1 and the HLT are described in Sects. 4.3 and 5.4 respectively. The selection criteria applied to reconstructed muon candidates depend on the algorithm with which they were reconstructed. The MS-only algorithm selects solely on the p T of the muon; the combined algorithm makes selections based on the match between the inner detector and muon spectrometer tracks and their combined p T ; the isolated muon algorithm applies selection criteria based on the amounts of energy found in the isolation cones. Table 8 gives an overview of the principal muon triggers and their approximate rates at a luminosity of 10 32 cm −2 s −1 . In addition to these principal physics triggers, a range of supporting triggers were included for commissioning, monitoring, and efficiency measurements. In 2010 running, in order to maximize acceptance, all HLT selections were based on L1 triggers using the low-p T logic (described in Sect. 4.3), including mu13, mu20 and mu40 that were seeded from the L1 MU10 trigger.

Muon trigger menu and rates
The trigger rates at L1, L2, and EF are dependent on thresholds, algorithms (Sect. 5.4) and luminosity. The trigger rates have been measured as a function of the luminosity and parametrized with (1):

r = c 1 L + c 0 N BC  (1)

where r is the rate, L the instantaneous luminosity, N BC the number of colliding bunches, and c 1 , c 0 are proportionality constants. The second term represents the contribution to the trigger rate from cosmic rays: as the number of colliding bunches increases, so does the amount of time the trigger gate is open to accept cosmic rays. The instantaneous luminosity was taken from the online measurements averaged over ten successive luminosity blocks.
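The parametrization r = c1 L + c0 N BC can be fitted to measured rates by linear least squares; this is a toy sketch of such a fit, not the fitting procedure used in the paper:

```python
import numpy as np

def fit_rate(lumi, n_bc, rate):
    """Fit r = c1*L + c0*N_BC by linear least squares. Inputs are arrays of
    per-measurement instantaneous luminosity, number of colliding bunches
    and observed trigger rate (toy sketch, illustrative units)."""
    # Design matrix with one column per term of the parametrization.
    A = np.column_stack([lumi, n_bc])
    (c1, c0), *_ = np.linalg.lstsq(A, np.asarray(rate, dtype=float),
                                   rcond=None)
    return c1, c0
```

A significant fitted c0 signals a cosmic-ray contribution scaling with the number of colliding bunches, as observed for the MS-only triggers.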
The measured muon trigger rates are shown for L1 and EF in Fig. 39 together with lines representing the result of fitting (1) to the measurements. Steps in the rate are due to the increases in N BC , and hence the contribution to the rate from cosmic rays. This is significant at L1 and for algorithms using only the muon spectrometer data at the HLT. For combined algorithms, the contribution from cosmic rays to the rate is negligible (within the errors of the fit).

Muon trigger efficiency
The muon trigger efficiencies have been measured for offline muons [29]. The L1 RPC trigger efficiencies measured using an orthogonal L1 calorimeter trigger are shown in Fig. 40(a) for various thresholds. The efficiencies measured using the tag and probe method with J/ψ → μμ and Z → μμ decays are shown for the L1 TGC trigger in Fig. 40(b). The geometrical acceptance of the RPC low-p T trigger is about 80%, which explains the lower efficiency compared to the TGC trigger, which has a geometrical acceptance close to 95%. For the RPC trigger, a further reduction in plateau efficiency is evident for the high-p T (p T > 10 GeV) triggers compared to the low-p T triggers (p T ≤ 10 GeV). About half (6%) of this difference is due to a smaller geometrical coverage of the high-p T triggers. Part of this inefficiency will be recuperated in the muon spectrometer upgrade planned for 2013. The remaining difference is largely due to detector inefficiency, which affects the high-p T trigger more than the low-p T trigger due to the additional coincidence requirements. Improved efficiency is expected for 2011 running.
The efficiency in the HLT was determined using the tag and probe method with J/ψ → μμ samples for low p T (6 GeV) triggers and Z → μμ for high p T (13 GeV) triggers. In both studies, collision events were selected by requiring that the event has at least three tracks associated with the same reconstructed primary vertex. Reference muons reconstructed offline using both ID and MS information were required to be inside the fiducial volume of the muon triggers (|η| < 2.4) and the associated ID track was required to have at least one Pixel hit and at least six SCT hits. Events were required to contain a pair of reference muons with opposite charge and an invariant mass lying within a window around the mass of the relevant resonance: 2.86 GeV < m μμ < 3.34 GeV for J/ψ → μμ decays and 77 GeV < m μμ < 106 GeV for Z → μμ decays. The resulting efficiency in the low-p T region for the mu6 trigger is shown in Fig. 41. For the high-p T region, Fig. 42 shows the efficiency as a function of p T for the mu13, mu20 and mu40_MSonly triggers in the TGC and RPC regions derived from the weighted average of the efficiency measured from the J/ψ and Z samples. Note that the 40 GeV threshold trigger has not yet reached its plateau efficiency in the highest p T bin in the figure; extending the figure to higher p T is limited by the small number of probe muons above 90 GeV. The efficiencies are seen to have a sharp turn-on with a plateau efficiency (p T > 13 GeV) for the mu13 trigger of 74% for the barrel region (dominated by the RPC geometrical acceptance), Fig. 42(a), and 91% for the end-cap region, Fig. 42(b). The systematic uncertainty on the plateau efficiency has been evaluated to be ∼1%.

Jets
Jet signatures are important for QCD measurements [30, 31], top quark measurements, and searches for new particles decaying into jets [32,33]. Data collected with jet triggers also provide important control samples for many other physics analyses. Jet triggers select events containing high p T clusters, and can be separated into four categories: inclusive jets (J), forward jets (FJ), multi-jets (nJ, n = 2, 3 . . .), and total jet E T (JE).

Jet reconstruction and selection criteria
For a large part of 2010 data-taking, only L1 jet triggers (Sect. 4.2) were used for selection. L2 rejection was enabled late in 2010, while EF rejection was not enabled during 2010 running as it was not needed [34].
Calibration constants that correct for the hadron response of the non-compensating calorimeters in ATLAS (hadronic energy scale) were not applied in the trigger during 2010 data-taking. As a result, the jet trigger algorithms applied cuts to energy variables at the electromagnetic scale, the scale for energy deposited by electrons and photons in the calorimeter. Figure 43 shows the ratio of the L2 jet E T to the offline jet E T as a function of the offline jet E T . Data and MC simulation agree well.

Table 9 The primary triggers in each of the jet trigger categories with their L1 threshold and approximate prescale factor for an instantaneous luminosity of ∼10 32 cm −2 s −1 (a prescale value of 1 means unprescaled). The trigger name contains the EF threshold value; the L2 threshold is 5 GeV lower

Jet trigger menu and rates
The principal jet triggers for an instantaneous luminosity of ∼10 32 cm −2 s −1 are listed in Table 9 for inclusive jets, forward jets, multi-jets, and total jet E T . The set of L1 prescales applied provided an approximately flat event yield as a function of jet p T . The L1 rates of the inclusive and multi-jet triggers are shown in Fig. 44. During 2010 running, the level of pileup was small enough not to have a visible effect on the rates, which were observed to rise linearly with instantaneous luminosity.

Jet trigger efficiency
The jet trigger efficiency was measured using the orthogonal trigger and bootstrap methods. For the lowest-threshold chains, the jet trigger efficiency was calculated using the orthogonal trigger method with events selected by the L1_MBTS_1 trigger (Sect. 6.1). For the higher thresholds, the bootstrap method was used. The systematic uncertainty in the plateau efficiencies is less than ∼1%. This efficiency determination [30] used jets that were reconstructed offline from calorimeter clusters at the electromagnetic scale, using the anti-k T jet algorithm [15] with R = 0.4 or R = 0.6, in the region |η| < 2.8. These jets were calibrated for calorimeter response to hadrons using parameters taken from the simulation, after comparison with the data [35]. Cleaning cuts were applied to suppress fake jets from noise, cosmic rays, and other sources. These cleaning cuts were designed to reject pathological jets with almost all energy coming from a very small number of cells, out-of-time cell signals, or abnormal electromagnetic components. These cuts are explained in detail in Ref. [36].
The efficiency of the L1_J30 jet trigger in the central region, |η| < 0.8, of the detector is shown in Fig. 45(a) as a function of offline jet p T for two different data-taking periods, the difference between the periods being that in periods G to I the LHC beam had a bunch train structure. The change in bunch structure had a small effect on the efficiency turn-on curve and a negligible effect on the efficiency in the plateau region. The efficiency of the L2_j45 trigger chain, which includes the L1_J30 trigger, is also shown in Fig. 45(a) for periods G to I, for which L2 rejection was enabled. Since the efficiency turn-on is significantly sharper for L2 than L1, the L2 thresholds were set 15 GeV higher than the L1 values, reducing the overall trigger rate while ensuring that the L2 trigger reached full efficiency at the same p T value as the corresponding L1 trigger. Jet trigger efficiencies integrated over the whole year are shown in Fig. 45(b) for several chains as a function of the calibrated offline jet p T . Figure 46 shows the efficiency for two thresholds of the inclusive forward jet trigger. The efficiency plateaus at a lower p T than for central jet triggers due to different energy resolutions and different contributions from noise and pile-up. After reaching the plateau, the jet and forward jet triggers remain fully efficient to within ∼1%.

Fig. 45 (a) Efficiency of the L1_J30 trigger as a function of offline jet transverse momentum (after applying hadronic calibration) for two different data-taking periods. For the second period the efficiency of the L2_j45 trigger is also shown. (b) Efficiency for several triggers, integrated over 2010
The total jet E T triggers require the scalar E T sum of all jets in the event (defined as H T ) to be higher than a given threshold. Figure 47 shows the distribution of H T for events, triggered by an orthogonal muon trigger, that pass three different JE trigger thresholds, compared to predictions from the MC. The MC distributions are in agreement with the data.
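The H T quantity used by these triggers is a simple scalar sum; a minimal sketch is given below. The optional jet E T threshold is an illustrative assumption, since the exact jet selection is not specified here.

```python
def event_ht(jet_ets, min_et=0.0):
    """Scalar sum of jet transverse energies (H_T) in GeV.

    jet_ets: list of jet E_T values in GeV.
    min_et: jets below this E_T are ignored (an assumption for
            illustration; the paper does not quote the jet selection).
    """
    return sum(et for et in jet_ets if et > min_et)


def passes_je_trigger(jet_ets, ht_threshold):
    """True if the total jet E_T (H_T) exceeds the trigger threshold."""
    return event_ht(jet_ets) > ht_threshold
```

For example, an event with jets of 50, 30 and 20 GeV has H T = 100 GeV and would pass a hypothetical JE95 threshold but not a JE120 one.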
In the initial phase of data-taking the jet triggers were limited to inclusive and multi-jet topologies, with no cuts on the relative directions of the jets. Near the end of the 2010 data-taking, additional triggers that require di-jets with large rapidity differences or small differences in azimuthal angle were implemented at L2. Figure 48 shows the Δφ distributions for di-jets at L2, indicating that these distributions are well described by the simulation.

Fig. 48
The Δφ between the highest p T and second highest p T jet in the event for jets reconstructed at L2

Taus
The ATLAS physics programme uses tau leptons for SM measurements and new physics searches. Being able to trigger on hadronic tau signatures is important for this part of the ATLAS physics programme. Dedicated trigger algorithms have been designed and implemented based on the features of hadronic tau decays: narrow calorimeter clusters and a small number of associated tracks. Due to the high production rate of jets with very similar features to hadronic tau decays, keeping the rate of tau triggers under control is particularly challenging.

Tau reconstruction and selection criteria
At L1 the tau trigger uses EM and hadronic calorimeter information within regions of 4 × 4 trigger towers ( Δη × Δφ ≈ 0.4 × 0.4) to calculate the energy in a core and an isolation region (Sect. 4.2).
At L2 selection criteria are applied using tracking and calorimeter information, taking advantage of narrowness and low track multiplicity to discriminate taus from jets. The L2 tau candidate is reconstructed from cells in a rectangular L2 RoI of size Δη × Δφ = 0.6 × 0.6 centred at the L1 RoI position. The L2 calorimeter algorithm first refines the L1 RoI position using the second layer of the EM calorimeter. It then selects narrow jets in the detector by means of a calorimeter shape variable determined only from the second layer of the EM calorimeter. The shape variable, R EM , is an energy-weighted radius,

R EM = Σ cells E cell (R cell ) n / Σ cells E cell , (2)

where E cell is the energy of the calorimeter cell and R cell is the radius ΔR (defined in Sect. 5.1) of the cell from the centre of the L2 RoI, which is squared (n = 2) at L2. Track reconstruction at L2 uses the SiTrack algorithm (Sect. 5.1), but to minimize the execution time, tracks are not extended to the TRT. Tracks with p T > 1.5 GeV are reconstructed in the L2 RoI.
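An energy-weighted radius of this kind can be sketched as below. The weighting by cell energy and the use of ΔR from the RoI centre follow the text; the normalisation by the total cell energy is an assumption made for this illustration.

```python
import math


def em_radius(cells, centre_eta, centre_phi, n=2):
    """Energy-weighted EM radius of an RoI (illustrative sketch).

    cells: list of (energy, eta, phi) tuples for calorimeter cells.
    n=2 corresponds to the squared-radius weighting used at L2,
    n=1 to the EF definition.  Returns 0 for an empty RoI.
    """
    num = 0.0
    den = 0.0
    for e, eta, phi in cells:
        # wrap the phi difference into (-pi, pi]
        dphi = math.atan2(math.sin(phi - centre_phi), math.cos(phi - centre_phi))
        dr = math.hypot(eta - centre_eta, dphi)
        num += e * dr ** n
        den += e
    return num / den if den > 0 else 0.0
```

A narrow tau-like cluster gives a smaller R EM than a wide QCD-jet-like one, which is exactly the handle the L2 selection exploits.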
Exploiting the same characteristics of narrowness and low track multiplicity, the EF selects 1-prong and multi-prong decays, with different selection criteria, using algorithms that are similar to the offline reconstruction algorithms [1]. The EF tau candidate is reconstructed from cells in a rectangular region of size Δη × Δφ = 0.8 × 0.8 centred at the L1 RoI position. The position, transverse energy, and calorimeter shower shape variables of the EF tau candidate are calculated from cells of all calorimeter layers within this 0.8 × 0.8 region. An overall hadronic calibration [37] is applied to all cells, and a tau-specific calibration is applied to the tau trigger candidate. The EM radius shape variable used at the EF is defined by Eq. (2) with n = 1. Additional quality criteria are applied to tracks reconstructed in the RoI, and if more than one track is found a secondary vertex reconstruction is attempted.
The stability of the tau trigger selection variables against pile-up was evaluated by comparing the distributions of these variables for events passing the L1_TAU5 trigger from data-taking periods A-C with those from period I. Periods A-C contain a negligible amount of pile-up, while events from period I contain the largest amount of pile-up (Sect. 2) observed in 2010. The distributions of the two most important variables ( p iso T / p core T at L2 and R EM at EF) are shown in Fig. 49 for events with and without pile-up. The variable p iso T / p core T describes the ratio of the scalar p T sums of the tracks in an isolation ring (ΔR = 0.1 to 0.3) and in the core area (ΔR < 0.1). The plots show a small shift due to the presence of additional energy and tracks, but these variables are in general quite stable with respect to the pile-up of two to three collisions per bunch crossing. The same behaviour was observed for other variables used for making the HLT decision.
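The isolation ratio described above can be sketched directly from its definition; the handling of an empty core cone (returning 0) is a choice made for this sketch, not specified by the text.

```python
import math


def track_isolation_ratio(tracks, axis_eta, axis_phi,
                          core_dr=0.1, iso_dr=0.3):
    """p_T^iso / p_T^core: ratio of scalar track-p_T sums in the
    isolation ring (core_dr < dR < iso_dr) and the core cone
    (dR < core_dr) around the tau axis.

    tracks: list of (pt, eta, phi) tuples.
    """
    core = 0.0
    iso = 0.0
    for pt, eta, phi in tracks:
        dphi = math.atan2(math.sin(phi - axis_phi), math.cos(phi - axis_phi))
        dr = math.hypot(eta - axis_eta, dphi)
        if dr < core_dr:
            core += pt
        elif dr < iso_dr:
            iso += pt
    return iso / core if core > 0 else 0.0
```

A genuine hadronic tau, with its tracks collimated inside the core cone, gives a ratio near zero, while a QCD jet with activity in the ring gives a larger value.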

Tau trigger menu and rates
Both single tau triggers and tau triggers in combination with electrons, muons, jets and missing energy signatures were present in the 2010 trigger menus. Tau signatures were used in combination with other triggers to keep rates low enough while maintaining acceptance for the physics processes of interest. Table 10 shows a subset of these items with their rates that represent the lowest threshold triggers that remained unprescaled at a luminosity of 10 32 cm −2 s −1 . Figure 50 shows the trigger rates for various L1 and HLT tau triggers as a function of instantaneous luminosity showing a linear increase of rates during 2010 running.

Tau trigger efficiency
Tau trigger efficiencies were measured using offline reconstructed tau candidates in events containing QCD jets. Since QCD jets are the biggest source of fake taus in data, a sample of jets reconstructed offline provides a useful reference for tau trigger performance measurements. For the L1 trigger efficiency determination, offline jets were reconstructed with the anti-k T algorithm (using parameter R = 0.4) and required to have at least one associated track. Figure 51(a) shows the efficiency of the L1_TAU trigger for these jets, as a function of the jet E T . Although the L1 trigger efficiency has a slower turn-on for jets than for true taus, due to the wider shower profile of QCD jets, above the turn-on region the performance is similar, as confirmed from MC simulation studies. The L1 trigger efficiency reaches a plateau value of 100% (to within a systematic uncertainty of ∼1%). Figure 51(b) shows the efficiency of the tau16_loose trigger for offline tau candidates in data, simulated di-jet events, and simulated signal τ events. Data events were selected by requiring two back-to-back jets (within 0.3 radians), balanced in p T (within 50% of the higher p T jet). The data sample was collected with jet triggers (Sect. 6.4). Bias related to the jet trigger selection was removed by randomly selecting one of the jets (tag jet) that passed the jet trigger and using the other jet (probe jet) to match to a reconstructed tau candidate. Reconstructed tau candidates that pass the tight offline identification requirements and match a probe jet ( ΔR < 0.4) were used as the denominator of the efficiency measurement. The numerator was defined as the subset of those candidates that also passed the tau16_loose trigger.

Fig. 51 (a) Efficiency of the L1_TAU trigger as a function of jet E T . (b) Efficiency for tau candidates to pass the HLT tau16_loose trigger in a di-jet data sample, simulated QCD di-jets and a simulated tau signal sample, as a function of the offline tau p T
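The bias-removal step of this tag-and-probe procedure can be sketched as follows. The event structure and field names are hypothetical; only the logic (random tag choice among jets that fired the jet trigger, efficiency counted on the other jet) follows the text.

```python
import random


def tag_and_probe_efficiency(events, rng=None):
    """Tag-and-probe efficiency estimate (illustrative sketch).

    events: list of (jet_a, jet_b) pairs, each jet a dict with
    hypothetical fields:
      'fired_jet_trigger'       : did this jet fire the jet trigger?
      'probe_passes_tau_trigger': does the matched tau candidate on
                                  this jet pass the tau trigger?
    One jet that fired the jet trigger is chosen at random as the
    tag; the other jet is the probe.  Returns the fraction of
    probes passing the tau trigger.
    """
    rng = rng or random.Random(0)
    n_probe, n_pass = 0, 0
    for jet_a, jet_b in events:
        fired = [j for j in (jet_a, jet_b) if j['fired_jet_trigger']]
        if not fired:
            continue  # event was not selected by the jet trigger
        tag = rng.choice(fired)
        probe = jet_b if tag is jet_a else jet_a
        n_probe += 1
        n_pass += probe['probe_passes_tau_trigger']
    return n_pass / n_probe if n_probe else 0.0
```

Randomising the tag choice when both jets fired is what removes the trigger-selection bias from the probe sample.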
The efficiencies from data agree with those for the simulated di-jets, but have a slower turn-on than for the simulated signal sample. This is because of the lower L1 efficiency for jets than taus in the threshold region. The trigger efficiency for offline tau candidates with p T > 30 GeV is 94% with a total uncertainty of ∼5%. Measurements of the tau trigger efficiency from Z → τ τ and W → τ ν decays are consistent with the QCD jet measurement but, with 2010 data, have relatively large statistical uncertainties.

Missing transverse energy
The missing transverse-energy (E miss T ) signature is exploited in measurements of the W boson and top quark [20,21,23] to provide information on the kinematics of neutrinos in the events. It is also extensively used in searches for new physics [24,25] including possible new particles that are not directly detected [38]. The E miss T is estimated by calculating the vector sum of all energies deposited in the calorimeters, projected onto the transverse plane, corrected for the transverse energies of all reconstructed muons. The E miss T triggers [39] are designed to select events for which the measured transverse energy imbalance is above a given threshold. Triggers based on the scalar sum of the transverse energies ( E T ) are also used.

Reconstruction and selection criteria
During 2010, the E miss T and E T triggers used calorimetric measurements calibrated at the EM scale. In the L1 calorimeter trigger system trigger towers are used to compute both E miss T and E T over the full ATLAS acceptance (|η| < 4.9). The magnitude of E miss T is not calculated directly at L1, but rather is derived from a look-up table that takes the values of E x and E y (expressed in integer values in GeV) as inputs [39]. The resulting resolution smearing is ∼1 GeV. The noise suppression scheme adopted at L1 in 2010 was very conservative with a rather high E T threshold, in the range 1.0-1.3 GeV, applied to each trigger tower before computing the sums E x , E y and E miss T . The discreteness of the L1 approach is smoothed out at L2, where the E x and E y values from L1 are summed in quadrature and a threshold is placed on the magnitude √(E x ² + E y ²). At L2, the L1 energy measurement can also be corrected using the measured momenta of detected muons in the event. Since the muon correction has only a small impact on trigger rates, for 2010 running the correction was calculated at L2 and the value of the correction stored in the event. However, this correction was not applied to the E miss T value calculated online, and thus was not used in the trigger decision.
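The quadrature sum at L2, and the integer-GeV truncation responsible for the ∼1 GeV smearing at L1, can be sketched as below. The real L1 look-up table's binning is not reproduced; simple truncation toward zero stands in for it.

```python
import math


def l1_met_lookup(ex_gev, ey_gev):
    """L1-style E_T^miss: E_x and E_y are truncated to integer GeV
    before the magnitude is formed, mimicking the look-up table
    inputs (the real table's binning is not reproduced here)."""
    return math.hypot(int(ex_gev), int(ey_gev))


def l2_met(ex_gev, ey_gev):
    """L2 sums the component values in quadrature and the trigger
    threshold is applied to this magnitude."""
    return math.hypot(ex_gev, ey_gev)
```

For E x = 3.9 GeV and E y = 4.2 GeV, the truncated L1 estimate is 5 GeV, illustrating the sub-GeV smearing relative to the exact magnitude.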
Because recalculation of E miss T and E T using the full granularity of the calorimeters requires access to the whole event, it is only performed at the EF. Both E miss T and E T are estimated by the same algorithm, which loops over all calorimeter cells discarding those whose energy is negative or has a value less than three standard deviations of the noise distribution. For each of the cells with energy above threshold, an energy vector is defined whose direction is given by the unit vector starting from the nominal interaction point and pointing to the cell centre, with magnitude equal to the measured cell energy.
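The EF cell loop described above can be sketched as follows; the one-sided 3σ noise cut and the cell-direction vectors follow the text, while the reduction of each cell vector to its transverse projection (E T = E / cosh η) is a standard simplification assumed here.

```python
import math


def ef_met_and_sumet(cells, noise_cut_sigma=3.0):
    """EF-style E_T^miss and sum-E_T from calorimeter cells (sketch).

    cells: list of (energy, eta, phi, noise_rms) tuples.
    Cells with energy below noise_cut_sigma * noise_rms are
    discarded; this one-sided cut also removes all negative-energy
    cells, as in the text.  Each surviving cell contributes a vector
    pointing from the nominal interaction point to the cell centre;
    only the transverse projection enters, so E_T = E / cosh(eta).
    """
    ex = ey = sumet = 0.0
    for e, eta, phi, noise in cells:
        if e < noise_cut_sigma * noise:
            continue  # one-sided noise suppression
        et = e / math.cosh(eta)
        ex += et * math.cos(phi)
        ey += et * math.sin(phi)
        sumet += et
    return math.hypot(ex, ey), sumet
```

Two back-to-back deposits give a large sum-E T but essentially zero E miss T, while an unbalanced event gives both.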

Menu and rates
There are eight L1 E miss T thresholds shown in Table 1. The L2 (EF) thresholds were set at least 2 GeV (10 GeV) higher than the corresponding thresholds at L1 to mask the reduced granularity of the look-up table and the effects of the slowly increasing efficiency at L1. For example, the xe40 trigger has a 25 GeV threshold at L1 (L1_XE25) and a 30 GeV threshold at L2 (L2_xe30). To control the trigger rate as the instantaneous luminosity increased it was necessary to reduce the energy difference between the L1 and EF thresholds for some chains; these chains were suffixed with "tight" in the trigger menu, e.g. xe30_tight. For these triggers, the effect of the L1 efficiency turn-on extends above the EF threshold. The principal E miss T and E T triggers used in 2010 and their rates at a luminosity of 10 32 cm −2 s −1 are shown in Table 11. Figure 52 shows the impact of in-time pile-up on E miss T . The measured L1 and EF distributions are compared to a MC sample of minimum bias events simulated without pile-up. The simulation reproduces the E miss T distributions for the bunch crossings with a single pp collision (N pv = 1). For data events with multiple collisions (0.6-2.0 collisions/BC) there is a visible broadening of the E miss T distribution reflecting an increase in E miss T due to pile-up. The E miss T trigger rates at L1 and the EF are shown in Fig. 53 for the xe40 trigger, which has a 25 GeV threshold at L1 (L1_XE25) and a 40 GeV threshold at the EF. The E miss T rate increase with luminosity is faster than linear, due to the effects of pile-up.

Resolution
The correlations between the trigger and offline values of E miss T and E T using uncalibrated calorimeter energies are  shown in Figs. 54 and 55. The offline calculations use an algorithm (MET_Topo) which sums the energy deposited in topological clusters [14]. Figure 54(a) shows the correlation between L1 and offline E miss T for events triggered by the mu13 trigger (Sect. 6.3). The L1 E miss T resolution is worse than offline, as expected, while the EF shows a good correlation and improved resolution with respect to L1, as seen in Fig. 54(b). Figure 55(a) shows the correlation between the L1 E T and that calculated by the offline algorithm MET_Topo for events selected by the mu13 trigger. L1 underestimates the E T particularly at low values, due to the rather conservative noise suppression (i.e. high trigger tower E T thresholds) employed at L1. The effect is to shift the energy scale at low E T values, as shown by the non-linear behaviour in Fig. 55(a).
The plot in Fig. 55(b) shows the correlation between the EF and offline values of E T . There is an offset of about 10 GeV for the values of E T computed at the EF, as the offline E T approaches zero. The offset arises because of a one-sided noise cut applied by the trigger, compared to symmetric cuts applied offline. The main motivation for the choice made at the EF is to protect against large negative energy values, which could arise from read-out problems and which would constitute a source of fake E miss T . The choice of the online noise cut (of three times the r.m.s. noise) is a compromise between minimising the offset (a lower cut of twice the r.m.s. noise would give a much larger bias of ∼200 GeV) and maintaining sensitivity, since higher thresholds would cause a greater loss of the real signal [39]. The L1 E miss T efficiency turn-on, shown in Fig. 56(a), is modelled well by the MC. The agreement with the simulation is not perfect for low energies; background events from QCD processes and W boson decays into taus, which subsequently decay into muons, are difficult to simulate precisely. Figure 56(b) shows the corresponding efficiency for the full trigger chain including a 40 GeV E miss T threshold at EF. The initial faster rise of the efficiency turn-on is dominated by the EF E miss T resolution whereas the slower rise approaching the plateau is due to the slower L1 turn-on. This behaviour is modelled well by the simulation. Once the plateau has been reached the E miss T triggers remain fully efficient within a negligible systematic uncertainty. Figure 57(a) shows the L1 efficiency turn-on for a nominal E T threshold of 50 GeV. The late turn-on, starting only at about 150 GeV in offline E T , results from an under-estimation of E T at L1 due to the noise suppression scheme, as described in Sect. 6.6.3. The efficiency reaches 90% at about 260 GeV. Data and MC agree reasonably well; the shift in the efficiency turn-on is due to small errors in the modelling of noise at the individual cell level in the simulation.
Figure 57(b) shows the efficiency of the EF selection alone, not including L1 and L2. The EF efficiency reaches 90% at about 230 GeV. Once the plateau has been reached the E T triggers remain fully efficient within a negligible systematic uncertainty. Data and simulation agree well. More details can be found in Ref.

b-Jets
The ability to separate heavy flavour jets from light-quark and gluon jets is an important asset for many physics analyses, such as measurements in the top-quark sector and searches for Higgs bosons or other new physics signatures. The ability to identify b-jets in the ATLAS trigger system is provided by two types of trigger: muon-jet triggers and lifetime-based triggers. During the 2010 data-taking period, the lifetime triggers were not in active rejection mode and the muon-jet triggers were used to collect data to validate the lifetime triggers. The lifetime triggers will be used in 2011 to collect data for physics analysis. In this section a brief description of the muon-jet triggers is given, but the main focus is on the performance of the lifetime triggers.

b-Jets reconstruction and selection criteria
Muon-jet triggers were used to select events containing jets associated with a low p T muon. At L1 a combined muon-jet trigger, L1_MU0_JX (X = 5, 10, 15, 30, 55), required the lowest threshold muon trigger in combination with a jet. No topological matching between muon and jet is possible at L1. The HLT selection introduces a refinement of the muon selection (L2_mu4) and requires matching within ΔR < 0.4 between the muon and the corresponding L1 jet. The selected jet sample is enriched in b-jets and is used to calibrate both trigger and offline b-tagging algorithms.
Lifetime triggers use tracks and vertices reconstructed at the HLT (in the region |η| < 2.5) to select a sample enriched in b-jets. These triggers are based on the impact parameters of tracks with respect to the reconstructed primary vertex. The HLT selection is based on inner detector tracks reconstructed within a L1 jet RoI. The lowest threshold b-jet trigger is b10, which starts from a L1 jet with a 10 GeV E T threshold (L1_J10).
At the HLT, the first step for the lifetime triggers is to find the location of the primary vertex. The coordinates of the primary vertex in the transverse plane are determined by the beamspot information which is part of the configuration data provided to the algorithm via the online conditions database. The beamspot position can be updated during a run based on information from the online beamspot measurement (Sect. 5.2). During 2010 running, when the lifetime triggers were not in active rejection mode, this update was initiated manually whenever the beamspot showed a significant displacement. The longitudinal coordinate of the primary vertex is determined on an event-by-event basis from a histogram of the z positions of all tracks in the RoI. The z position of the vertex is identified, using a sliding window algorithm, as the z position at which the window contains the most histogram entries. In the case of multiple primary vertices, this algorithm selects the vertex with the most tracks.
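The sliding-window vertex finding just described can be sketched as follows. The bin width, window size and z range below are illustrative numbers, not the trigger's actual settings.

```python
def vertex_z(track_zs, window=10.0, bin_width=1.0, z_range=200.0):
    """Sliding-window estimate of the primary-vertex z position (sketch).

    Histogram the z positions of the tracks, slide a window of fixed
    width over the histogram, and return the centre of the window
    containing the most entries.  With multiple vertices this
    naturally picks the one with the most tracks.
    """
    nbins = int(2 * z_range / bin_width)
    hist = [0] * nbins
    for z in track_zs:
        b = int((z + z_range) / bin_width)
        if 0 <= b < nbins:
            hist[b] += 1
    wbins = max(1, int(window / bin_width))
    best_count, best_bin = -1, 0
    for i in range(nbins - wbins + 1):
        count = sum(hist[i:i + wbins])
        if count > best_count:
            best_count, best_bin = count, i
    # centre of the best window, converted back to a z coordinate
    return (best_bin + wbins / 2.0) * bin_width - z_range
```

A cluster of tracks near z = 50 mm with one outlier is resolved to the cluster position to within roughly half the window width.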
The transverse and longitudinal impact parameters are determined, for each track, as the distances from the primary vertex to the point of closest approach of the track, in the appropriate projection. The impact parameters are signed with respect to the jet axis determined by a track-based cone jet reconstruction algorithm. The impact parameter is positive if the angle between the jet axis and a line from the primary vertex to the point of closest approach of the track is less than 90°.
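In the transverse plane, the sign convention above reduces to the sign of the projection of the vertex-to-POCA vector onto the jet axis, as this sketch shows:

```python
import math


def signed_impact_parameter(poca_x, poca_y, pv_x, pv_y, jet_phi):
    """Transverse impact parameter signed w.r.t. the jet axis (sketch).

    The magnitude is the transverse-plane distance from the primary
    vertex (pv_x, pv_y) to the track's point of closest approach
    (poca_x, poca_y).  The sign is positive when the angle between
    the jet axis and the vertex-to-POCA direction is below 90 degrees,
    i.e. when their projection is non-negative.
    """
    dx, dy = poca_x - pv_x, poca_y - pv_y
    d0 = math.hypot(dx, dy)
    proj = dx * math.cos(jet_phi) + dy * math.sin(jet_phi)
    return d0 if proj >= 0 else -d0
```

Tracks from b-hadron decays tend to populate the positive side, while the negative side is populated mainly by resolution effects, which is what the tagger tuning below exploits.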
Two different methods, likelihood and χ 2 taggers, both based on the track impact parameters, are then used to build a variable discriminating between b and light jets:
Likelihood taggers: longitudinal and transverse impact parameters are combined, using a likelihood ratio method, to form a discriminant variable.
χ 2 tagger: the compatibility of the tracks in the RoI with the beamspot is tested using the transverse impact parameter significance (defined as the transverse impact parameter divided by the transverse impact parameter resolution) [41]. The distribution of the χ 2 probability of the impact parameter significance for all the tracks reconstructed in an RoI is expected to be uniform for light jets, as tracks come from the primary vertex, while it peaks toward 0 for b-jets, which contain tracks that are not from the primary vertex. The χ 2 probability can, therefore, be used as a discriminant variable. It is set to 1 for RoIs that do not contain any reconstructed tracks.
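One way to form a combined χ² probability from per-track compatibility probabilities is Fisher's method (χ² = −2 Σ ln p with 2N degrees of freedom); this particular combination is an assumption for illustration, since the text does not spell out the exact recipe.

```python
import math


def chi2_probability(track_probs):
    """Combined chi^2 probability for the tracks in an RoI (sketch).

    track_probs: per-track probabilities that the transverse
    impact-parameter significance is compatible with the beamspot.
    Combination via Fisher's method is an illustrative assumption.
    Returns 1 for RoIs with no reconstructed tracks, as in the text.
    """
    if not track_probs:
        return 1.0
    x = -2.0 * sum(math.log(p) for p in track_probs)
    n = len(track_probs)  # the chi^2 has 2n degrees of freedom
    # survival function of a chi^2 with 2n dof (closed form for even dof):
    # P(chi^2 > x) = exp(-x/2) * sum_{k=0}^{n-1} (x/2)^k / k!
    half = x / 2.0
    term, total = 1.0, 1.0
    for k in range(1, n):
        term *= half / k
        total += term
    return math.exp(-half) * total
```

Uniformly distributed per-track probabilities (light jets) give a flat combined probability, while several small per-track probabilities (displaced b-decay tracks) push the combined value toward 0.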
Likelihood taggers are more powerful, in principle, but require significant validation from data as they rely on determining probability density functions that give the signal and background probabilities corresponding to a given impact parameter value. The χ 2 tagger, though less powerful, can be tuned more easily on data using the negative side of the transverse impact parameter distribution. This technique is used because the shape of the negative side of the distribution is determined only by resolution effects and there is no significant contribution from highly displaced tracks in this part of the distribution.

Fig. 58 The χ 2 probability distribution before and after the beamspot measurement update in a data-taking period when the beamspot was significantly displaced with respect to the reference
The importance of the online beamspot measurement is demonstrated in Fig. 58 which shows the χ 2 probability distribution of the χ 2 tagger before and after a beamspot update in a data-taking period when the beamspot was significantly displaced with respect to the initial reference. In 2011 the beamspot will be updated automatically every few minutes because a transverse displacement of the beamspot can cause tracks in light-quark jets to artificially acquire large impact parameters and so resemble the tracks in b-jets.

b-Jets menu and rates
During the 2010 data-taking period the muon-jet triggers were the only b-jet triggers in active rejection mode, selecting the calibration sample. The lifetime triggers ran in monitoring mode, allowing for tuning in preparation for activation in 2011 running. Similar algorithms ran at both L2 and the EF.
The muon-jet triggers were maintained at a rate of about 7 Hz, using prescaling when luminosity exceeded 10 31 cm −2 s −1 . Prescaling of the triggers with lower jet thresholds was done in such a way as to collect a sample of events with a uniform jet transverse momentum distribution in the reconstructed muon-jet pairs. The uniformity of the distribution is important for a precise determination of the b-jet efficiency in a wide range of jet transverse momenta.

b-Jet trigger performance
The performance of the χ 2 tagger is shown in Table 12, which gives the rejection obtained from data collected with the b10 trigger and the efficiency obtained from simulation of b-jets with a similar p T distribution to the data. The efficiency measurement from simulation requires a tagged jet RoI matched with an offline jet ( ΔR < 0.4). The offline jet is required to be associated with a true b quark ( ΔR < 0.3) and identified by an offline tagger based on the secondary vertex transverse flight length significance. The data collected with the b10 trigger have been used to tune the χ 2 tagger ready for the activation of the b-jet trigger in 2011 data-taking. The tuning procedure is identical for L2 and EF and consists mainly of a parameterization of the transverse impact parameter resolution. The selection cuts applied at L2 and the EF are chosen to give the optimum overall balance of efficiency and rejection at each level, taking into account the different impact parameter resolutions of the L2 and EF tracking algorithms (Sect. 5.1). Figure 59(a) shows the L2 transverse impact parameter significance distribution for data, where the impact parameter is signed with respect to the jet axis. The negative side of this distribution is mainly due to tracks originating from light-quark decays, allowing the resolution to be studied using an almost pure sample of tracks coming from the primary vertex. A fit was made to the negative part of the impact parameter significance distribution using a double Gaussian function. The result of the fit is shown superimposed on the data points in Fig. 59(a). The same tuning procedure was applied separately to MC simulated data. The χ 2 probability distributions obtained using the parameterized resolution are shown in Fig. 59(b) for data and simulation. Data and MC simulation show reasonable agreement, although there are some differences at values of the χ 2 probability close to 0 and 1.
A typical cut would be to select jets with a χ 2 probability less than 0.07. The peak at 1 reflects the choice of setting the χ 2 probability to 1 for RoIs that do not contain any reconstructed tracks.

B-Physics
The ATLAS B-physics programme includes searches for rare B hadron decays and CP violation measurements, as well as tests of QCD calculations through production and spin-alignment measurements of heavy flavour quarkonia and B baryons [42,43]. B-physics triggers complement the low-p T muon triggers by providing invariant mass based selections for J /ψ, Υ , and B mesons. There are two categories of B-physics triggers, topological and single RoI seeded, each one exploiting a different characteristic of the ATLAS trigger system to manage the event rates.

B-Physics reconstruction and selection criteria
Topological triggers require two muon RoIs to have been found at L1 and the HLT (see Sect. 6.3). The B-physics algorithms in the HLT then combine the information from the two muon RoIs to search for the parent J /ψ, Υ , or B meson, and a vertex fit is performed for the two reconstructed ID tracks. The requirement for two muons at L1 reduces the rate, but is inefficient for events where the second muon does not give rise to a L1 RoI because it has low momentum, or falls outside the L1 acceptance. Single RoI seeded triggers recover events that have been missed by the topological triggers by starting from a single L1 muon and finding the second muon at the HLT. In this approach, tracking is performed in a large region ( Δη × Δφ = 1.5 × 1.5) around the L1 muon. At L2, tracks found in this large RoI are extrapolated to the muon system. The algorithm searches for muon hits within a road around the extrapolated track; if enough hits are found then the track is flagged as a muon. At the EF the search for tracks within the large RoI uses the EF Combined strategy (Sect. 5.4) which starts from the Muon Spectrometer and then adds inner detector information. If a second track is found, it is combined with the first one to search for the parent di-muon object in the same way as in the topological trigger. This approach can also be used in FullScan (FS) mode (Sect. 5.1). The FS mode is particularly useful for triggering Υ events where the muons tend to be separated by more than the RoI size, but requires approximately 8 times more CPU time than the RoI approach.
In both approaches, a series of cuts can be made on the muon pair, requiring: that the two muons have opposite charge; that the di-muon invariant mass lies within a window (J /ψ: 2.5-4.3 GeV, Υ : 8-12 GeV, B: 4-7 GeV, DiMu: >0.5 GeV); and a cut on the χ 2 of the reconstructed vertex. The mass cuts are very loose compared to the mass resolutions (∼40 MeV and ∼100 MeV for J /ψ and Υ respectively). In 2010 chains were run both with and without the opposite sign requirement and with and without a requirement on the vertex χ 2 . Table 13 gives an overview of the main B-physics triggers and their rates at a luminosity of 10 32 cm −2 s −1 . At this luminosity the mu4 trigger was prescaled by 1500 and the 2mu4 trigger was prescaled by 85. The single muon-seeded "DiMu" triggers needed to be prescaled by ∼20; however the topological triggers ran unprescaled. Figure 60 shows the rates for some of the triggers shown in Table 13 as a function of instantaneous luminosity.
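The di-muon selection cuts can be sketched as a simple predicate. The mass windows are taken from the text; the vertex-χ² cut value is a hypothetical placeholder, since the paper does not quote it.

```python
def dimuon_selection(charge1, charge2, mass_gev, vertex_chi2,
                     window=(2.5, 4.3), max_chi2=20.0,
                     require_opposite_sign=True):
    """B-physics di-muon selection sketch.

    charge1, charge2: muon charges (+1 / -1).
    window: invariant-mass window in GeV; the default is the J/psi
            window from the text (use (8, 12) for Upsilon, (4, 7)
            for B candidates).
    max_chi2: hypothetical vertex-quality cut, not the real value.
    """
    if require_opposite_sign and charge1 * charge2 >= 0:
        return False
    lo, hi = window
    if not (lo < mass_gev < hi):
        return False
    return vertex_chi2 < max_chi2
```

In 2010 the opposite-sign and vertex-χ² requirements were optional, which maps onto the `require_opposite_sign` flag and a large `max_chi2` here.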

B-Physics trigger efficiency
The efficiencies of the B-physics triggers have been measured from data using triggers in monitoring mode (Sect. 3). The efficiencies of the mu4_Jpsimumu trigger with respect to L1_MU0 and the 2mu4_Jpsimumu trigger with respect to L1_2MU0 are shown in Fig. 61(a) for events containing a J /ψ → μμ decay reconstructed offline with both muons having p T > 4 GeV. The efficiencies shown in Fig. 61(a) include the HLT muon trigger efficiencies and the efficiency of the subsequent J /ψ → μμ selection cuts. The efficiencies have been determined within a systematic uncertainty of less than 1%; statistical uncertainties are presented in the figures.
In order to show the efficiency of the J /ψ → μμ selection itself, independent of the muon trigger, Fig. 61(b) shows the efficiency of: the single RoI-seeded mu4_Jpsimumu trigger with respect to the mu4 trigger; the topological 2mu4_Jpsimumu trigger with respect to the 2mu4 trigger; and the topological 2mu4_Jpsimumu trigger with respect to the mu4 trigger. The mu4_Jpsimumu trigger has an efficiency of 85% with respect to mu4 including the efficiency to reconstruct the second muon at the HLT, which causes a reduction of efficiency for low p T J /ψ. The benefit of using single RoI triggers is shown by comparing the mu4_Jpsimumu trigger efficiency with the lower efficiency of 50% for the 2mu4_Jpsimumu trigger with respect to the mu4 trigger. The lower efficiency of the topological trigger results mainly from the requirement for a second L1 muon; the efficiency of the 2mu4_Jpsimumu trigger is 92% for events with a 2mu4 trigger.

Fig. 61 Efficiencies for J/ψ → μμ events selected offline as a function of the J/ψ p T for (a) the single RoI-seeded mu4_Jpsimumu trigger with respect to L1_MU0 and the topological 2mu4_Jpsimumu trigger with respect to L1_2MU0 and (b) the mu4_Jpsimumu trigger with respect to the mu4 trigger and the 2mu4_Jpsimumu with respect to the mu4 and 2mu4 triggers

Overall trigger performance
In this section the overall performance of the ATLAS trigger is presented. Overall trigger performance parameters include the total rates at each trigger level, the CPU processing time per event, and the load on CPU resources available at L2 and EF. To demonstrate these performance parameters, a run from period I was selected which took place during the last pp fill of 2010 and had instantaneous luminosities ranging from 0.85 × 10 32 cm −2 s −1 to 1.8 × 10 32 cm −2 s −1 . This run was 15 hours long and had an integrated luminosity of 6.4 pb −1 . The total L1, L2, and EF output rates are given in Fig. 62(a) as a function of instantaneous luminosity for the sample run from period I. By changing prescale factors as the luminosity fell, the trigger rates were kept stable throughout the run at ∼30 kHz (L1), ∼4 kHz (L2), and ∼450 Hz (EF). The prescale factor changes can be seen in the figures as discontinuities in the rate as a function of luminosity. Prescale factors at L2 and EF are changed at the same time, while L1 prescale factors are set independently. The output rates for each stream in the same run are given in Fig. 62(b). The relative fractions of each stream are tuned as a function of instantaneous luminosity in order to optimize the total rate and physics yield. ATLAS utilizes an inclusive streaming scheme, meaning that an event that fires a trigger in two different streams will be written twice, once in each stream, creating some overlap between different streams. The only pairs of streams that show a significant overlap (>1%) at L = 10 32 cm −2 s −1 are: Egamma-JetTauEtmiss 14%, Egamma-Muons 2%, and Muons-JetTauEtmiss 4%. At higher instantaneous luminosity, when the lower p T threshold items will have higher prescales, the Egamma-JetTauEtmiss overlap will decrease.

Fig. 64 (a) Mean time per event and (b) fraction of trigger system CPU usage for L2 and EF as function of luminosity in the sample run
The goal is to keep the total overlap between streams below 10%.

Timing
The timing performance of the individual algorithms has been discussed throughout the paper. Figure 63 shows the total processing time per event in the sample run for L2 and EF. Figure 64(a) presents the mean processing time per event at L2 and EF as a function of instantaneous luminosity; L2 is further subdivided into the mean time to retrieve data over the network from the Read-Out Buffers (ROB time) and the computational time taken by the algorithms (CPU time). The figure shows that L2 was running close to the design limit of ∼40 ms and EF was running at ∼400 ms, well below the design limit of ∼4 s. Figure 64(b), reporting the fraction of CPU used in the HLT farm, shows that the HLT farm was well within its CPU capacity. As was the case for the trigger rates, discontinuities in the CPU usage with luminosity are due to deliberate changes of prescale sets to control the trigger rate.

Outlook
The trigger menus for 2011 and 2012 running will cover instantaneous luminosities from ∼10 32 cm −2 s −1 to ∼5 × 10 33 cm −2 s −1 at √ s = 7 TeV with around 10-23 pp interactions per bunch crossing and a 50 ns bunch spacing. At these instantaneous luminosities the main triggers will select electrons and muons with p T above about 20 GeV, jets with p T above about 200 GeV, E miss T above about 50 GeV, as well as E miss T in combination with a tau or jet. The primary triggers are shown in Table 14 together with the L1 and HLT thresholds and predicted trigger rates for a luminosity of 10 33 cm −2 s −1 .
The table also shows the bandwidth allocation guidelines for each group of triggers. The primary triggers make up about two thirds of the output bandwidth. The remainder of the bandwidth is filled with supporting, commissioning, calibration, and monitoring triggers, of which the supporting triggers take the largest part. For example, prescaled jet and photon supporting triggers provide an approximately flat event yield as a function of pT, for use in measurements limited by systematic uncertainties. In addition, a smaller fraction of the bandwidth is allocated to commissioning triggers specifically intended for the further development of the trigger menu. The total number of triggers is reduced compared to the 2010 menus, as many items needed only for commissioning or at lower luminosities are removed.
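The flat-yield behaviour of the prescaled supporting triggers can be illustrated with a toy calculation: since the raw rate falls steeply with threshold, each threshold is assigned a prescale proportional to its raw rate, so that every item records events at roughly the same rate. A sketch with invented rates (not ATLAS values):

```python
import math

def flat_yield_prescales(raw_rates_hz, target_rate_hz):
    """Per-threshold prescale = raw rate / target recorded rate,
    rounded up and never below 1, so each item records ~target_rate_hz."""
    return {thr: max(1, math.ceil(r / target_rate_hz))
            for thr, r in raw_rates_hz.items()}

# Invented raw rates (Hz) for a set of jet thresholds (GeV),
# falling steeply with pT.
raw = {20: 100.0, 40: 12.5, 80: 1.5, 200: 0.02}
print(flat_yield_prescales(raw, target_rate_hz=0.5))
# → {20: 200, 40: 25, 80: 3, 200: 1}
```

The highest-threshold item ends up unprescaled, while the low thresholds are heavily prescaled, which is what produces the approximately flat recorded yield in pT.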
In contrast to the rapid evolution in 2010, the LHC conditions in 2011/12 will be increasingly stable, and changes in the trigger menu will be less frequent than in 2010. Daily changes will be limited to adjustments of prescales, mainly for monitoring and commissioning triggers. To improve the stability of the data recorded for physics analysis, changes to primary triggers and re-tuning of the menu are limited to monthly updates. The trigger will, however, continue to evolve to match LHC luminosity and beam conditions.

Table 14 The bandwidth allocation guidelines per trigger group for 2011, for a total rate of ∼200 Hz. For primary physics triggers, the L1 and HLT thresholds and predicted trigger rates are given for a luminosity of 10^33 cm^−2 s^−1

Conclusion
The ATLAS trigger system has been commissioned and has successfully delivered data for ATLAS physics analysis. Efficiencies, which meet the original design criteria, have been determined from data. These include overall trigger efficiencies of: greater than 99% for electrons and photons with ET > 25 GeV; 94–96% for muons with pT > 13 GeV, in the regions of full acceptance; greater than 90% for tau leptons with pT > 30 GeV; and greater than 99% for jets with ET > 60 GeV. The missing ET trigger was fully efficient above 100 GeV throughout the 2010 data-taking period. Quantities calculated online, using fast trigger algorithms, show excellent agreement with those reconstructed offline. Data and simulation agree well for these quantities and for the measured trigger efficiencies.
The trigger system has been demonstrated to function well, satisfying operational requirements and evolving to meet the demands of rapidly increasing LHC luminosity. Trigger menus will continue to evolve to fulfill future demands via progressive increase of prescales, tightening of selection cuts, application of isolation requirements, and increased use of multi-object and combined triggers. The excellent performance of the trigger system in 2010 and the results of studies confirming the scaling to higher luminosities give confidence that the ATLAS trigger system will continue to meet the challenges of running in 2011 and beyond.