LHCb potential to discover long-lived new physics particles with lifetimes above 100 ps

For years, it has been believed that the main LHC detectors can play the role of a lifetime frontier experiment, exploring the parameter space of long-lived particles (LLPs), hypothetical particles with tiny couplings to the Standard Model, only in a restricted way. This paper demonstrates that the LHCb experiment may become a powerful lifetime frontier experiment if it uses the new Downstream algorithm, which reconstructs tracks that do not leave hits in the LHCb vertex tracker. In particular, for many LLP scenarios, including heavy neutral leptons, dark scalars, dark photons, and axion-like particles, LHCb may be as sensitive as the experiments proposed beyond the main LHC detectors.


Introduction
The Standard Model (SM) of particle physics stands as a robust and well-established theory, providing a framework for understanding the fundamental particles and their interactions. Despite its impressive success over more than five decades, the SM falls short in explaining numerous observed phenomena across the realms of particle physics, astrophysics, and cosmology. One avenue of extending the SM involves the introduction of particles with masses below the electroweak scale that interact feebly with SM particles. These interactions are mediated by operators referred to as "portals" [1]. Accelerator experiments have already ruled out large coupling strengths for such particles, earning them the moniker "Feebly Interacting Particles". Small couplings mean long lifetimes, and therefore they are also referred to as long-lived particles (LLPs). The concept of LLPs has gained increasing prominence in the last decade, as evidenced by a growing body of literature (see [1][2][3] and related references), with numerous experimental efforts dedicated to their discovery.
Initially, the primary approach to investigating LLPs involved utilizing the LHC's main detectors, namely CMS, ATLAS, and LHCb. However, these ongoing searches at the LHC face notable limitations that hinder their efficacy in probing LLPs [4][5][6]. For instance, the inner trackers have relatively small dimensions, restricting the effective decay volume and, consequently, the probability of LLP decays occurring within it. Additionally, the proximity of these trackers to the production point results in substantial background contamination, necessitating stringent selection criteria that inevitably reduce the number of detectable LLP-related events. Another challenge arises from the limitations imposed by current triggering mechanisms, which require tagging of events at the LLP production vertex, often necessitating the presence of a high-p_T lepton, meson, or associated jets. This pre-selection process further curtails the event rate with LLPs and constrains the range of LLP models amenable to investigation. For instance, the main production mode for GeV-scale Heavy Neutral Leptons (N) involves the decay B → ℓ + N, where the momentum of the lepton ℓ is insufficient for triggering.
Recognizing these constraints, the scientific community has begun exploring alternative experiments beyond the confines of the LHC detectors [2], encompassing both collider-based setups situated near the LHC and beam dump experiments adopting a displaced decay volume concept. The latter experiments employ an extracted beam line aimed at a stationary target, offering greater flexibility in terms of geometric dimensions and circumventing the limitations imposed by the existing LHC detector searches.
Furthermore, in response to the challenges of detecting LLPs, various innovative ideas have emerged to enhance the capabilities of the ATLAS, CMS, and LHCb searches [7]. These proposals encompass track triggers that obviate the need for production vertex tagging and exploit displaced sections of the detector as an effective decay volume. For example, Ref. [8] explores the possibility of detecting decays occurring within the CMS muon chamber, albeit still requiring the presence of a high-p_T prompt lepton.
This paper presents a method to significantly augment the reach of the LHCb experiment for probing LLPs by harnessing novel algorithms developed under the new LHCb trigger software scheme [9]. In particular, the newly introduced Downstream algorithm [10] emerges as a pivotal tool for extending the search to LLPs with decay lifetimes significantly exceeding 100 ps.
The paper's structure is as follows: in Sec. 2, we delve into the LHCb experiment, the trigger system, and the novel Downstream algorithm. Sec. 3 outlines the expected signal signatures, encompassing production and decay modes specific to various models, while discussing the LHCb experiment's capacity to detect them. Sec. 4 scrutinizes anticipated background sources that could influence the search for Beyond the Standard Model (BSM) particles. Sec. 5.1 provides an estimate of the signal yield, including a breakdown of anticipated efficiencies, along with a qualitative comparison with other experimental proposals. Sec. 6 presents the sensitivities of the LHCb experiment incorporating the Downstream algorithm across various LLP scenarios. Finally, Sec. 7 concludes the paper.

The LHCb experiment
The LHCb forward spectrometer is one of the main detectors at the Large Hadron Collider (LHC) at CERN, with the primary purpose of searching for new physics through studies of CP violation and heavy-flavour hadron decays. It operated during Run 1 (2011-2012) and Run 2 (2015-2018) with very high performance, recording an integrated luminosity of 9 fb−1 at center-of-mass energies of 7, 8, and 13 TeV and delivering a plethora of accurate physics results and new particle discoveries.
The upgraded LHCb detector, operational at present during Run 3 of the LHC, has implied a major change in the experiment. The detectors have been almost completely renewed to allow running at an instantaneous luminosity five times larger than that of the previous running periods, in particular using new readout architectures. A full software trigger executed on Graphics Processing Units (GPUs) also represents one of the main features of the new LHCb design, allowing the reconstruction and selection of events in real time and widening the physics reach of the experiment. The main characteristics of the new LHCb detector are detailed in [11] and summarised in the following. Compared to the previous detector [12], one of the most important improvements concerns the new tracking system. LHCb comprises a tracking system of three subdetectors (the VErtex LOcator, the Upstream Tracker, and the SciFi tracker), a particle identification system based on two ring-imaging Cherenkov detectors, hadronic and electromagnetic calorimeters, and four muon chambers.
The VErtex LOcator (VELO) is based on pixelated silicon sensors and is critical for determining the decay vertices of b- and c-flavored hadrons. The Upstream Tracker (UT) contains vertically segmented silicon strips and continues the tracking downstream of the VELO, before the magnet. It is also used to determine the momentum of charged particles and to prevent low-momentum tracks from being extrapolated downstream, thus speeding up the software trigger by about a factor of three. Tracking after the magnet is handled by the new Scintillating Fibre tracker (SciFi). Two Ring Imaging Cherenkov (RICH) detectors supply particle identification: RICH1 is mainly for lower-momentum particles, and RICH2 for higher-momentum ones. The Electromagnetic Calorimeter (ECAL) identifies electrons and reconstructs photons and neutral pions. The Hadronic Calorimeter (HCAL) measures the energy deposits of hadrons, and four muon chambers (M2-M5) are mostly used for muon identification. The angular coverage of the LHCb detectors spans the pseudorapidity range 2 < η < 5. Figure 1 shows the LHCb upgrade detector.

Track types at LHCb
The tracking system of the LHCb experiment consists of three subsystems, the VELO, the UT, and the SciFi, which are responsible for reconstructing charged particles. A magnet with a bending power of 4 Tm is also necessary to curve particle trajectories in order to measure their momentum, p. Its polarity can be inverted, which is used to control systematic effects coming from detector inefficiencies.
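As a rough illustration of why the bending power determines the momentum measurement, one can use the standard relation between an integrated magnetic field and the transverse-momentum kick of a unit-charge particle, Δp_T [GeV/c] ≈ 0.3 · (∫B dl) [T m]. The sketch below is not LHCb code; the 24 mrad example bend is an illustrative number.

```python
# Illustrative sketch: relation between the magnet's integrated field and the
# momentum of a charged track. A unit-charge particle crossing an integrated
# field of B*L tesla-metres receives a transverse-momentum kick of about
#   dp_T [GeV/c] ~= 0.3 * (B*L) [T m],
# so the measured bending angle theta ~= dp_T / p yields a momentum estimate.

BENDING_POWER_TM = 4.0  # integrated field of the LHCb magnet, in T m

def pt_kick_gev(bending_power_tm=BENDING_POWER_TM):
    """Approximate transverse-momentum kick (GeV/c) for a unit-charge track."""
    return 0.3 * bending_power_tm

def momentum_from_bend(theta_bend_rad, bending_power_tm=BENDING_POWER_TM):
    """Estimate the track momentum (GeV/c) from its measured bending angle."""
    return 0.3 * bending_power_tm / theta_bend_rad

kick = pt_kick_gev()              # ~1.2 GeV/c kick for the 4 Tm magnet
p_example = momentum_from_bend(0.024)  # a 24 mrad bend corresponds to ~50 GeV/c
```

The stronger the bending power, the larger the bend at fixed momentum, and hence the better the relative momentum resolution.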
Several track types are defined depending on the subdetectors involved in the reconstruction, as shown in Fig. 2.
The main track types considered for physics analyses are:
• Long tracks: they have information from at least the VELO and the SciFi, and possibly the UT. These are the main tracks used in physics analyses and at all stages of the trigger.
• Downstream tracks: they have information from the UT and the SciFi, but not from the VELO. They typically correspond to decay products of K0S and Λ hadron decays.
• T tracks: they only have hits in the SciFi. They are typically not included in physics analyses. Nevertheless, their potential for physics has been recently outlined [14].
When simulating collision data, particle tracks meeting certain thresholds are defined to be reconstructible and are assigned a type according to the sub-detector reconstructibility. This is, in turn, based on the existence of reconstructed detector digits or clusters in the emulated detector, which are matched to simulated particles if the detector hits they originated from are properly linked [15]. Long tracks require VELO and SciFi reconstructibility, downstream tracks must satisfy the UT and SciFi reconstructibility, and T tracks only require the SciFi one.

The High-Level Trigger (HLT)
The trigger system of the LHCb detector in Run 3 and beyond is fully software-based for the first time. It comprises two levels, HLT1 and HLT2, described in detail in Refs. [9; 16]. Most notably, the HLT1 level has to be executed at a 30 MHz rate and, as such, suffers from heavy constraints on the timing of the event reconstruction.
The first trigger level, HLT1, performs a partial event reconstruction in order to reduce the data rate. Tracking algorithms play a key role in fast event decisions, and the fact that they are inherently parallelisable suggests a way to increase trigger performance. Thus, the HLT1 has been implemented on a number of GPUs using the Allen software project [17], which can manage a data rate of 4 TB/s and reduces it by a factor of 30. After this initial selection, data is passed to a buffer system, which allows nearly real-time calibration and alignment of the detector. These are used for the full and improved event reconstruction carried out by HLT2.
Due to timing constraints, the LHCb implementation of the HLT1 stage has been based on partial reconstruction and focuses solely on long tracks, i.e., tracks that have hits in the VELO. This trigger thus significantly affects the identification of particles with long lifetimes, particularly for LLP searches in LHCb, where some of the final-state particles are created further than roughly a metre away from the IP and thus outside of the VELO acceptance. A new algorithm [18; 19] has been developed and implemented to widen the reach in particle lifetimes of the HLT1 system. It is briefly described in the following.

The new Downstream algorithm
A fast and performant algorithm has been developed to reconstruct tracks that leave no hits in the VELO detector [18]. It is based on the extrapolation of SciFi seeds (or tracklets) to the UT detector, including the effect of the magnetic field on the x coordinate. Hits in UT search windows that are compatible with tracks coming from the SciFi, and that are not used by other reconstruction algorithms, are considered. In addition, fake tracks originating from spurious hits in the detector are suppressed by a neural network with a single hidden layer. The reconstruction efficiency of the algorithm for downstream tracks is about 70%, with ghost rates (random combinations of hits) below 20%. This has been verified for SM particles (Λ and K0S) and for LLPs in the hidden sector, in the mass range 0.25 GeV/c²-4.7 GeV/c², decaying into muons or two hadrons. The track momentum resolution at this stage is less than 6% [19], and the algorithm has a high throughput that fulfills the tight HLT1 time requirements.
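The structure of such a single-hidden-layer ghost classifier can be sketched as follows. This is purely illustrative: the feature set, layer width, and (random, untrained) weights are placeholders, not the trained network used in the Downstream algorithm.

```python
import numpy as np

# Minimal sketch of a single-hidden-layer classifier of the kind used to
# suppress ghost (fake) downstream tracks. Features, sizes, and weights are
# illustrative placeholders, not the trained LHCb model.

rng = np.random.default_rng(0)
N_FEATURES = 6   # e.g. UT-SciFi matching quality, window residuals, track chi2
N_HIDDEN = 16

W1 = rng.normal(scale=0.5, size=(N_FEATURES, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.5, size=(N_HIDDEN, 1))
b2 = np.zeros(1)

def ghost_probability(features):
    """Forward pass: (n, N_FEATURES) array -> ghost probability in (0, 1)."""
    hidden = np.tanh(features @ W1 + b1)   # single hidden layer
    logits = hidden @ W2 + b2
    return (1.0 / (1.0 + np.exp(-logits))).ravel()  # sigmoid output

candidates = rng.normal(size=(5, N_FEATURES))      # toy track candidates
probs = ghost_probability(candidates)
kept = candidates[probs < 0.5]  # keep tracks below an illustrative threshold
```

A single hidden layer keeps the evaluation cheap enough for the HLT1 timing budget while still allowing a non-linear decision boundary.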

Benchmark LLP models
Many models with LLPs exist. In this paper, some of the benchmark models recommended by the Physics Beyond Colliders (PBC) working group [2] will be considered, labelled BCX:

1. Dark photons V (BC1), which have kinetic mixing with the U(1)_Y SM hypercharge field. Below the EW scale, the coupling is given by the kinetic mixing parameter ϵ. The dark photon phenomenology (how it is produced in proton-proton collisions and its decay modes) is taken from Refs. [20; 21].

2. Higgs-like dark scalars S (BC4, BC5). Below the EW scale Λ_EW, the couplings are parametrised by the S-Higgs mixing angle θ ≪ 1 and the coupling α of the hSS operator. For BC4, α = 0, while for BC5, it is fixed such that Br(h → SS) = 0.01. The scalar phenomenology is taken from [22]. It is worth mentioning the difference between this description and the one used in sensitivity studies of many past experiments [2; 3]. The latter considered the so-called inclusive description of the production of dark scalars from B mesons, in which the branching ratio is approximated by the process b → s + S. It breaks down for large scalar masses m_S ≳ 2-3 GeV (as QCD enters the non-perturbative regime, and also because of wrong scalar kinematics) and hence is inapplicable there. Ref. [22] instead considers the exclusive description, in which the branching ratio is the sum over various decay channels B → meson + S.

3. Heavy Neutral Leptons N coupled to the active neutrino ν_α: ν_e (BC6), ν_µ (BC7), or ν_τ (BC8). Below the EW scale, HNLs couple to the SM via mass mixing with the active neutrinos, parametrised by the HNL-neutrino mixing angle U_α. The phenomenology description is taken from [23], with minor changes concerning the transition of the description of the semileptonic decay widths of HNLs from the exclusive description (when the total width sums up from widths into particular meson states) to the inclusive approach (when the total width is approximated by the decay into quarks).

4. Axion-like particles (ALPs) (BC10, BC11). If defined at some scale Λ_ALP > Λ_EW, ALPs may couple to various pseudoscalar SM operators, including the Chern-Simons density of the gauge fields or the axial-vector currents of the matter; the RG evolution down to the ALP mass scale also induces other operators. For BC10, at Λ_ALP, ALPs universally couple to the fermion axial-vector current, while for BC11, they couple to the gluon Chern-Simons density. The description of the production and decay modes of these ALPs is taken from [24]. Thus, the phenomenology for BC10 significantly differs from the previously adopted description of ALP production and decay modes [2], where many production channels and hadronic decay modes were not taken into account. The description of decays for BC11 somewhat differs from the other study [25], which results in a larger decay width (for the given ALP mass and coupling) and hence a smaller lifetime (see the discussion in Ref. [24]).

5. The B−L mediator, which couples to the anomaly-free combination of the baryon and lepton currents. The coupling is given in terms of the structure constant α_B. Its production and decay channels are the same as for dark photons, up to the fact that the coupling is universal and there is no mixing with ρ0 mesons [21].
Ref. [26] summarizes the main LLP production and decay modes that are relevant for high-energy experiments. Mostly, the LLPs are produced directly in proton-proton collisions, in decays of various SM particles, or via mixing with light neutral mesons. Therefore, most of these processes are relevant for LHCb. For convenience, they are listed in Table 1.

Table 1. Summary of the production and decay modes of the LLPs considered in this paper. Here, X denotes any SM state.

Event selection
A potential event with LLPs is defined by the presence of a reconstructed decay vertex located between the end of the VELO (z ≈ 1 m) and the beginning of the UT tracker (z_UT ≈ 2.5 m), in the pseudorapidity range 2 < η < 5.
The vertex is reconstructed with the help of at least two tracks from decay products passing through both the UT and SciFi trackers. For the present study, only charged particles are considered detectable. Therefore, decays into solely neutral particles, such as π0 (→ 2γ), γ, and K0L, are treated as invisible.
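The fiducial requirements above can be expressed as a simple acceptance check. The z boundaries and η range are taken from the text; the helper itself is an illustrative sketch, not the LHCb event model.

```python
import math

# Toy fiducial check reflecting the selection described above (lengths in m).
# z boundaries and the eta range are from the text; the rest is illustrative.

Z_VELO_END = 1.0     # end of the VELO
Z_UT = 2.5           # beginning of the UT tracker
ETA_MIN, ETA_MAX = 2.0, 5.0

def pseudorapidity(theta):
    """eta = -ln(tan(theta/2)) for polar angle theta in radians."""
    return -math.log(math.tan(theta / 2.0))

def in_fiducial_volume(z_vertex, theta_vertex):
    """Is a decay vertex at (z, polar angle theta) inside the decay volume?"""
    eta = pseudorapidity(theta_vertex)
    return Z_VELO_END < z_vertex < Z_UT and ETA_MIN < eta < ETA_MAX

def is_candidate(z_vertex, theta_vertex, n_charged):
    """At least two charged decay products are needed to form the vertex."""
    return in_fiducial_volume(z_vertex, theta_vertex) and n_charged >= 2

ok = is_candidate(1.8, 0.05, n_charged=2)        # eta(0.05) ~ 3.7 -> accepted
too_far = is_candidate(3.0, 0.05, n_charged=2)   # beyond the UT -> rejected
```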
As indicated in Table 1, while the majority of LLP decay modes are exclusive two-body decays, LLPs may often decay into three or more particles. This is especially relevant for LLPs with m ≳ 1 GeV, which decay into quarks or gluons and hence produce a cascade of hadrons resulting from showering and hadronization. Fig. 3 illustrates the average multiplicity of metastable particles (those having decay lengths cτ·p/m well exceeding the dimensions of LHCb) for selected models.
This feature necessitates a consistent approach to LLP reconstruction. Reconstructing the many-particle vertex with as few tracks as possible clearly maximizes the yield of reconstructed events, since each track is reconstructed with finite efficiency, resulting from the non-ideal performance of the detector, which introduces a finite detection efficiency and a finite kinematics measurement resolution. However, by reconstructing more particles from the vertex and using PID criteria, one may reveal the properties of the LLP and hence discern different LLP scenarios (see, e.g., [27]).

Fig. 3. The average number of metastable decay products per LLP decay that may be detected (π±, K±, K0L, γ, e±, µ±) as a function of the LLP mass, for the models of HNLs coupled to the electron neutrino, Higgs-like scalars with the mixing coupling, and ALPs coupled to gluons [26]. The dashed lines assume that only charged decay products are detectable, while the solid lines also include the neutral decay products γ and K0L. For each case, the summation over all decay channels is performed, which may lead, in particular, to dashed lines with n per decay < 2 if there are modes with decays into neutral particles only. Jumps in the behavior of the lines are caused by the kinematic opening of new decay channels.
For the present study, the main interest is in estimating the region of the LLP parameter space where the Downstream algorithm may see any signal. In this sense, it is enough to have two reconstructed tracks. The event reconstruction efficiency is then approximated by the squared reconstruction efficiency of a single track times the vertex reconstruction efficiency. The opportunities of reconstruction using many tracks will be studied in the future.
The event reconstruction performance of the Downstream algorithm is a subject of ongoing investigation. It includes, e.g., the momentum dependence of the track reconstruction efficiency and the two-downstream-track vertex resolution. For the reference selection in this paper, the particles are required to have energy E > 5 GeV and transverse momentum p_T > 0.5 GeV, and an overall event reconstruction efficiency of ϵ_rec = 0.4 is considered.
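The factorization described above, two tracks plus a vertex, can be sketched numerically. The per-track efficiency of ~70% is quoted in Sec. 2.3; the vertexing number below is an illustrative placeholder chosen so that the product reproduces the quoted ϵ_rec = 0.4.

```python
# Sketch of the two-track event reconstruction efficiency used in the text:
#   eps_event ~ (single-track efficiency)^2 * (vertex efficiency).
# The vertex efficiency is an illustrative placeholder, tuned so that the
# overall value matches the quoted eps_rec = 0.4.

E_MIN_GEV = 5.0    # minimum energy per decay product
PT_MIN_GEV = 0.5   # minimum transverse momentum per decay product

def passes_cuts(energy_gev, pt_gev):
    """Reference kinematic selection applied to each decay product."""
    return energy_gev > E_MIN_GEV and pt_gev > PT_MIN_GEV

def event_efficiency(eps_track, eps_vertex):
    """Two reconstructed downstream tracks plus a reconstructed vertex."""
    return eps_track**2 * eps_vertex

# ~70% per-track efficiency (Sec. 2.3) combined with ~82% vertexing gives
eps_rec = event_efficiency(0.70, 0.816)   # ~0.4, the value used in this paper
```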
Potential ways to enhance the event yield, to be studied in the future, are worth mentioning. First, the yield may be significantly improved by extending the z range of the reconstructed vertex to the beginning of the first SciFi layer, which is located at z ≈ 7.7 m. The vertices from z > z_UT would then be reconstructed with the help of the SciFi tracker only (i.e., using solely T tracks). Second, a sizable fraction of LLP decays may be into neutral particles such as γ and K0L. Some particles, such as light ALPs coupled to gluons and ALPs coupled to photons, decay solely into photons. Therefore, adding the option of reconstructing events using the calorimeters would be essential for these LLPs.

Case study: dark scalars
Of particular interest is the dark scalar model denoted as BC4. These scalars can be generated through processes such as B → S + X_s/d, where X_q denotes a hadronic state containing the quark q. For m_S ≪ m_B and in the limit θ² ≪ 1, the collective branching ratio for these processes is of the order of 3.3·θ², and the production threshold is approximately m_B − m_π ≈ 5.13 GeV/c² [22]. Fig. 4 illustrates the scalar's decay probabilities as a function of its mass, normalized to unity.
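The quoted branching ratio gives a quick back-of-the-envelope production estimate. In the sketch below, the number of B mesons is an arbitrary illustrative input, not an LHCb yield; only the Br ≈ 3.3·θ² scaling and the kinematic threshold come from the text.

```python
# Back-of-the-envelope dark-scalar production from B decays (BC4), using the
# collective branching ratio Br(B -> S + X) ~ 3.3 * theta^2 quoted above
# (valid for m_S << m_B and theta^2 << 1). N_B below is illustrative only.

M_B = 5.279           # B meson mass, GeV/c^2
M_PI = 0.140          # pion mass, GeV/c^2
M_S_MAX = M_B - M_PI  # kinematic production threshold, ~5.14 GeV/c^2

def br_b_to_scalar(theta):
    """Collective Br(B -> S + X_{s/d}) in the small-mixing limit."""
    return 3.3 * theta**2

def n_scalars_produced(n_b_mesons, theta):
    return n_b_mesons * br_b_to_scalar(theta)

n_s = n_scalars_produced(1e13, theta=1e-4)  # ~3.3e5 scalars for these inputs
```

The quartic sensitivity of the final event yield to θ (production ∝ θ², decay rate ∝ θ²) is what shapes the lower boundary of the probed parameter space.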
Decays involving two muons or two electrons are particularly pertinent for particles with masses below 1 GeV/c², while the ππ/KK channels dominate within the 0.27-2 GeV/c² mass range. From a mass threshold of 2 GeV/c² onward, there is a proliferation in track multiplicity, coinciding with the opening of various channels such as gluon-gluon (gg), ss, cc, and τ+τ−. These channels assume particular importance due to the expectation of three or more downstream tracks originating from a common vertex. In the case of the cc decay channel, two D mesons and many pions are produced as a result of showering and hadronization. The D mesons decay afterwards, and formally the event would consist of a bunch of soft hadrons from the LLP decay vertex and two displaced hadron showers from the D decays. However, the magnitude of the displacement, proportional to the decay length of the D mesons, is well below the vertex resolution, so all the tracks should converge to the same origin.

Background sources
Background events that could mimic the BSM signal at LHCb are expected to arise from different sources [28].
They are listed below. Some of these background events can be studied with simulations [29], and other sources will be studied when Run 3 data is available. The main contributions are considered to come from:
• Hadronic resonances: decays of light and heavy qq resonances into a pair of hadrons (h+h−) or leptons (ℓ+ℓ−) are highly suppressed, since they decay promptly and, from simulation studies, no tracks are expected to be reconstructible as downstream tracks, whether they come from the interaction point or from decays of b and c hadrons. Light resonances can also be produced by particle interactions with the beam pipe or detector material, decaying into muons or pions. This background can be suppressed by using control samples from data and vetoing specific regions of the detector.
• Strange candidates: SM particles with long lifetimes (notably K0S and Λ) can also be mistaken for signal events. This could happen when the LLP is reconstructed in hadronic h+h− modes or, for leptonic modes, if the hadrons from the K0S, or the proton and pion from the Λ, are misidentified as muons. This type of background can be rejected by imposing tighter particle identification (PID) criteria and by vetoing pairs of particles that, after being assigned the proton or pion mass hypothesis, lie in the invariant mass region of K0S and Λ candidates.
• Combinatorial background: random pairs of hadrons or leptons, associated or not with other particles from B-meson decays, could be wrongly attributed to LLP candidates. MC simulations show that the amount of combinatorial background drastically decreases with the LLP mass, being negligible for masses larger than 2 GeV. This is expected, since high-momentum tracks come from decays of b and charm hadrons.
Information on the two-track and B-meson candidates can be used in a multivariate analysis, in particular making use of a boosted decision tree (BDT) or a neural network (NN), which are very suitable for reducing this source of background. The vertex quality, impact parameter, transverse momenta, or track isolation criteria are examples of variables that are expected to be highly discriminating in this type of analysis.
A NN classifier can be used to suppress the background events, with a threshold that can be varied according to the desired performance. Using simulated events [29], a background rejection rate larger than 99% and a signal efficiency of 87% can be obtained, assuming two-body decays for the latter. In this test, the NN is trained using dedicated signal samples with BSM candidates, in particular dark scalars with masses ranging between 400 MeV and 4500 MeV. Background events are obtained from minimum bias simulations, i.e., collisions recorded without any specific selection criteria applied. Input variables are track properties of the reconstructed pairs (impact parameter, momentum, and transverse momentum), the vertex quality and position, and the impact parameter, quality, and momentum of the reconstructed parent particle. Note that decays of K0S into two leptons are highly suppressed in the SM, with branching fractions of order 10−12.
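The impact of the quoted working point on a counting experiment can be illustrated with a simple significance estimate. The pre-selection yields below are arbitrary illustrative inputs, not simulation results; only the 99% rejection and 87% efficiency come from the text.

```python
import math

# Illustration of what the quoted working point (99% background rejection,
# 87% signal efficiency) does to a counting experiment. The pre-selection
# yields are illustrative inputs, not simulation results.

def after_selection(n_signal, n_background, eps_signal=0.87, bkg_rejection=0.99):
    """Apply the classifier working point to pre-selection yields."""
    return n_signal * eps_signal, n_background * (1.0 - bkg_rejection)

def significance(n_signal, n_background):
    """Simple S / sqrt(S + B) figure of merit."""
    return n_signal / math.sqrt(n_signal + n_background)

s0, b0 = 20.0, 10000.0            # hypothetical pre-selection yields
s1, b1 = after_selection(s0, b0)  # -> 17.4 signal, 100 background events
gain = significance(s1, b1) / significance(s0, b0)  # roughly a factor of 8
```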
This background reduction is expected since, at large lifetimes, most of the background comes from material interactions, which have a very different topology and kinematics than the signal. The rejection rate could be even higher if the LLP decays into multiple particles.
Secondary interactions of hadrons produced in beam-gas collisions can be used to map the location of detector material, as is done in Ref. [30]. With this procedure, the background can be reduced to a negligible level.

LLP event yield and qualitative comparison with other proposals

Signal yield
The LLP exploration power of the Downstream algorithm is estimated in the following.
To calculate the number of events with LLPs, it is necessary to know their production channels, the fraction of LLPs flying in the direction of the detector, the decay probability, and the fraction of the decay events that may be reconstructed. The semi-analytic approach described in [26; 31] is used, which may be as accurate as a pure Monte Carlo evaluation while offering transparency and speed of calculation. The number of events is calculated as

N_events = L · Σ_i σ^(i)_pp→LLP ∫ dE dθ dz f^(i)(θ, E) · ϵ_az(θ, z) · (dP_dec/dz) · ϵ_det(m, θ, E, z) · ϵ_rec · ϵ_S/B.  (1)

The quantities entering Eq. (1) are the following:
- L is the total integrated luminosity corresponding to the operating time of the experiment.
- σ^(i)_pp→LLP is the LLP production cross section in proton-proton collisions, accounting for the probability that a specific process i takes place, e.g., decays of mesons, direct production in proton-target collisions, etc.
- z, θ, and E are, respectively, the position along the beam axis, the polar angle, and the energy of the LLP.
- f^(i)(θ, E) is the differential distribution in polar angle and energy of LLPs produced in the process i.
- ϵ_az(θ, z) is the azimuthal acceptance,

ϵ_az(θ, z) = Δϕ(θ, z),  (2)

where Δϕ is the fraction of azimuthal coverage for which LLPs decaying at (z, θ) are inside the decay volume. For the specified setup, ϵ_az = h(2 < η(θ) < 5), where h is the step function.
- dP_dec/dz is the differential decay probability,

dP_dec/dz = exp(−r/l_dec) / (l_dec · cos(θ)),  (3)

with r = z/cos(θ) being the modulus of the displacement of the LLP decay position from its production point, and l_dec = cτ·sqrt(γ² − 1) the LLP decay length in the lab frame (with τ being the lifetime as a function of the LLP mass and its coupling g to the SM particles).
- ϵ_det(m, θ, E, z) is the decay products acceptance, i.e., among those LLPs that are within the azimuthal acceptance, the fraction of LLPs that have at least two decay products that point to the detector and that may be reconstructed. Schematically,

ϵ_det = Σ_j Br^(j)_vis · ϵ^(geom,j)_det · ϵ^(other cuts,j)_det,  (4)

where j counts over the detectable LLP decay final states (with the branching ratio denoted as Br_vis). Depending on the presence of a calorimeter (EM and/or hadronic), these may encompass only the states featuring at least two charged particles, or also (if the calorimeters are present) the states with at least two neutral particles. For the Downstream algorithm, only charged decay products are considered visible; this way, the acceptance estimates are conservative. Generically, the reconstructed decay may also include some neutral states such as photons and K0L. ϵ^(geom)_det denotes the fraction of visible decay products that point to the end of the detector (which is the SciFi in the case of the Downstream setup), and ϵ^(other cuts)_det is the fraction of these decay products that additionally satisfy the remaining selection criteria (e.g., the minimum energy requirement, etc.).
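The decay-probability and acceptance factors can be evaluated with a toy numerical integration. The sketch below samples LLPs from assumed flat (θ, E) spectra and weights them by the η acceptance and the decay probability over 1 m < z < 2.5 m; the mass, lifetime, and spectra are illustrative placeholders, not the SensCalc implementation.

```python
import math
import random

# Toy evaluation of the acceptance * decay-probability part of the yield
# integral. All physics inputs (mass, lifetime, spectra) are placeholders.

random.seed(1)
Z_MIN, Z_MAX = 1.0, 2.5   # effective decay volume along the beam axis, in m
M_LLP = 1.0               # GeV/c^2 (illustrative)
C_TAU = 50.0              # proper decay length c*tau, in m (illustrative)

def decay_probability(theta, energy):
    """P(decay with Z_MIN < z < Z_MAX) for one LLP, integrating Eq. (3)."""
    gamma = energy / M_LLP
    l_dec = C_TAU * math.sqrt(gamma**2 - 1.0)       # lab-frame decay length
    r_min, r_max = Z_MIN / math.cos(theta), Z_MAX / math.cos(theta)
    return math.exp(-r_min / l_dec) - math.exp(-r_max / l_dec)

def eta(theta):
    return -math.log(math.tan(theta / 2.0))

def mean_acceptance(n_samples=20000):
    """Average of eps_az * P_dec over a toy LLP flux."""
    total = 0.0
    for _ in range(n_samples):
        theta = random.uniform(0.005, 0.3)     # toy angular spectrum
        energy = random.uniform(10.0, 200.0)   # toy energy spectrum, GeV
        if 2.0 < eta(theta) < 5.0:             # step-function acceptance
            total += decay_probability(theta, energy)
    return total / n_samples

acc = mean_acceptance()
```

Multiplying this mean acceptance by L·σ_pp→LLP and the reconstruction and selection efficiencies reproduces the structure of Eq. (1).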
- ϵ_rec ≈ 0.4 is the reconstruction efficiency, i.e., the fraction of the events passing the azimuthal and decay acceptance criteria that the detector can successfully reconstruct (see Sec. 3.2).
- Finally, ϵ_S/B is the signal-preserving efficiency, resulting from the background rejection, for the events that have been reconstructed. This efficiency is assumed to be 87% on average.
To calculate the number of events using Eq. (1), the Downstream setup has been incorporated in the SensCalc code [26]. A detailed discussion of the implementation and its validation by comparison with the LHCb simulation framework can be found in Appendix A.1. The parameters of the setup used for the implementation are described in Table 2.
Here and below, it is assumed that the search will be performed in the regime when the background is negligible, resulting from a high performance of the signal selection criteria using neural network techniques.

Comparison with LHC-based experiments
To understand the LLP exploration abilities of the new Downstream algorithm, it is necessary to compare the LLP event yields at LHCb with those of LHC-based experiments. As reference cases for the latter, the FASER and FASER2 experiments [32; 33] are considered. FASER, a Forward Search Experiment at the LHC designed to study neutrinos and search for weakly interacting light new particles, is a currently running experiment located 480 m downstream of the ATLAS interaction point, in the far-forward direction. FASER2 is a possible upgrade of FASER with an increased geometric size. It may either be located at the same place as FASER or at the Forward Physics Facility [34]; the first setup is considered here. Apart from the fact that FASER is already running, this choice is motivated by the fact that FASER and FASER2 have the same capabilities in reconstructing the LLP kinematics (such as measuring the invariant mass and identifying the decay products) as LHCb. The operating time of FASER is LHC Run 3, while for FASER2 it is the HL-LHC.
The list of the relevant parameters of the considered experiments is given in Table 2. For the LHCb experiment with the new Downstream algorithm, partial statistics of Run 3, L = 25 fb−1, are considered when comparing with FASER, while the full statistics until Run 6, L = 300 fb−1, are assumed for LHCb when comparing with FASER2. A conservative configuration of the LHCb setup is considered, with the effective decay volume extending from z = 1 m (the end of the VELO) to the UT layers. For the Downstream algorithm, the charged decay products are required to have E > 5 GeV. For FASER and FASER2, the setups implemented in SensCalc are used, without any selection criteria other than the requirement that the decay products pass through the detector.
Considering the limit cτ⟨γ⟩ ≫ ΔX_exp, where ΔX_exp is the geometric size of the whole experiment from the production point to the end of the detector, the differential decay probability (3) reduces to dP_dec/dz ≈ 1/(l_dec cos(θ)). The expression (1) then becomes

N_events ≈ Σ_i N^(i)_prod · ϵ^(i), with N^(i)_prod = L · σ^(i)_pp→LLP,  (5)

where ϵ^(i) is the total acceptance for the given production channel:

ϵ^(i) = ∫ dE dθ dz f^(i)(θ, E) · ϵ_az(θ, z) · (1/(l_dec cos(θ))) · ϵ_det · ϵ_rec · ϵ_S/B.  (6)

This quantity may be decomposed as

ϵ^(i) ≈ ⟨ϵ_LLP⟩ · (Δz/cτ)·⟨(γ² − 1)^(−1/2)⟩ · ⟨ϵ_det⟩ · ϵ_rec · ϵ_S/B,  (7)

where ⟨ϵ_LLP⟩ is the fraction of LLPs that intersect the decay volume, Δz = 1.5 m is the longitudinal length of the effective decay volume, (Δz/cτ)·⟨(γ² − 1)^(−1/2)⟩ is the mean decay probability for the LLPs intersecting the decay volume, and ⟨ϵ_det⟩ is the mean decay products acceptance for the LLPs decaying inside. Eq. (5) is very convenient for the comparison since the dependence on the LLP lifetime factorizes out. In particular, given the coupling g of the LLP to the SM, the minimal possible value of g that may be probed is given by

g_min ≈ (N_min / [Σ_i N^(i)_prod · ϵ^(i)]_{g=1})^(1/4),  (8)

where N_min is the minimal number of events required for the probe and the quantity in brackets is evaluated at the reference coupling g = 1; this follows from the scaling N^(i)_prod, τ^(−1) ∝ g² in Eq. (5). To understand the impact of the different luminosities, angular coverages, and decay volume lengths, the setting ϵ = 1 is first applied in Eq. (6), and then the various factors entering the integrand are sequentially included. The quantities that are compared are:
- I_0: the total number of LLPs produced during the runtime of the experiment (ϵ = 1);
- I_1: the fraction of LLPs pointing to the decay volume (only f^(i) ϵ_az/Δz is included in Eq. (6));
- I_2: the fraction of the LLPs decaying inside (all the factors except for ϵ_det · ϵ_rec · ϵ_S/B are included);
- I_3: the fraction of the decay events which pass the reconstruction (all the factors are included).
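The quartic scaling behind the lower boundary can be checked numerically. In the long-lifetime regime, the yield scales as N ∝ g⁴ (g² from production, g² from the decay probability via 1/τ), so the smallest probed coupling is a fourth root. The normalization k and the required event count below are toy inputs, not values from this analysis.

```python
# Numerical illustration of the lower-boundary scaling: in the regime
# c*tau*<gamma> >> experiment size, N_events = k * g^4 (g^2 from production
# and g^2 from the decay probability via 1/tau). Inputs are toy numbers.

def n_events(g, k=1.0e14):
    """Toy long-lifetime yield: N = k * g^4."""
    return k * g**4

def g_min(n_required=3.0, k=1.0e14):
    """Coupling at which the toy yield drops to n_required events."""
    return (n_required / k) ** 0.25

g_lower = g_min()                 # ~4.2e-4 for these toy numbers
check = n_events(g_lower)         # ~3 events by construction
# A 10x larger exposure (k -> 10k) only improves the reach by 10^(1/4) ~ 1.78:
lumi_gain = g_min(k=1.0e14) / g_min(k=1.0e15)
```

This fourth-root behaviour explains why even order-of-magnitude differences in luminosity translate into modest shifts of the lower sensitivity boundary.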
In Fig. 5, the terms of the expression for the number of events (5) are compared for the models of dark scalars and of heavy neutral leptons coupled to the electron flavor. These particles are produced in decays of B and, for HNLs, D mesons, while their visible decays are leptonic, hadronic (for scalars), or semileptonic (for HNLs) [22; 23]. Because of the similar proton collision energy, the only difference in I0 comes from the different integrated luminosities accumulated during the runtime of the experiments; the ratio is constant, I0,Downstr/I0,FASER/FASER2 ≈ 0.08 (0.1) for the two luminosity values considered, and it is not shown in the plot. The I0 values are thus larger for FASER and especially for FASER2. However, the smaller angular coverage of the latter experiments means that a much smaller fraction of the produced particles fly toward the decay volume (I1). The decay probability approximately scales as ∆z · ⟨p−1⟩. Overall, it is much smaller at the FASER and FASER2 experiments: the LLPs flying in the far-forward direction have mean momenta of O(1 TeV/c), while LLPs within the angular coverage of LHCb typically have p ∼ 50−100 GeV/c. Including the decay-products acceptance ϵ does not lead to a qualitative change in the ratio of the number of events, especially since most of the decay modes contain at least two charged particles. For modes with only uncharged particles, it is conservatively assumed that they cannot be reconstructed with the Downstream algorithm, while FASER/FASER2 are equipped with a calorimeter and hence may reconstruct such modes.
Moreover, allowing the LLPs to decay between the UT and the SciFi layers, with the reconstruction of faraway tracks (T-tracks), will increase the decay probability even further.
It is also useful to compare the sensitivity to "short-lived" LLPs, i.e., those for which the typical decay length is similar to the distance to the decay volume z_min: cτ⟨p⟩/m ≲ z_min. In this case, the scaling of the number of events with g is mainly driven by the exponentially suppressed decay probability P_dec ≈ exp[−z_min m/(cτ p)]. The maximal value of the probed g may be roughly estimated as g_upper ∝ ⟨p⟩/z_min [31]. Taking into account that the LLPs at FASER/FASER2 and at LHCb have momenta of the order of 1 TeV/c and 100 GeV/c, respectively, and using z_min from Table 2, the ratio g_Downstream_upper / g_FASER/FASER2_upper ∼ 50 (9) is obtained.
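The estimate (9) can be checked numerically with the scaling g_upper ∝ ⟨p⟩/z_min. The inputs below are assumptions for illustration: ⟨p⟩ ≈ 100 GeV/c and z_min = 1 m for LHCb-Downstream, and ⟨p⟩ ≈ 1 TeV/c with z_min ≈ 480 m for FASER (its approximate distance from the interaction point):

```python
# Rough numerical check of the scaling g_upper ∝ ⟨p⟩/z_min behind estimate (9).
p_downstream, zmin_downstream = 100.0, 1.0   # GeV/c, m (assumed)
p_faser, zmin_faser = 1000.0, 480.0          # GeV/c, m (assumed)

ratio = (p_downstream / zmin_downstream) / (p_faser / zmin_faser)
print(ratio)  # 48.0, consistent with the quoted ratio of ~ 50
```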
To summarize, for extremely long-lived particles, the LHCb experiment with the new Downstream algorithm would perform much better than FASER and comparably to FASER2. In the part of parameter space where LLPs are short-lived, such that they mostly decay before reaching a distant decay volume, the algorithm would deliver a better sensitivity because of the much smaller distance to the decay volume.

Sensitivity to LLPs
To estimate the sensitivity, N_events > 2.3 is required, which corresponds to the 90% CL limit assuming that the background is negligible [35; 36] (see Sec. 4). Two values of the integrated luminosity are considered: L = 25 fb−1, corresponding to the partial statistics accumulated during Run 3 with the Downstream algorithm available, and L = 300 fb−1, corresponding to the full HL-LHC phase.
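The criterion N_events > 2.3 is the standard background-free Poisson limit: with zero observed events, a signal expectation s is excluded at 90% CL when the probability of observing zero events drops below 10%, i.e., exp(−s) < 0.1. A one-line check:

```python
import math

# Solve exp(-s) = 0.1 for the 90% CL upper limit with zero observed events
# and negligible background.
s_90 = math.log(10.0)
print(round(s_90, 2))  # 2.3
```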
The sensitivities to the benchmark models described in Sec. 3 are shown in Figs. 6-9. For comparison, the figures show the sensitivities of the FASER and FASER2 experiments from [3], as well as various LHCb searches from [37; 38].

Fig. 5. The ratio of the quantities Ii (see the text for the definition) for the events at LHCb-Downstream and FASER (solid lines) or FASER2 (dashed lines), for the models of heavy neutral leptons mixing with νe (the left panel) and of a dark scalar mixing with the Higgs boson (the right panel). The ratios have been computed using SensCalc [26].
The considered LLPs have very different phenomenology, which determines the different status of the exclusion of their parameter space by past experiments. For some of them, the unconstrained parameter space includes only the domain of large lifetimes, cτ ≫ 1 m. For others, lifetimes cτ ≲ 1 m also remain unexplored. This is due, on the one hand, to the limitations of past prompt searches in luminosity and efficiency, which leave small couplings unconstrained, and, on the other hand, to the parametric smallness of the lifetime, which prevented past beam-dump experiments with a far placement of the decay volume, e.g., CHARM, from searching for such LLPs. One of the strengths of the Downstream setup is that it may search for LLPs in both these regimes.
For dark photons and B − L mediators (Fig. 6), the second scenario is realized. In particular, in the mass range mV ≲ 0.6 GeV/c², there is an underexplored parameter space of short lifetimes cτ ≲ 1 m. This mass range may be complementarily probed by various searches at LHCb, including the Downstream setup and the searches for resonances in the di-electron and di-muon invariant mass restricted to VELO [37]. Depending on the luminosity, the Downstream setup may be able to search for masses mV ≲ 1 GeV/c². The upper bound of the sensitivities of FASER and FASER2 lies well below the sensitivity of Downstream, in good agreement with the estimate (9). The disconnected sensitivity regions in Fig. 6 appear due to the interplay between the behaviors of the LLP production rate and its lifetime. For these mediators, cτ · g² is parametrically very small, which requires a decrease in g² for the LLPs to reach the decay volume before decaying. On the other hand, this leads to a decrease in the production cross-section σ_pp→LLP ∝ g². Parametrically, the ratio σ_pp→LLP/g² is too small in the mass range 0.5 GeV/c² ≲ m ≲ 0.6 GeV/c² to compensate for this decrease. However, it gets enhanced around the masses of the ρ/ω mesons and their excitations, due to the mixing of the dark photon and the B − L mediator with ω, ρ, ϕ [21].
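This interplay can be made concrete with a toy model in which, at fixed mass, the event count is maximized over the coupling: production scales as g², while the probability to reach and decay inside the volume falls exponentially once cτ becomes short. All normalizations below are invented for illustration; a factor-of-two enhancement of σ/g² mimics the resonant mixing with ρ/ω:

```python
import math

def events(g2, sigma_over_g2, ctau_g2=1e-3, betagamma=100.0, z_min=1.0, dz=1.5):
    """Toy event count for a short-lived LLP (all normalisations are made up):
    production ∝ g², decay length l = βγ·cτ with cτ = ctau_g2/g² (metres),
    times the probability to decay inside [z_min, z_min + dz]."""
    l_dec = betagamma * ctau_g2 / g2
    p_dec = math.exp(-z_min / l_dec) * (1.0 - math.exp(-dz / l_dec))
    return sigma_over_g2 * g2 * p_dec

grid = [10 ** (-3 + 3 * i / 400) for i in range(401)]   # scan g² over 3 decades
n_max_plain = max(events(g2, 50.0) for g2 in grid)      # off-resonance σ/g²
n_max_resonant = max(events(g2, 100.0) for g2 in grid)  # "resonantly" enhanced σ/g²
print(n_max_plain > 2.3, n_max_resonant > 2.3)  # False True
```

When the maximum over g falls below the threshold N_events > 2.3, the mass point drops out of the sensitivity region entirely, which is how the disconnected islands in Fig. 6 arise.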
Higgs-like scalars are efficiently produced in decays of B mesons. Apart from the Downstream setup, it may be possible to search for them at LHCb by studying processes of the type B → K(*) + S(→ µµ) localized in VELO, where S would manifest itself via a resonant contribution to the dimuon invariant mass [28; 39]. Compared to the projections of the future reach of this type of search reported in [38], the Downstream setup would cover lifetimes up to two orders of magnitude larger (see Fig. 7). The main reasons for this are the suppression of the event rate by the reconstruction efficiency for B → K(*) + S(→ µµ) (coming from the pT cut on the outgoing muons, the reconstruction of the kaon, and the requirement for the reconstructed B decay vertex to be sufficiently displaced), the branching ratio Br_B→K+S ≈ Br_B→X+S/8, the effective decay volume limited to VELO, and the branching ratio S → µµ (see Fig. 4).
As for the comparison with FASER/FASER2, for the model BC4 (zero trilinear coupling hSS), the obtained results agree with the qualitative estimates made in Sec. 5.2. Compared to FASER, the Downstream setup may deliver a much better sensitivity. As for FASER2, the Downstream setup would probe the same or slightly larger lifetimes at the lower bound, while at the upper bound, the probed domain is extended to smaller lifetimes, thanks to the much shorter distance to the decay volume. In the case of a non-zero hSS coupling (BC5), scalars may also be produced in the decays Bs → SS and B → SSX and in the 2-body Higgs boson decay h → SS. The experiment may search for such scalars up to the production threshold from Higgs bosons, mS < mh/2, again thanks to the very small distance to the decay volume. This is impossible at FASER, while the reach of FASER2 is limited to the vicinity of the kinematic threshold mS ≃ mh/2 due to the suppression of the number of scalars pointing to the detector [40].
For HNLs N (Fig. 8), there are three mass domains, depending on the main production channel. Unlike the dark scalar case, there is no possibility to utilize the signature B → K + N(→ µµ) for HNLs. First, HNLs are fermions, and angular momentum conservation, together with the HNL interaction properties, requires the presence of an additional lepton in the B decay; the probability of such a process, Bs → K + N + ℓ, is very suppressed [23]. Second, the only HNL decay into a dimuon state is the three-body process N → µµν; as a result, the dimuon mass distribution is not resonant.

The comparison with FASER/FASER2 shows the same pattern as in the case of dark scalars, again reproducing the qualitative conclusions of Sec. 5.2.
For the ALPs with a universal coupling to fermions (Fig. 9), BC10, the situation is very similar to the case of dark scalars, since the dominant production channel is the same (decays of B mesons), while the decays into fermions have a similar Yukawa-like hierarchy: the corresponding decay width scales as Γ_a→ff ∝ m²_f. The gaps in the sensitivity correspond to the vicinity of the masses of the neutral light mesons m0 = π0, η, η′, where the description of the ALP phenomenology based on the mixing with these mesons becomes inadequate.
In the case of the ALPs coupled to gluons (BC11), production via the mixing with the neutral mesons becomes the main channel. This results in a worse sensitivity of the Downstream setup compared to FASER2. Indeed, the m0's have a very narrow angular distribution: their characteristic pT is of the order of Λ_QCD. Given typical energies of the order of a TeV, the angular flux of mesons starts falling at θ < 1 mrad, i.e., well below the angular coverage of LHCb but within the range of FASER2. In addition, an important decay channel of these ALPs (in the mass range ma ≲ mη) is into a pair of photons [25], which are conservatively not considered as visible particles for the Downstream setup. Still, at the upper bound of the sensitivity, the Downstream setup would provide much better opportunities.

Fig. 9. The sensitivity to the ALPs universally coupled to fermions (BC10, the left panel) and to gluons (BC11, the right panel). The sensitivity of FASER2 and the excluded parameter space are taken from [3]. For the discussion of the sensitivity, see the text.
It is important to stress again (see Sec. 3) that the description of the ALP phenomenology considered in this paper differs from the one used to calculate the sensitivity of FASER2, which makes a direct comparison more complicated.
The sensitivities to all the LLPs considered in this paper may be improved if the effective decay volume extends from the end of the UT to the SciFi layers. At present, work is ongoing to include faraway tracks, with hits only in the SciFi, and to perform fast vertexing at the HLT1 while keeping a high throughput. This will extend the LLP search potential of LHCb even further.

Finally, it is important to consider the Downstream algorithm within the landscape of future experiments. As a reference model for the comparison, Higgs-like scalars are chosen, because their production mode, decays of B mesons, is representative of many other LLPs, such as HNLs and ALPs, and because they may be searched for at many experiments located at different facilities. The comparison of sensitivities is shown in Fig. 10. The included experiments are the recently approved SHiP, as well as FASER, FASER2, MATHUSLA, and CODEX-b. The sensitivity of CODEX-b is taken from [3] and that of MATHUSLA from [43], while the sensitivity of SHiP is computed using SensCalc. The comparison is tricky, since the experiments fall into different categories: they may be already approved or still at the stage of proposals (CODEX-b, MATHUSLA, FASER2); be equipped with a full detector or with just tracking layers (MATHUSLA), which is crucial for identifying the LLP; and run at different times. Namely, while the Downstream algorithm is going to run already in 2024 and FASER is already collecting data, the timescale for the other experiments is shifted: SHiP is expected to run after 2030 [44], and MATHUSLA and CODEX-b during the high-luminosity phase of the LHC [3]. In this light, the Downstream algorithm offers the best opportunity to search for LLPs in the next few years.

Conclusions
The current search strategies employed at the LHC's primary detectors, namely ATLAS, CMS, and LHCb, are not well suited for exploring the parameter space associated with hypothetical long-lived particles (LLPs) in the GeV mass range. Consequently, there has been a surge in proposals for experiments beyond the LHC dedicated to the search for LLPs. This study demonstrates the potential of efficiently harnessing the capabilities of the LHCb experiment by implementing the novel Downstream algorithm. This approach enables the exploration of events lacking hits in the innermost LHCb tracker. In comparison to the existing search methods employed by LHCb, this algorithm offers the advantages of triggering at the production vertex, enhanced background control, an expanded effective decay volume, and the ability to investigate various final states resulting from the decays of LLPs.
The Downstream setup holds promise for the investigation of a diverse range of LLPs, potentially rivaling the exploration potential of proposed LHC-based experiments like FASER2 (see Sec. 3). Leveraging the complete dataset from LHCb until Run 6, it becomes feasible to probe heavy neutral leptons (HNLs) with masses up to approximately 20 GeV/c², as well as dark photons and B − L mediators with masses of around 1 GeV/c². Moreover, this approach extends the search to Higgs-like scalars with lifetimes exceeding those accessible by the current LHCb search strategies, and to axion-like particles with various coupling patterns (as outlined in Sec. 6). Further enhancements in sensitivity can be achieved by enlarging the effective decay volume and by incorporating the possibility of reconstructing final states comprising exclusively photons, contingent upon the development of new triggers.

A Implementation of the setup for the Downstream algorithm in SensCalc
The LHCb detector with the Downstream setup has been implemented in the SensCalc framework to estimate the number of events and allow for comparisons. The implementation is shown in Fig. 11, and the details are given in the following.
For the decay volume, a conical frustum covering pseudorapidities 2 < η < 5 and extending in the longitudinal displacement z from z_min = 1 m to z_max = 7.7 m, where the first SciFi layer is located, is considered. If the tracks must also intersect the UT, the decay volume shrinks to z_max ≈ 2.5 m, the beginning of the UT. For the geometry of the SciFi layers, a parallelepiped with dimensions 6.48 m × 4.83 m × 1.7 m with a hole of radius R = 9 cm to account for the beam pipe is used, following [11]. The magnetic field of the dipole magnet extends from z = 3.5 m to z = 7.5 m, with the integrated field ∫B dl = 4 T·m.
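The frustum geometry above can be sketched as a simple containment test. The function below is a minimal illustration written for this text (the function name and interface are not part of SensCalc); it checks the pseudorapidity window 2 < η < 5 and the longitudinal range of the UT-constrained decay volume:

```python
import math

def in_decay_volume(x, y, z, z_min=1.0, z_max=2.5, eta_min=2.0, eta_max=5.0):
    """Check whether a point (metres) lies in the conical-frustum decay volume:
    longitudinal range [z_min, z_max] and pseudorapidity 2 < η < 5,
    where η = −ln tan(θ/2), with θ the polar angle from the beam axis."""
    if not (z_min <= z <= z_max):
        return False
    r = math.hypot(x, y)
    if r == 0.0:
        return False  # on-axis point: η → ∞, inside the beam pipe
    eta = -math.log(math.tan(0.5 * math.atan2(r, z)))
    return eta_min < eta < eta_max

print(in_decay_volume(0.1, 0.0, 2.0))  # True: η ≈ 3.7
print(in_decay_volume(1.0, 0.0, 1.2))  # False: η ≈ 1.0, too wide-angle
```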
The setup is available in the current SensCalc repository [45]. Depending on the details, there are three implemented options:

- LHCb-downstream
- LHCb-downstream-T-tracks-only
- LHCb-downstream-full

The first one corresponds to the setup considered in this paper: the decay volume extends from z = 1 m to z = 2.5 m, with SciFi as the detector. The second option also includes the domain 2.5 m < z < z_SciFi in the decay volume; it corresponds to the scenario where the event may be reconstructed purely from T tracks. Finally, the last one is a sketch of the full LHCb detector up to the muon stations (see Fig. 1). Users may easily add new configurations or modify the existing ones.

A.1 Validation
To validate the prediction of SensCalc, the event rate for the dark scalar mixed with the Higgs boson is analysed. Specifically, the acceptance for the dark scalars to have 2 < η < 5 and the z-dependence of the purely geometric part of the decay-products acceptance (i.e., with ϵ_rec = 1), defined in Eq. (10), are studied, and the results are compared with the LHCb simulations. Simulations in this work are performed using the RapidSim package [46], an application for fast simulation of phase-space decays of heavy hadrons, which allows for quick studies of the properties of signal and background decays in particle-physics analyses. It includes realistic production kinematic distributions, efficiencies, and momentum resolutions.
As shown in Fig. 12, a good agreement is obtained between the acceptance predicted by SensCalc and the RapidSim simulation.

Fig. 12. The acceptance (10) assuming ϵ_rec = 1, as estimated by SensCalc (blue) and predicted by the RapidSim simulations (red) [46]. See text for details.
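The geometric part of the decay-products acceptance can be illustrated with a straight-line extrapolation of a decay product from its vertex to the first SciFi layer, using the dimensions quoted in the appendix. This is only a sketch: the bending in the dipole field, which SensCalc accounts for, is neglected here, and the function name is introduced for illustration only:

```python
import math

def hits_scifi(vertex, direction, z_scifi=7.7,
               half_x=6.48 / 2, half_y=4.83 / 2, r_hole=0.09):
    """Straight-line extrapolation of a decay product from its vertex (metres)
    to the first SciFi layer at z = 7.7 m, neglecting the magnet bending.
    Returns True if the impact point falls inside the 6.48 m × 4.83 m plane
    but outside the R = 9 cm beam-pipe hole."""
    vx, vy, vz = vertex
    dx, dy, dz = direction
    if dz <= 0:
        return False  # not flying toward the detector
    t = (z_scifi - vz) / dz
    x, y = vx + t * dx, vy + t * dy
    if abs(x) > half_x or abs(y) > half_y:
        return False
    return math.hypot(x, y) > r_hole

# A daughter emitted at 30 mrad from a vertex at z = 1.5 m hits the plane:
print(hits_scifi((0.0, 0.0, 1.5), (0.03, 0.0, 1.0)))   # True
# A daughter flying almost along the beam axis ends up in the beam-pipe hole:
print(hits_scifi((0.0, 0.0, 1.5), (0.001, 0.0, 1.0)))  # False
```

Averaging such a test over simulated decay kinematics at fixed vertex position z yields the z-dependent geometric acceptance compared in Fig. 12.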

Fig. 2 .
Fig. 2. Definition of the particle track types in the LHCb experiment, according to which detectors are hit. The different tracker layers and the magnet in the center are sketched.

Fig. 4 .
Fig. 4. Decay probabilities of a dark scalar into different channels as a function of its mass, normalised to unity [22].

Fig. 6 .
Fig. 6. Sensitivity to dark photons (BC1, the left panel) and B − L mediators (the right panel) in the plane of LLP mass vs. LLP coupling. The sensitivity of future LHCb searches restricted to VELO is taken from [37], while the excluded parameter space and the sensitivity of the FASER and FASER2 experiments are taken from [3]. For the Downstream algorithm, in this and subsequent figures, two values of the integrated luminosity are assumed: 25 fb−1, corresponding to the partial statistics of Run 3, and 300 fb−1, the full statistics until Run 6. For the description of the models, see Sec. 3 and Ref. [26]. See the text for the discussion of the sensitivity.

Fig. 8.
Fig. 8. Sensitivity to HNLs coupled solely to νe (the top left panel), νµ (the top right panel), and ντ (the bottom panel). The parameter space excluded by past experiments, as well as the sensitivity of FASER2, are taken from [3]. The bottom gray domain below the short-dashed line corresponds to the parameter space excluded by BBN [41; 42].

Fig. 10 .
Fig. 10. Comparison of the sensitivities of future proposed and approved experiments to the model of Higgs-like scalars (BC4). See text for details.

Fig. 11.
Fig. 11. The geometry of the LHCb setup as implemented in SensCalc. The thick black point corresponds to the origin of the coordinate frame, coinciding with the point of pp collisions. The blue region corresponds to the decay volume, while the red one is the detector. The green plane shows the location of the UT layers; if the tracks are also required to intersect the UT, the decay volume shrinks to the domain up to the UT plane.

Table 2. Setups of LHCb with the Downstream algorithm and of the FASER and FASER2 experiments used for the comparison of the signal rates. The columns are: the name of the experiment, the integrated luminosity, the minimal and maximal longitudinal displacement covered by the decay volume, the minimal and maximal angles covered by the decay volume, and the selection criteria imposed on the LLP decay. Two different luminosities are considered for the Downstream algorithm in order to make a proper comparison with FASER/FASER2 (see text for details).