Expression of Interest for the CODEX-b Detector

This document presents the physics case and ancillary studies for the proposed CODEX-b long-lived particle (LLP) detector, as well as for a smaller proof-of-concept demonstrator detector, CODEX-$\beta$, to be operated during Run 3 of the LHC. Our development of the CODEX-b physics case synthesizes "top-down" and "bottom-up" theoretical approaches, providing a detailed survey of both minimal and complete models featuring LLPs. Several of these models have not been studied previously, and for some others we amend studies from the previous literature, in particular for gluon- and fermion-coupled axion-like particles. We moreover present updated simulations of the expected backgrounds in CODEX-b's actively shielded environment, including the effects of post-propagation uncertainties, high-energy tails and variations in the shielding design. Initial results are also included from a background measurement and calibration campaign. A design overview is presented for the CODEX-$\beta$ demonstrator detector, which will enable background calibration and detector design studies. Finally, we lay out brief studies of various design drivers of the CODEX-b experiment and potential extensions of the baseline design, including the physics case for a calorimeter element, precision timing, event tagging within LHCb, and precision low-momentum tracking.


Executive summary
The Large Hadron Collider (LHC) provides unprecedented sensitivity to short-distance physics. Primary achievements of the experimental program include the discovery of the Higgs boson [1,2], the ongoing investigation of its interactions [3], and remarkable precision Standard Model (SM) measurements. Furthermore, a multitude of searches for physics beyond the Standard Model (BSM) have been conducted over a tremendous array of channels. These have resulted in greatly improved BSM limits, with no new particles or force carriers having been found.
The primary LHC experiments (ATLAS, CMS, LHCb, ALICE) have proven to be remarkably versatile and complementary in their BSM reach. As these experiments are scheduled for upgrades and data collection over at least another 15 years, it is natural to consider whether they can be further complemented by one or more detectors specialized for well-motivated but currently hard-to-detect BSM signatures. A compelling category of such signatures is long-lived particles (LLPs), which generally appear in any theory containing a hierarchy of scales or small parameters, and are therefore ubiquitous in BSM scenarios.
The central challenge in detecting LLPs is that not only their masses but also their lifetimes may span many orders of magnitude. This makes it impossible from first principles to construct a single detector which would have the ultimate sensitivity to all possible LLP signatures; multiple complementary experiments are necessary, as summarized in Fig. 1.
In this expression of interest we advocate for CODEX-b ("COmpact Detector for EXotics at LHCb"), an LLP detector that would be installed in the DELPHI/UXA cavern next to the LHCb interaction point (IP8). The approximate proposed timeline is given in Fig. 2; here "CODEX-β" refers to a smaller proof-of-concept detector with otherwise the same basic geometry and technology as CODEX-b.
The central advantages of CODEX-b are:
• Very competitive sensitivity to a wide range of LLP models, either exceeding or complementary to the sensitivity of other existing or proposed detectors;
• An achievable zero background environment, as well as an accessible experimental location in the DELPHI/UXA cavern with all necessary services already in place;
• Straightforward integration into LHCb's trigger-less readout and the ability to tag events of interest with the LHCb detector;
• A compact size and consequently modest cost, with the realistic possibility to extend detector capabilities for neutral particles in the final state.
We survey a wide range of BSM scenarios giving rise to LLPs and demonstrate how these advantages translate into competitive and complementary reach with respect to other proposals. We furthermore detail the experimental and simulation studies carried out so far, showing that CODEX-b can be built as planned and operate as a zero background experiment. We also discuss possible technology options that may further enhance the reach of CODEX-b. Finally, we discuss the timetable for the construction and data taking of CODEX-β, and show that it may also achieve new reach for certain BSM scenarios.

Motivation
New Physics (NP) searches at the LHC and other experiments have primarily been motivated by the predictions of various extensions of the SM, designed to address long-standing open questions. These include e.g. the origin and nature of dark matter, the detailed dynamics of the weak scale, and the mechanism of baryogenesis, among many others. However, in the absence of clear experimental NP hints, the solutions to these puzzles remain largely mysterious. Combined with increasing tensions from current collider data on the most popular BSM extensions, it has become increasingly imperative to consider whether the quest for NP requires new and innovative strategies: a means to diversify LHC search programs with a minimum of theoretical prejudice, and to seek signatures for which the current experiments are trigger and/or background limited. A central component of this program will be the ability to probe 'dark' or 'hidden' sectors, composed of degrees of freedom that are 'sterile' under the SM gauge interactions. Hidden sectors are ubiquitous in many BSM scenarios, and typically may feature either suppressed renormalizable couplings, heavy mediator exchanges with SM states, or both. 1 The sheer breadth of possibilities for these hidden sectors mandates search strategies that are as model-independent as possible.
Suppressed dark-SM couplings or heavy dark-SM mediators may in turn give rise to relatively long lifetimes for light degrees of freedom in the hidden spectrum, by suppressing their total widths via small couplings, the mediator mass, loops and/or phase space. This scenario is very common in many models featuring e.g. Dark Matter (Sects. …), and it has well-known SM analogues in e.g. the π±, neutron and muon, whose widths are suppressed by the weak interaction scale required for flavor-changing processes, as well as by phase space. Vestiges of hidden sectors may then manifest in the form of striking morphologies within LHC collisions, in particular the displaced decay-in-flight of these metastable, light hidden-sector particles, commonly referred to as 'long-lived particles' (LLPs). Surveying a wide range of benchmark scenarios, we demonstrate in this document that by searching for such LLP decays, CODEX-b would permit substantial improvements in reach for many well-motivated NP scenarios, well beyond what could be gained by an increase in luminosity at the existing detectors.
1 (continued) the 'Higgs portal' is particularly compelling, because our theoretical understanding of Higgs interactions is likely incomplete, and new states might interact with it. In addition, the Higgs itself may have a sizable branching ratio to exotic states, since its SM partial width is suppressed by the b-quark Yukawa coupling. Experiments capable of leveraging large samples of Higgs bosons are then natural laboratories to search for NP. Understanding the properties of the Higgs sector will be central to ongoing and future particle physics programs.

Experimental requirements
In any given NP scenario, the decay width of an LLP may exhibit strong power-law dependencies on a priori unknown ratios of various physical scales. As a consequence, theoretical priors for the LLP lifetime are broad, such that LLPs may occupy a wide phenomenological parameter space. In the context of the LHC, LLP masses from several MeV up to O(1) TeV may be contemplated, and proper lifetimes as long as 0.1 seconds may be permitted before tensions with Big Bang Nucleosynthesis arise [4][5][6].
Broadly speaking, the ability of any given experiment to probe a particular point in this space of LLP masses and lifetimes will depend strongly not only on the center-of-mass energy available to the experiment, but also on its fiducial detector volume, distance from the interaction point (IP), triggering limitations, and the size of irreducible backgrounds in the detector environment [7]. Irreducible backgrounds are particularly large for light-LLP searches, which therefore require a shielded, background-free detector. Further, LLP production channels involving the decay of a heavy parent state -e.g. a Higgs decay -require sufficient partonic center-of-mass energy, √ŝ, to produce an abundant sample of heavy parents. Such channels are thus probed most effectively transverse to an LHC interaction point. Taken together, these varying requirements prevent any single experimental approach from attaining comprehensive coverage over the full parameter space.
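The interplay between lifetime, boost and detector geometry can be made concrete with the exponential decay law. The sketch below is illustrative only: the distances are rough CODEX-b values (detector front and back at about 25 m and 35 m from the IP, see Sect. 1.3), and `beta_gamma` is a hypothetical average boost, not a number taken from the simulation studies.

```python
import math

def decay_probability(ctau_m, beta_gamma, l_near=25.0, l_far=35.0):
    """Probability that an LLP with proper decay length ctau_m (metres) and
    boost beta*gamma decays between l_near and l_far metres from the IP,
    using the exponential decay law along the line of flight."""
    lab_decay_length = beta_gamma * ctau_m  # lab-frame decay length
    return math.exp(-l_near / lab_decay_length) - math.exp(-l_far / lab_decay_length)

# In the long-lifetime regime the probability falls off as
# (l_far - l_near) / (beta_gamma * ctau): a 10 km proper decay length with
# beta*gamma ~ 3 gives a per-particle decay probability of order 1e-4.
p_long = decay_probability(ctau_m=1.0e4, beta_gamma=3.0)
p_short = decay_probability(ctau_m=1.0, beta_gamma=3.0)
```

This is why sensitivity at long lifetimes is driven by the production rate times the detector's angular acceptance and depth, while at short lifetimes the distance to the detector dominates.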
Experimental coverage of LLP searches is also determined by the morphology of LLP decays. The simplest scenario contemplates a large branching ratio for 2-body LLP decays to two charged SM particles -for instance ℓ⁺ℓ⁻, π⁺π⁻ or K⁺K⁻. In many well-motivated benchmark scenarios (see Sect. 2), however, the LLP may decay to various final states involving missing energy, photons, or high-multiplicity, softer final states. In any experimental environment, these more complex decay morphologies can be much more challenging to detect or reconstruct: Reconstructing missing energy final states requires the ability to measure track momenta; detecting photons requires a calorimeter element or preshower component; identifying high-multiplicity final states requires the suppression of soft hadronic backgrounds. The CODEX-b baseline concept, as described below in Sect. 1.3, is well-suited to reconstruct several of these morphologies, in addition to the simple 2-body decays. Extensions of the baseline design may permit some calorimetry or pre-shower capabilities, which would enable the reconstruction of photons and other neutral hadrons.

Baseline detector concept
The proposed CODEX-b location is in the UX85 cavern, roughly 25 meters from the interaction point 8 (IP8), with a nominal fiducial volume of 10 m × 10 m × 10 m (see Fig. 3a) [8]. Specifically, the fiducial volume is defined by 26 m < x < 36 m, −7 m < y < 3 m and 5 m < z < 15 m, where the z direction is aligned along the beam line and the origin of the coordinate system is taken to be the interaction point. This location roughly corresponds to the pseudorapidity range 0.13 < η < 0.54. Passive shielding is partially provided by the existing UXA wall, while the remainder is achieved by a combination of active vetos and passive shielding located nearer to the IP. A detailed description of the backgrounds and the required amount of shielding can be found in Sect. 3.
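The quoted pseudorapidity range follows directly from the fiducial coordinates: for a line of sight from the IP to a point at transverse distance x and longitudinal distance z, η = −ln tan(θ/2) = asinh(z/x). A minimal cross-check, taking the y = 0 plane and neglecting the vertical extent of the box:

```python
import math

def eta(x_transverse, z_longitudinal):
    """Pseudorapidity of the line of sight from the IP to a point at
    transverse distance x and longitudinal distance z (both in metres),
    eta = -ln(tan(theta/2)) = asinh(z / x) in the y = 0 plane."""
    return math.asinh(z_longitudinal / x_transverse)

# Extreme corners of the nominal fiducial box: 26 m < x < 36 m, 5 m < z < 15 m
eta_min = eta(36.0, 5.0)   # shallowest line of sight, ~0.138
eta_max = eta(26.0, 15.0)  # steepest line of sight,   ~0.549
```

These values reproduce the quoted coverage 0.13 < η < 0.54 once truncated to two decimals.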
The actual reach of any LLP detector will be tempered by various efficiencies, including efficiencies for tracking and vertex reconstruction. In particular, no magnetic field will be available in the CODEX-b fiducial volume. To design an LLP detection program, rather than only an exclusionary one, it is therefore important to be able to confirm the presence of exotic physics and reject possibly mis-modeled backgrounds. This requires capabilities for particle identification, mass reconstruction and/or event reconstruction.
To address these considerations, several detector concepts are being explored. The baseline CODEX-b conceptual design makes use of Resistive Plate Chamber (RPC) tracking stations with O(100) ps timing resolution. A detector that is hermetic with respect to the LLP decay vertex is needed to achieve good signal efficiency and background rejection. In the baseline design, this is achieved by placing six RPC layers on each face of the detector. To ensure good vertex resolution, five additional triplets of RPC layers are placed equally spaced along the depth of the detector, as shown in Fig. 3b. Other, more ambitious options are being considered, which use RPCs together with large-scale calorimeter technologies such as liquid [10] or plastic scintillators, as used in accelerator neutrino experiments such as NOνA [11], the T2K upgrade [12] or DUNE [13]. If deemed feasible, implementing one of these options would permit the measurement of decay modes involving neutral final states, improved particle identification and more efficient background rejection techniques.
Fig. 3 Left: …, overlaid with the CODEX-b volume, as reproduced from Ref. [8]. Right: Schematic representation of the proposed detector geometry.
Because the baseline CODEX-b concept makes use of proven and well-understood technologies for tracking and precision timing, any estimation or simulation of the net reconstruction efficiencies is expected to be reliable. These estimates must ultimately be validated by data-driven determinations from a demonstrator detector, which we call CODEX-β (see Sect. 4). Combined together, the baseline tracking and timing capabilities will permit mass reconstruction and particle identification for some benchmark scenarios.
The transverse location of the detector permits reliable background simulations based on well-measured SM transverse production cross-sections. The SM particle propagation through matter -necessary to simulate the response of the UXA radiation wall and the additional passive and active shielding -is also well understood for the typical particle energies generated in that pseudorapidity range. The proposed location behind the UXA radiation wall will also permit regular maintenance of the experiment, e.g. during technical or other stops. In addition to background simulations, the active veto and the ability to vary the amount of shielding over the detector acceptance permit LLP measurements or exclusions to be determined with respect to data-driven baseline measurements or calibrations of relevant backgrounds (see Sect. 3).

Search power, complementarities and unique features
Although ATLAS, CMS and LHCb were not explicitly designed with LLP searches in mind, they have been remarkably effective at probing a large region of the LLP parameter space (see [7,14] for recent reviews). The main variables which provide the necessary discrimination for triggering and off-line background rejection are often the amount of energy deposited and/or the number of tracks connected to the displaced vertex. In most searches, the signal efficiency therefore drops dramatically for low mass LLPs, especially when they are not highly energetic (e.g. from Higgs decays). For instance, the penetration of hadrons into the ATLAS or CMS muon systems, combined with a reduced trigger efficiency, attenuates the LHC reach for light LLPs, m_LLP ≲ 10 GeV, decaying in the muon systems.
Beam dump experiments such as SHiP [15][16][17], NA62 [18] in beam-dump mode, as well as forward experiments like FASER [19][20][21] evade this problem by employing passive and/or active shielding to fully attenuate the SM backgrounds. The LLPs are moreover boosted in a relatively narrow cone, and very high beam luminosities can be attained. This results in excellent reach for light LLPs that are predominantly produced at relatively low center-of-mass energy, such as a kinetically mixed dark photon. The main trade-off in this approach is, however, the limited partonic center-of-mass energy, which severely limits the sensitivity to heavier LLPs or LLPs primarily produced through heavy portals (e.g. Higgs decays).
Finally, proposals in pursuit of shielded, transverse, background-free detectors such as MATHUSLA [22,23], CODEX-b [8] and AL3X [24] aim to operate at relatively low pseudorapidity η, but with far greater shielding compared to the ATLAS and CMS muon systems. This removes the background rejection and triggering challenges even for low mass LLPs, m_LLP ≲ 10 GeV, though at the expense of a reduced geometric acceptance and/or reduced luminosity. Because of their location transverse to the beamline, they can access processes for which a high parton center-of-mass energy is needed, such as Higgs and Z production. In this light, the regimes for which existing and proposed experiments have the most effective coverage can be roughly summarized as follows: such shielded transverse detectors are most effective for LLPs in the long lifetime regime (1 m ≲ cτ ≲ 10⁷ m), and for high-√ŝ production channels.
In Fig. 4 we provide a visual schematic summarizing these LLP coverages, showing slices in the space of LLP mass, lifetime, and √ŝ , that provide a sketch of the complementarity and unique features of various LLP search strategies and proposals. Relative to the existing LHC detectors, CODEX-b will be able to probe unique regimes of parameter space over a large range of well motivated models and portals, explored further in the Physics Case in Sect. 2. A more extensive discussion and evaluation of the landscape of LLP experimental proposals can be found in Refs. [7] and [25].
While the ambitiously sized 'transverse' detector proposals such as MATHUSLA and AL3X would explore even larger ranges of the parameter space, the more manageable and modest size of CODEX-b provides a substantially lower cost alternative with good LLP sensitivity. 2 It also allows for the possibility of additional detector subsystems, such as precision tracking and calorimetry. Furthermore, the proximity of CODEX-b to the LHCb interaction point (IP8) and LHCb's trigger-less readout (based on standardized and readily available technologies) makes it straightforward to integrate the detector into the LHCb readout for triggering and/or partial event reconstruction. This capability is not available to any other proposed LLP experiment at the LHC interaction points, and may prove crucial to authenticate any signals seen by CODEX-b. For a further discussion of the experimental design drivers and preliminary case studies of how different detector capabilities can affect the sensitivity for different models, we refer to Sect. 5.
2 The degree to which each of these detectors can compete with ATLAS and CMS in the high mass regime, m_LLP ≳ 10 GeV, depends on their angular acceptance and integrated luminosity. The larger volume MATHUSLA and AL3X proposals therefore typically remain more competitive with the main detectors for higher LLP masses than CODEX-b.

Timeline
The CODEX-β demonstrator detector is proposed for Run 3 and is therefore complementary in time to the other funded proposals such as FASER. In contrast, the full version of CODEX-b, as well as FASER2, SHiP, MATHUSLA, and AL3X are all proposed to operate in Runs 4 or 5 during the HL-LHC. We show the nominal timeline for CODEX-β and CODEX-b in Fig. 5. Results as well as design and construction lessons from CODEX-β are expected to inform the final design choices for the full detector, and may also inform the evolution of the schedule shown in Fig. 5. The modest size of CODEX-b, the accessibility of the DELPHI cavern, and the use of proven technologies in the baseline design are expected to imply not only lower construction and maintenance costs but also a relatively short construction timescale. It should be emphasized that CODEX-b may provide complementary data both in reach and in time, at relatively low cost, to potential discoveries in other more ambitious proposals, should they be built, as well as to existing LHC experiments.

Theory survey strategies
Long-lived particles occur generically in theories with a hierarchy of mass scales and/or couplings (see Sect. 1.1), such as the Standard Model and many of its possible extensions. This raises the question of how best to survey the reach of any new or existing experiment in the theory landscape. Given the vast range of possibilities, injecting some amount of "theory prejudice" cannot be avoided. We therefore consider two complementary strategies to survey the theory space: (i) studying minimal models or "portals", where one extends the Standard Model with a single new particle that is inert under all SM gauge interactions. The set of minimal models satisfying this criterion is both predictive and relatively small -we restrict ourselves to the set of minimal models generating operators of dimension 4 or lower, as well as the well-motivated dimension 5 operators for axion-like particles. It is important to keep in mind, however, that minimal models are merely simplified models, meant to parametrize different classes of phenomenological features that may arise in more complete models. To mitigate this deficiency to some extent, we then also consider: (ii) studying a number of complete models, which are more elaborate but aim to address one or more of the outstanding problems of the Standard Model, such as the gauge hierarchy problem, the mechanism of baryogenesis, or the nature of dark matter. These complete models include, for example, a scenario which contains LLPs produced through an exotic Z decay, and which has not been studied previously.

Minimal models
The underlying philosophy of the minimal model approach is that the symmetries of the SM already strongly restrict the portals through which a new, neutral state can interact with our sector. The minimal models can then be classified according to whether the new particle is a scalar (S), a pseudoscalar (a), a fermion (N) or a vector (A′). In each case there are only a few operators of dimension 4 or lower (dimension 5 for the pseudoscalar) which are allowed by gauge invariance. The most common nomenclature of the minimal models and their corresponding operators is
Abelian hidden sector: $\epsilon\, F'_{\mu\nu} F^{\mu\nu}$, $h\, A'_\mu A'^\mu$, (1a)
Scalar-Higgs portal: $A_S\, S\, H^\dagger H$, $\lambda\, S^2 H^\dagger H$, (1b)
Heavy neutral lepton: $y_N\, \bar L \tilde H N$, (1c)
Axion portal: $\frac{a}{f_a} G_{\mu\nu}\tilde G^{\mu\nu}$, $\frac{a}{f_a} W_{\mu\nu}\tilde W^{\mu\nu}$, $\frac{a}{f_a} B_{\mu\nu}\tilde B^{\mu\nu}$, $\frac{\partial_\mu a}{f_a} \bar\psi \gamma^\mu \gamma_5 \psi$, (1d)
where $F'_{\mu\nu}$ is the field strength corresponding to the U(1)′ gauge field A′, H is the SM Higgs doublet, and h the physical, SM Higgs boson. 3 Where applicable, we consider cases in which a different operator is responsible for the production and the decay of the LLP, as summarized in Fig. 6. Note that the $h A'_\mu A'^\mu$ and $S^2 H^\dagger H$ operators respect a Z₂ symmetry for the new fields and will not induce a decay of the LLP on their own. For the axion portal, the ALP can couple independently to the SU(2) and U(1) gauge bosons. In the infrared, only the linear combination corresponding to $a F \tilde F$ survives, though the coupling to the massive electroweak bosons can contribute to certain production modes. Moreover, the gauge operators mix into the fermionic operators through renormalization group running. Classifying the models according to production and decay portals obscures this key point 4 , and we have therefore chosen to present the model space for the ALPs in Fig. 6 in terms of UV operators. Once the UV boundary condition at a scale Λ is given, such a choice fully specifies both the ALP production and the decay modes, which often proceed via a combination of the listed operators.

Abelian hidden sector
The Abelian hidden sector model [26][27][28] is a simple extension of the Standard Model, consisting of an additional, massive U(1)′ gauge boson (A′) and its corresponding Higgs boson (H′) (see e.g. [29][30][31][32][33][34][35] for an incomplete list of references containing other models with similar phenomenology). The A′ and the H′ can mix with respectively the SM photon [36,37] and Higgs boson, each of which provides a portal into this new sector. In the limit where the H′ is heavier than the SM Higgs, it effectively decouples from the phenomenology, such that only the operators in (1a) remain in the low energy effective theory.
The mixing of the A′ with the photon through the $F_{\mu\nu} F'^{\mu\nu}$ operator can be rewritten as a (millicharged) coupling of the A′ to all charged SM fermions. In the limit that the $h A'_\mu A'^\mu$ coupling is negligible (along with higher dimension operators, such as $h F_{\mu\nu} F'^{\mu\nu}$), the mixing with the photon alone can induce both the production and decay of the A′ in a correlated manner, which has been studied in great detail (see e.g. [25] and references therein). CODEX-b has no sensitivity to this scenario, because the large couplings required for sufficient production cross-sections imply an A′ lifetime that is too short for any A′s to reach the detector. However, the LHCb VELO and various forward detectors are already expected to greatly improve the reach for this scenario [19,[38][39][40][41][42][43][44][45]].
3 The second operator in Eq. (1a) is strictly speaking not gauge invariant, but can be trivially generated by the kinetic term of a heavy dark scalar charged under the U(1)′ that acquires a vacuum expectation value (VEV) and mixes with the SM Higgs (see e.g. [26][27][28]).
4 An example of when such an identification is not as straightforward is provided by the case of an ALP coupled to photons, where the main production mechanism relevant for CODEX-b is via an effective ALP coupling to quarks.
The $h A'_\mu A'^\mu$ operator, by contrast, is controlled by the mixing of the H′ with the SM Higgs. This can arise from the kinetic term $|D_\mu H'|^2$, with $\langle H' \rangle \neq 0$ and $H$-$H'$ mixing.
This induces the exotic Higgs decay h → A′A′. In the limit where the mixing with the photon is small, this becomes the dominant production mode for the A′, which then decays through the kinetic mixing portal to SM states. CODEX-b would have good sensitivity to this scenario due to its transverse location, with high √ŝ. Importantly, the coupling to the Higgs and the mixing with the photon are independent parameters, so that the lifetime of the A′ and the h → A′A′ branching ratio are themselves independent, and are therefore convenient variables with which to parameterize the model. Figure 7 shows the reach of CODEX-b for two different values of the A′ mass, as done in Ref. [8] (see commentary therein), as well as the reach of AL3X [24] and MATHUSLA [22].
For ATLAS and CMS, the muon spectrometers have the largest fiducial acceptance as well as the most shielding, thanks to the hadronic calorimeters. The projected ATLAS reach for 3 ab−1 was taken from Ref. [46] for the low mass benchmark. In Ref. [47], searches for one displaced vertex (1DV) and two displaced vertices (2DV) were performed with 36.1 fb−1 of 13 TeV data. We use these results to extrapolate the reach of ATLAS for the high mass benchmark to the full HL-LHC dataset, where the widths of the bands correspond to the range between a 'conservative' and an 'optimistic' extrapolation for each of the 1DV and 2DV searches. Concretely, the 1DV search in Ref. [47] is currently background limited, with comparable systematic and statistical uncertainties. For our optimistic 1DV extrapolation we assume that the background scales linearly with the luminosity and that the systematic uncertainties can be made negligible with further analysis improvements; this corresponds to rescaling the current expected limit by 36.1 fb−1/3000 fb−1. For our conservative 1DV extrapolation we assume the systematic uncertainties remain the same, with negligible statistical uncertainties; this corresponds to improving the current expected limit by roughly a factor of ∼2. The 2DV search in Ref. [47] currently has an expected background of 0.027 events, which implies ∼3 expected background events if the background is assumed to scale linearly with the luminosity. For our optimistic 2DV extrapolation we assume the search remains background free, which corresponds to rescaling the current expected limits by 36.1 fb−1/3000 fb−1. For the conservative 2DV extrapolation we assume 10 expected and observed background events, leading to a slightly weaker limit than with the background-free assumption.
Fig. 7 Reach for h → A′A′, as computed in Ref. [8]. Shaded bands refer to the optimistic and conservative estimates of the ATLAS sensitivity [46,47] for 3 ab−1, as explained in the text. The horizontal dashed line represents the estimated HL-LHC limit on the invisible branching fraction of the Higgs [48]. The MATHUSLA reach is shown for its 200 m × 200 m configuration with 3 ab−1; for AL3X, 100 fb−1 of integrated luminosity was assumed.
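The arithmetic behind these luminosity extrapolations can be sketched as follows. This is a simplified illustration of the scaling arguments only, not the full statistical treatment used for the bands:

```python
lumi_now = 36.1    # current integrated luminosity [fb^-1]
lumi_hl = 3000.0   # HL-LHC dataset [fb^-1]

# Optimistic extrapolation: the search stays background free (or systematics
# become negligible), so the limit on the signal *yield* is constant and the
# limit on the branching ratio scales down linearly with luminosity.
optimistic_rescale = lumi_now / lumi_hl  # ~0.012

# If instead the background scales linearly with luminosity, the current 2DV
# expectation of 0.027 events grows to a few events at the HL-LHC.
bkg_now = 0.027
bkg_hl = bkg_now * lumi_hl / lumi_now  # a few expected background events
```

With a handful of expected background events, a Poisson counting limit degrades only mildly relative to the background-free case, which is why the conservative 2DV band sits close to the optimistic one.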
Upon rescaling cτ to account for differences in boost distributions, the maximum CODEX-b reach is largely insensitive to the mass of the A′, modulo minor differences in reconstruction efficiency for highly boosted particles (see Sect. 5.1). This is not the case for ATLAS and CMS, where higher masses generate more activity in the muon spectrometer, which helps greatly in reducing the SM backgrounds.
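A minimal sketch of this kind of rescaling, assuming the LLP momentum spectrum is roughly mass independent so that the lab-frame decay length βγcτ = (p/m)cτ is the quantity to hold fixed (the actual studies use the full boost distributions, so this is an approximation only):

```python
def rescale_ctau(ctau_ref, mass_ref, mass_new, mean_momentum):
    """Map a proper-lifetime contour from one LLP mass to another by keeping
    the lab-frame decay length beta*gamma*ctau = (p/m)*ctau fixed, for a
    fixed (mass-independent) mean momentum."""
    boost_ref = mean_momentum / mass_ref
    boost_new = mean_momentum / mass_new
    return ctau_ref * boost_ref / boost_new  # = ctau_ref * mass_new / mass_ref

# Example: a contour at ctau = 10 m for a 0.5 GeV LLP maps to ctau = 20 m
# for a 1 GeV LLP, independent of the assumed mean momentum.
```

In this approximation the mapping depends only on the mass ratio, which is why the maximum reach, expressed in rescaled cτ, is largely mass insensitive.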

Scalar-Higgs portal
The most minimal extension of the SM consists of adding a single, real scalar degree of freedom (S). Gauge invariance restricts the Lagrangian to
$\mathcal{L} \supset \mathcal{L}_{\rm SM} + \tfrac{1}{2}(\partial_\mu S)^2 - \tfrac{1}{2} m_S^2 S^2 - (A_S\, S + \lambda\, S^2)\, H^\dagger H + \cdots,$
where the ellipsis denotes higher dimensional operators, assumed to be suppressed. This minimal model is often referred to simply as the "Higgs portal" in the literature, though the precise meaning of the latter can vary depending on the context. LHCb has already been shown to have sensitivity to this model [49,50], and CODEX-b would greatly extend this sensitivity into the small coupling/long lifetime regime.
The parameter A_S can be exchanged for the mixing angle, sin θ, of the S with the physical Higgs boson eigenstate. In the mass eigenbasis, the new light scalar therefore inherits all the couplings of the SM Higgs: mass-hierarchical couplings to all the SM fermions, as well as couplings to photons and gluons at one loop. All such couplings are suppressed by the small parameter sin θ. The couplings induced by Higgs mixing are responsible not only for the decay of S [51][52][53][54][55], but also contribute to its production cross-section. Concretely, for m_K < m_S < m_B, the dominant production mode is via the b → s penguin in Fig. 8a [56][57][58], because S couples most strongly to the virtual top quark in the loop. If the quartic coupling λ is non-zero, the rate is supplemented by a penguin with an off-shell Higgs boson, shown in Fig. 8b [59], as well as by direct Higgs decays, shown in Fig. 8c.
Fig. 8 Diagrams responsible for S production in a minimal extended Higgs sector: (a) is proportional to the mixing between S and the Higgs, sin²θ, while (b) and (c) are proportional to the square of the quartic coupling, λ².
In Fig. 9 we show the reach of CODEX-b for two choices of λ, following [25]: (i) λ = 0, corresponding to the most conservative scenario, in which the production rate is smallest; (ii) λ = 1.6 × 10−3, chosen such that Br[h → SS] = 0.01. 5 The latter roughly corresponds to the future reach for the branching ratio of the Higgs to invisible states; in this sense it is the most optimistic scenario that would not already be probed by ATLAS and CMS. The reach for other choices of λ therefore interpolates between these two scenarios. Also shown are the limits from LHCb [49,50] and CHARM [60], and projections for MATHUSLA [61], FASER2 [62], SHiP [63], AL3X [24] and LHCb, where for the latter we extrapolated the limits from [49,50], assuming (optimistically) that the large lifetime signal region remains background free with the HL-LHC dataset. The scalar-Higgs portal is, by virtue of its minimality, very constraining as a model. When studying LLPs produced in B decays, it is therefore worthwhile to relax its assumptions, in particular the full correlation between the lifetime and the production rate -the b → sS branching ratio -as is the case in a number of non-Minimally Flavor Violating (MFV) models (see e.g. [64][65][66][67]). Figure 10 shows the CODEX-b reach in the b → sS branching ratio for a number of benchmark LLP mass points, as done in Ref. [8]. The LHCb reach and exclusions are taken and extrapolated from Refs. [49,50], assuming a 30% (10%) branching ratio of S → μμ for the 0.5 GeV (1 GeV) benchmark (see Ref. [8]). Also shown are the current and projected limits from B → K(*)νν [68,69].
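The correspondence between λ and Br[h → SS] can be checked at tree level. The sketch below uses one common normalization convention for the quartic vertex (conventions differ between references by O(1) factors) and an illustrative SM Higgs width of 4.07 MeV; it roughly reproduces the quoted percent-level branching ratio, rather than being the collaboration's exact computation:

```python
import math

V_EV = 246.0          # Higgs vacuum expectation value [GeV]
M_H = 125.0           # Higgs boson mass [GeV]
GAMMA_H_SM = 4.07e-3  # illustrative SM Higgs total width [GeV]

def br_h_to_ss(lam, m_s):
    """Tree-level Br(h -> SS) from the quartic operator lambda * S^2 H^dag H,
    taking the h-S-S vertex to be 2*lambda*v (one common convention)."""
    width = (lam**2 * V_EV**2 / (8.0 * math.pi * M_H)
             * math.sqrt(1.0 - 4.0 * m_s**2 / M_H**2))
    return width / (GAMMA_H_SM + width)

# lambda = 1.6e-3 indeed gives a percent-level branching ratio for a light S
br = br_h_to_ss(1.6e-3, 1.0)
```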
A crucial difference compared to LHCb is that the CODEX-b reach depends only on the total branching ratio to charged tracks, rather than on the branching ratio to muons.
Interestingly, the CODEX-β detector proposed for Run 3 (see Sect. 4) may already have novel sensitivity to the b → sS branching ratio, as shown in Fig. 29. This reach is estimated under the requirement that the number of tracks in the final state is at least four, in order to control the relevant backgrounds (see Sect. 3). A more detailed discussion of this reach is reserved for Sect. 4.
5 In the specific context of this minimal model, this size of the quartic coupling implies that m_S is rather severely fine-tuned for m_S ≲ 10 GeV.

Axion-like particles
Axion-like particles (ALPs) are pseudoscalar particles coupled to the SM through dimension-5 operators. They arise in a variety of BSM models, and when associated with the breaking of approximate Peccei-Quinn-like symmetries they tend to be light. Furthermore, their (highly) suppressed dimension-5 couplings naturally render them excellent candidates for LLP searches. The Lagrangian for an ALP, a, can be parameterized as [70]

L ⊃ (1/2)(∂_μ a)(∂^μ a) − (m_a²/2) a² + (a/Λ)(c_G G^a_{μν} G̃^{a,μν} + c_W W^i_{μν} W̃^{i,μν} + c_B B_{μν} B̃^{μν}) + (∂_μ a/Λ)(c_q^{ij} q̄_i γ^μ γ_5 q_j + c_ℓ^{ij} ℓ̄_i γ^μ γ_5 ℓ_j),   (4)

where G̃^{μν} = (1/2) ε^{μνρσ} G_{ρσ}. The couplings to fermions do not have to be aligned in flavor space with the SM Yukawas, leading to interesting flavor-violating effects. The gauge operators mix into the fermionic ones at one loop, and therefore in choosing a benchmark model one needs to specify the values of these couplings as a UV boundary condition at a scale Λ. In the following we focus on the same benchmark models chosen in the Physics Beyond Colliders (PBC) community study [25], based on the ALP coupling to photons ("BC9", defined by c_W + c_B ≠ 0), universally to quarks and leptons ("BC10", c_q^{ij} = c δ^{ij}, c_ℓ^{ij} = c δ^{ij}, c_G = c_W = c_B = 0), and to gluons ("BC11", c_G ≠ 0). Another interesting benchmark to consider is the so-called photophobic ALP [71], in which the ALP couples only to the SU(2) × U(1) gauge bosons, such that it is decoupled from the photons in the UV and has highly suppressed photon couplings in the IR.

CODEX-b is expected to have a potentially interesting reach in all these cases. This is true with the nominal design provided the ALP has a sizable branching fraction into visible final states, while for ALPs decaying to photons one would require a calorimeter element, as discussed below in Sect. 5.3. In this section we present updated reach plots for BC10 and BC11, and leave the ALP with photon couplings (BC9) and the photophobic case for future study. ALPs coupled to quarks and gluons can be copiously produced at the LHC even when their couplings are suppressed enough to induce macroscopic decay lengths.
They therefore provide an excellent target for LLP experiments such as CODEX-b. Based on the fragmentation of partons to hadrons in LHC collisions, we can divide ALP production into four different mechanisms:

1. radiation during partonic shower evolution (using the direct ALP couplings to quarks and/or gluons);
2. production during hadronization of quarks and gluons, via mixing with (J^{PC} =) 0^{−+} q̄q operators (dominated at low ALP masses by mixing with π⁰, η, η′);
3. production in hadron decays, via mixing with neutral pseudoscalar mesons; and
4. production in flavor-changing neutral current bottom and strange hadron decays, via loop-induced flavor-violating penguins.
The last mechanism has already been considered extensively in the literature. The ALP production probability scales parametrically as (m_t/Λ)² and is proportional to the number of strange or bottom hadrons produced. In general, the population of ALPs produced by this mechanism is not very boosted at low pseudorapidities. For the PBC study, it was the only production mechanism considered for BC10, and it was included in BC11.
The second and third mechanisms are related, as they both encode how the ALP couples to low-energy QCD degrees of freedom. Conventionally the problem is rephrased in terms of ALP mixing with the neutral pseudoscalar mesons. This production is parametrically suppressed by (f_π/Λ)² and quickly dies off for ALP masses much above 1 GeV. The population of ALPs produced by these mechanisms is not very boosted at low pseudorapidities, while the forward experiments will have access to very energetic ALPs. Compared to the PBC study, we treat the cases of hadronization and hadron decays separately, as they give rise to populations of ALPs with different energy distributions, and include them both in BC10 and BC11.
Finally, the first mechanism listed above has so far been overlooked in the literature. Emission in the parton shower can, however, be the most important production mechanism at transverse LHC experiments such as CODEX-b. Emission of (pseudo)scalars exhibits neither collinear nor soft enhancements, such that ALPs emitted in the shower may carry an O(1) fraction of the parent energy and can be emitted at large angles.
For the case of quark-coupled ALPs (BC10), emission in the parton shower is suppressed by the quark mass, a consequence of the soft pion theorem, i.e. by m_q²/Λ² (or by loop factors for the induced gluon coupling, as below). The shower contribution may nevertheless still dominate at high ALP masses, where the other production mechanisms are forbidden by phase space or kinematically suppressed. For gluon-coupled ALPs (BC11), however, no such suppression arises in the shower. In a parton shower approximation, the ALP emission is attributed to a single parton with a given probability: while interference terms between ALP emissions from adjacent legs, e.g. in g → gga, cannot be neglected, this approximation still captures the bulk of the production, even when the ALP is not emitted in the soft limit.
The parton shower approximation greatly simplifies the description of ALP emission, allowing its implementation in existing Monte Carlo tools. In this approximation, the probability for a parton to fragment into an ALP scales parametrically as Q²/Λ², with Q of the order of the virtuality of the parent parton; the g → ga splitting function, for example, grows with t = Q². While the population of partons with large energies is much smaller than the final number of hadrons, the production rate is enhanced by a large O(Q²/f_π²) factor compared to the second and third mechanisms. In LHC collisions this is sufficient to produce a large population of energetic ALPs at low pseudorapidities, with boosts exceeding 10³ for ALP masses in the 0.1-1 GeV range. The CODEX-b reach can therefore be extended to higher ALP masses and larger couplings compared to previous estimates, provided very collimated LLP decays can be detected.
We estimate these production mechanisms using Pythia 8, with the code modified to account for the production of ALPs during hadronization. We include ALP production in hadron decays by extending the decay tables such that, for each decay mode containing a π⁰, η or η′ meson in the final state, we add another entry with that meson substituted by the ALP. The branching ratio is rescaled by the ALP mixing factor and the phase space difference.
ALP production from the shower is computed by navigating the generated QCD shower history: for each applicable parton, an ALP is generated by re-decaying that parton with a weight given by the ratio of the integrated ALP branching probability to the total integrated (SM + ALP) branching probability. This is correct for time-like showers in the limit that the ALP branching probability is small, because in this limit the branching scale is still controlled by the SM Sudakov factor. The procedure is not applicable to space-like showers without also incorporating information from parton distribution functions. Such space-like showers, however, provide only a sub-leading contribution to transverse production, i.e. at the low pseudorapidities of the CODEX-b acceptance, and we therefore neglect them. For forward experiments at the LHC, such as FASER2, we do not include any shower contribution in the reach estimates, since a more complete treatment is required to fully estimate ALP production at high pseudorapidities, and the effect is expected to be at most O(1). For the case of a fermion-coupled ALP we include both the emission from heavy quark lines, proportional to (m_q c_q/Λ)², and from the loop-induced coupling to gluons, taking c_G = N_f c_q/32π² in Eq. (4) above [70], where N_f is the number of flavors. Further details will be given in upcoming work.
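The per-parton re-decay step described above amounts to an accept-reject on the ratio of integrated branching probabilities. A minimal sketch follows; the function and argument names are illustrative, and the actual implementation lives inside a modified Pythia 8 shower:

```python
import random

def alp_emission_weight(p_alp, p_sm):
    """Weight with which a shower parton is re-decayed into an ALP:
    the integrated ALP branching probability over the total
    (SM + ALP) integrated branching probability. Valid when
    p_alp << p_sm, so the branching scale is still set by the
    SM Sudakov factor."""
    return p_alp / (p_sm + p_alp)

def maybe_emit_alp(p_alp, p_sm, rng=random):
    """Decide stochastically whether this parton emits an ALP."""
    return rng.random() < alp_emission_weight(p_alp, p_sm)
```

In practice one would loop over the recorded shower branchings, evaluate the integrated probabilities at each branching scale, and replace the parton's decay products with an ALP emission when the draw succeeds.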
The reach predictions are shown in Fig. 11 for a fermion-coupled ALP (BC10) and in Fig. 12 for an ALP coupled to gluons (BC11), for the nominal, tracker-only CODEX-b design. In this case an ALP decaying only to neutral particles such as photons is invisible, and highly boosted ALPs may decay to merged tracks, such that the signature more closely resembles a single appearing track inside the detector volume. For such a signature the CODEX-b baseline design is not background-free; we use the background estimates presented in Table 3 (see Sect. 3 below), corresponding to 50 background events in the entire detector in 300 fb⁻¹. The CODEX-b reach with a calorimeter option (shown as a dashed line) is further discussed in Sect. 5.3.
The MATHUSLA estimates and the CHARM exclusion in Fig. 12 have been recomputed, while all other curves are taken from [25], after rescaling them to the lifetime and branching ratio expressions used in our plots. For MATHUSLA we used the 200 m × 200 m configuration and assumed that a floor veto for upward-going muons entering the decay volume is available with a rejection power of 10⁵. Based on the estimate of 10⁷ upward-going muons [23,72], we therefore used 100 events as the background for unresolved, highly boosted ALPs.
For the case of the fermion-coupled ALP we have further improved the lifetime and branching ratio calculations compared to those used in Refs. [25,73], by including the partial widths of the ALP into light QCD degrees of freedom, using the same procedure as in Ref. [74]. The result is shown in Fig. 13. In particular, in the 1 GeV ≲ m_a ≲ 3 GeV range, for a given coupling the ALP lifetime is O(10) smaller than previously assumed, and the decays are mostly to hadrons instead of muon pairs.

Fig. 11 Reach for the fermion-coupled ALP (BC10). The left panel corresponds to the normalization used here (see Sect. 3), while the one on the right corresponds to the normalization used in the PBC study [25]. The baseline (tracker-only) CODEX-b design is shown as solid, while the gain from a calorimeter option is shown as dashed. All curves for the other experiments except MATHUSLA are taken from [25], after rescaling to the different lifetime/branching ratio calculation used here. The MATHUSLA reach is based on our estimates; see text for details

Fig. 12 Reach for the gluon-coupled ALP (BC11). The left panel corresponds to the normalization used here (see Sect. 3), while the one on the right corresponds to the normalization used in the PBC study [25]. The baseline (tracker-only) CODEX-b design is shown as solid, while the gain from a calorimeter option is shown as dashed. See Fig. 34 for further information about how the CODEX-b reach changes with different detector designs. The FASER2 and REDTop curves are taken from [25], after rescaling to the different lifetime/branching ratio calculation used here. The CHARM curve has been recomputed with the same assumptions used for the CODEX-b curve. The MATHUSLA reach is based on our estimates; see text for details

Heavy neutral leptons
Heavy neutral leptons (HNLs) may generically interact with the SM sector via the lepton Yukawa portal, mediated by the marginal operator L̄_i H̃ N, or may feature in a range of simplified NP models coupled to the SM via various higher-dimensional operators. In the m_N ∼ 0.1-10 GeV regime that we consider below, these models can be motivated e.g. by explanations of the neutrino masses [75], the νMSM [76,77], dark matter theories [78], or by models designed to address various recent semileptonic anomalies [79-81].
UV completions of SM-HNL operators typically imply an active-sterile mixing, ν_ℓ = Σ_j U_{ℓj} ν_j + U_{ℓN} N, where ν_j and N are mass eigenstates, and U is an extension of the PMNS neutrino mixing matrix that incorporates the active-sterile mixings U_{ℓN}. If the |U_{ℓN}| are the dominant couplings of N to the SM and N has a negligible partial width to any hidden sector, then the N decay width is electroweak suppressed, scaling as Γ_N ∝ G_F² m_N⁵ |U_{ℓN}|². Because the mixing |U_{ℓN}| can be very small, N can then become long-lived. For the sake of simplicity we assume hereafter that N couples predominantly to only a single active neutrino flavor, i.e. U_{ℓ′N} = δ_{ℓℓ′} U_{ℓN}, and refer to ℓ as the 'valence' lepton.
The width of the HNL can be written as a sum over the kinematically allowed channels, Γ_N = s Σ_X Γ(N → X), where s = 1 (s = 2) for a Dirac (Majorana) HNL, and the final states of the form ℓM or νM contain a single kinematically allowed (ground-state) meson M. Specifically, M ranges over: the charged pseudoscalars π±, K±; the neutral pseudoscalars π⁰, η, η′; the charged vectors ρ±, K*±; and the neutral vectors ρ⁰, ω, φ. For m_N > 1.5 GeV, we switch from the exclusive meson final states to the inclusive decay widths ℓ_i q q̄′ and ν_i q q̄, which are disabled below 1.5 GeV. Expressions for each of the partial widths may be found in Ref. [82]; each is mediated by either the W or Z, generating long lifetimes for N once one requires |U_{ℓN}| ≪ 1. Apart from the 3ν mode and some fraction of the νM and νqq̄ (e.g. νπ⁰π⁰) decay modes, all N decays involve two or more tracks, so that the decay vertex will be reconstructible in CODEX-b, up to O(1) reconstruction efficiencies. We model the branching ratio to multiple tracks by considering the decay products of the particles produced. Below 1.5 GeV, we consider the decay modes of the meson M to determine the frequency of final states with two or more charged tracks; above 1.5 GeV, where νqq̄ production is considered instead of the exclusive single-meson modes, we conservatively approximate the frequency of having two or more charged tracks as 2/3.
HNLs may be abundantly produced by leveraging the large bb̄ and cc̄ production cross-sections times branching ratios into semileptonic final states. In particular, for 0.1 GeV ≲ m_N ≲ 3 GeV, the dominant production modes are the typically fully inclusive c → s ℓ N and b → c ℓ N. In order to capture mass threshold effects, production from these heavy-flavor semileptonic decays is estimated by considering a sum of exclusive modes. The hadronic form factors are treated as constants, an acceptable estimate for these purposes, as corrections are expected to be small, ∼ Λ_QCD/m_{c,b}. In certain kinematic regimes the on-shell (Drell-Yan) W^(*) → ℓN or Z^(*) → νN channels can become important, as can the two-body D_s → ℓN and B_c → ℓN decays (a prior study of HNLs at CODEX-b in Ref. [83] neglected the latter contributions). In our reach projections, we assume the production cross-section σ(bb̄) ≈ 500 μb, and σ(cc̄) is taken to be 20 times larger, based on FONLL estimates [84,85]. The EW production cross-sections used are σ(W → ℓν) ≈ 20 nb and Σ_j σ(Z → ν_j ν̄_j) ≈ 12 nb [86]. The σ(D_s)/σ(D) production fraction is taken to be 10% [87,88], and we assume a production fraction σ(B_c)/σ(B) ≈ 2 × 10⁻³ [89,90].
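For orientation, the quoted cross-sections translate into enormous heavy-flavor yields over a 300 fb⁻¹ dataset; the effective branching fraction below is a purely illustrative placeholder, not a model prediction:

```python
# Heavy-flavor yields from the cross-sections quoted in the text
lumi_fb = 300.0                  # integrated luminosity, fb^-1
sigma_bb_fb = 500e-6 * 1e15      # sigma(b bbar) ~ 500 microbarn, in fb
sigma_cc_fb = 20.0 * sigma_bb_fb # sigma(c cbar) taken 20x larger (FONLL)

n_bb = sigma_bb_fb * lumi_fb     # number of b-pair events
n_cc = sigma_cc_fb * lumi_fb     # number of c-pair events

# Hypothetical effective branching fraction into an HNL; in a real
# projection this would combine BR(semileptonic) with |U_N|^2 and
# phase-space factors
br_eff = 1e-10
n_hnl_from_b = 2 * n_bb * br_eff  # factor 2: either heavy hadron can decay
```

Even a tiny effective branching fraction thus yields tens of thousands of produced HNLs, before folding in the geometric decay probability.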
In the case of a τ valence lepton with m_N < m_τ, the HNL may be produced not only in association with the τ, but also as its daughter. For example, b → c τ N and b → c (τ → N e ν_e) ν are comparable production channels. When kinematically allowed, we approximate this effect by including, for the valence-τ case, an additional factor of 1 + BR(τ → N + X)/|U_{τN}|², where BR(τ → N + X) is the HNL-mass-dependent branching ratio of the tau into a valence-τ HNL plus anything [82]. HNL production from Drell-Yan τ's is also included, but is typically sub-leading: the relevant production cross-section is estimated with MadGraph [91] to be σ(τ_DY) ≈ 37 nb.
The projected sensitivity of CODEX-b to HNLs in the single-flavor mixing regime is shown in Fig. 14. The breakdown in terms of the individual production modes is shown in the left panels, while the right panels compare the CODEX-b sensitivity with constraints from prior experiments, including BEBC [92], PS191 [93], CHARM [94-96], JINR [97], NuTeV [98], DELPHI [99], and ATLAS [100] (shown collectively as gray regions). Also included are the projected reaches of other current or proposed experiments, including NA62 [101], DUNE [102], SHiP [17], FASER [103], and MATHUSLA [83]. We adopt the Dirac convention in all our reach projections; the corresponding reach for the Majorana case is typically almost identical, though the relevant exclusions may change.⁶

Complete models
The LLP search program at the LHC is extensive and rich. In the context of complete models, it has so far been driven primarily by searches for weak-scale supersymmetry, along with searches for dark matter, mechanisms of baryogenesis, and hidden valley models. In this section, we review the part of the theory space relevant for CODEX-b, which is typically the most difficult to access with the existing experiments. A comprehensive overview of all known possible signatures is neither feasible nor necessary, the latter thanks to the inclusive nature of the signatures considered.

R-parity violating supersymmetry
The LHC has placed strong limits on supersymmetric particles in a plethora of different scenarios. The limits are especially strong if the colored superpartners are within the kinematic range of the collider. If this is not the case, the limits on the lightest neutralino (χ̃⁰₁) are remarkably mild, especially if the lightest neutralino is mostly bino-like. In this case the χ̃⁰₁ can still reside in the ∼ GeV mass range, and be arbitrarily separated in mass from the lightest chargino. Such a light neutralino must be unstable to prevent it from overclosing the universe, which is the case if R-parity is violated [106]. The χ̃⁰₁ then decays through an off-shell sfermion to SM particles, through a potentially small R-parity-violating coupling. The combination of these effects typically yields a macroscopic χ̃⁰₁ proper lifetime. The sensitivity of CODEX-b to this scenario was recently studied for χ̃⁰₁ production through exotic B and D decays [107], as well as from exotic Z⁰ decays [83]. Dercks et al. [107] considered five benchmarks, corresponding to different choices for the coupling matrix λ′_{ijk}, each with a different phenomenology. We reproduce here their results for benchmarks 1 and 4, and refer the reader to Ref. [107] for the remainder. The parameter choices, production modes and main decay modes are summarized in Table 1. The reach of CODEX-b is shown in Fig. 15. In both benchmarks, CODEX-b would probe more than two orders of magnitude in the coupling constants. For benchmark 4 the reach would increase substantially if the detector were capable of detecting neutral final states by means of some calorimetry.
The above results assume the wino and higgsino multiplets are heavy enough to be decoupled from the phenomenology. This need not be the case. For instance, the current LHC bounds allow for a higgsino as light as ∼ 150 GeV [108], as long as the wino is kinematically inaccessible and the bino decays predominantly outside the detector. In this case, the mixing of the bino-like χ̃⁰₁ can be large enough to induce a substantial branching ratio for the Z → χ̃⁰₁ χ̃⁰₁ process. Helo et al. [83] showed that the reach of CODEX-b would exceed the Z → invisible bound for 0.1 GeV < m_{χ̃⁰₁} < m_Z/2 and 10⁻¹ m < cτ < 10⁶ m, as shown in Fig. 16. The reach is independent of the flavor structure of the RPV coupling(s), so long as the branching ratio to final states with at least two charged tracks is unsuppressed. It should be noted that the ATLAS searches in the muon chamber [46,47] are expected to have sensitivity to this scenario, although no recast estimate is currently available. As with the exotic Higgs decays in Sect. 2.3.1, the expectation is, however, that CODEX-b would substantially improve upon the ATLAS reach at low m_{χ̃⁰₁}.

Relaxion models
Relaxion models rely on the cosmological evolution of a scalar field, the relaxion, to dynamically drive the weak scale towards an otherwise unnaturally low value [109]. The relaxion sector must therefore be in contact with the SM electroweak sector, and the implications of relaxion-Higgs mixing have been studied extensively [109-113]. The phenomenological constraints were mapped out in detail in Refs. [114,115] (see [52] for similar phenomenology in a model where the light scalar is identified with the inflaton). Following the discussion in Ref. [105], the phenomenologically relevant physics of the relaxion, φ, is contained in a periodic potential term (Eq. (8)), in which h is the real component of the SM Higgs field that obtains a vacuum expectation value v, Λ is the cut-off scale of the effective theory, Λ_N is the scale of a confining hidden sector, f is the scale at which a UV U(1) symmetry is spontaneously broken, and C and δ are real constants. After φ settles into its vacuum expectation value, φ₀, Eq. (8) can be expanded in large φ₀/f, with λ = C Λ_N³/v². The model in Eq. (9) then maps directly onto the scalar-Higgs portal in Eq. (2) of Sect. 2.3.2. CODEX-b and other intensity- and/or lifetime-frontier experiments can then probe the model in the regime λ ∼ 1 and f ∼ TeV. The angle φ₀/f + δ controls whether the mixing or the quartic term is most important: on the one hand, if it is small, the lifetime of φ increases but the quartic in Eq. (9) can be sizable, enhancing the h → φφ branching ratio (Fig. 8b). On the other hand, for φ₀/f + δ ≈ π/2 the quartic is negligible and the phenomenology is simply that of a scalar field mixing with the Higgs (Fig. 8a).

Neutral naturalness
The Abelian hidden sector model in Sect. 2.3.1 has enough free parameters to set the mass (m_A′), the Higgs branching ratio (Br(h → A′A′)) and the width (Γ_A′) independently. It therefore allows for a very general parametrization of the reach for exotic Higgs decays in terms of the lifetime, mass and production rate of the LLP. The downside of this generality is that the model has too many independent parameters to be very predictive. In many models, however, the lifetime depends very strongly on the mass, favoring long lifetimes for low-mass states. We therefore provide a second, more constrained example in which the lifetime is not a free parameter.

Table 1 Summary of two of the five benchmark models considered in Ref. [107], listing the coupling, production modes and decay products
The example we choose is the fraternal twin Higgs [116], a recent incarnation of the twin Higgs paradigm [117,118], designed to address the little hierarchy problem. It is itself an example of a hidden valley [119,120]. The model consists of a dark or "twin" sector containing SU(2)′ × SU(3)′ gauge symmetries, counterparts of the SM weak and color gauge groups. It further contains a dark b-quark and a number of heavier states which are phenomenologically less relevant. The most relevant interactions involve the SM Higgs doublet H and the dark-sector Higgs doublet H′. The "twin quarks" q′_L, b′_R and t′_R are dark-sector copies of the third-generation quarks.
The Higgs potential of this model has an accidental SU(4) symmetry, which protects the Higgs mass at one loop provided that y′_t ≈ y_t, with y_t the SM top Yukawa coupling. The corresponding top partner, the "twin top", carries color charge under the twin sector's SU(3)′ rather than SM color, and is therefore not subject to existing collider constraints from searches for colored top partners. The accidental symmetry exchanging H ↔ H′ may further be softly broken, such that ⟨H⟩ = v and ⟨H′⟩ ≈ f. The parameter f is typically expressed in terms of the mass of the twin top quark, m_T, through the relation m_T = y′_t f/√2. The existing constraints on the branching ratios of the SM Higgs already demand m_T/m_t ≳ 3 [121].
We consider the scenario in which the b′ mass is heavier than the dark SU(3)′ confinement scale, Λ′, such that the lightest state in the hadronic spectrum is the 0⁺⁺ glueball [122,123], with mass m₀ ≈ 6.8 Λ′. The 0⁺⁺ glueball mixes with the SM Higgs boson through a higher-dimensional operator coupling the physical Higgs boson h to the twin gluon field strength, with α′₃ the twin QCD gauge coupling. After mapping the twin gluon operator to the low-energy glueball field, this leads to a very suppressed decay width of the 0⁺⁺ state, even for moderate values of m_t/m_T. In particular, the lifetime is a very strong function of the mass, scaling roughly as cτ ∝ m_T⁴/m₀⁷ (Eq. (12)). This is naturally in the range where displaced detectors like CODEX-b, AL3X and MATHUSLA are sensitive. The full lifetime curve is shown in the left-hand panel of Fig. 17, where we have accounted for the running of α′₃, as in Refs. [116,124].

For simplicity we assume that the second Higgs is too heavy to be produced in large numbers at the LHC, as is typical in composite UV completions. Even in this pessimistic scenario, however, the SM Higgs has a substantial branching ratio to the twin sector. Specifically, it has a branching ratio of roughly ∼ m_t²/m_T² for the h → b′b̄′ channel. The b′ quarks subsequently form dark quarkonium states, which in turn can decay to the lightest hadronic states in the hidden sector. While this branching ratio is large, the phenomenology of the dark quarkonium depends on the detailed spectrum of twin quarks (see e.g. Ref. [124]). There is, however, a smaller but more model-independent branching ratio of the SM Higgs directly to twin gluons, given by [125] Br[h → g′g′] ≈ (v²/f²)(α′_s(m_h)/α_s(m_h))² Br[h → gg], with Br[h → gg] = 0.086, where α_s(m_h) and α′_s(m_h) are the strong couplings in the SM and twin sectors, respectively, evaluated at m_h. The hidden glueball hadronization dynamics is not known from first principles, and we have assumed that each Higgs decay to the twin sector produces on average two 0⁺⁺ glueballs.
Especially at the rather low m 0 of interest for CODEX-b, this is likely a conservative approximation.
The projected reach of CODEX-b, MATHUSLA and ATLAS is shown in the right-hand panel of Fig. 17. The projections for ATLAS were obtained as in Sect. 2.3.1. The high-mass, short-lifetime regime may be covered with new tagging algorithms for the identification of merged jets at LHCb [126,127]. We find that CODEX-b would significantly extend the reach of ATLAS for models of neutral naturalness. For hidden glueballs, the factor of ∼ 30 larger geometric acceptance times luminosity for MATHUSLA results in only roughly a factor of ∼ 2 more reach in m₀ for fixed m_T, because of the scaling in Eq. (12). For higher glueball masses, CODEX-b outperforms MATHUSLA due to its shorter baseline; this region, however, will likely be covered by ATLAS.
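The weak dependence of the mass reach on the exposure can be made explicit. Assuming, for illustration, that the glueball lifetime falls like the seventh power of m₀ (so that the expected event yield grows as the seventh power of the mass at long lifetimes), a gain in acceptance times luminosity buys only the seventh root of that gain in m₀ reach:

```python
acceptance_gain = 30.0   # MATHUSLA vs CODEX-b: geometric acceptance x luminosity
power = 7.0              # assumed c*tau ~ m0^(-7) scaling for the 0++ glueball
m0_reach_gain = acceptance_gain ** (1.0 / power)
# roughly 1.6: a factor ~30 in exposure buys only a factor ~2 in m0 reach
```

This back-of-the-envelope scaling is why the much larger MATHUSLA exposure translates into such a modest gain in glueball mass reach.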
In summary, this hidden glueball model serves to illustrate an important point: for light hidden-sector states, the lifetime often scales as a strong inverse power of the mass, as illustrated by Fig. 17. For ATLAS and CMS, this means that the standard background rejection strategy of requiring two vertices becomes extremely inefficient for such light hidden states. Instead, displaced detectors like CODEX-b, MATHUSLA and FASER are needed to cover the low-mass part of the parameter space.

Inelastic dark matter
Berlin and Kling [128] have studied the reach of various (proposed) LLP experiments in the context of a simple model for inelastic dark matter [129,130]. The ingredients are two Weyl spinors with opposite charges under a dark, higgsed U(1) gauge interaction. In the low-energy limit, the model contains a dark gauge boson A′ coupled off-diagonally to a pair of pseudo-Dirac fermions χ₁ and χ₂, together with a kinetic mixing term between the dark gauge boson and the SM photon; the remaining, sub-leading terms do not significantly contribute to the phenomenology. The pseudo-Dirac fermions χ₁ and χ₂ are naturally close in mass, which leads to a phase-space suppression of the width of χ₂. The fractional mass difference is parameterized by Δ ≡ (m₂ − m₁)/m₁ ≪ 1. At the LHC, production occurs through qq̄ → A′ → χ₂χ̄₁, controlled by the kinetic mixing parameter ε. The decay width of χ₂ scales as Γ_{χ₂} ∝ ε² α α_D Δ⁵ m₁⁵/m_{A′}⁴, where α_D = e_D²/4π is the dark gauge coupling. CODEX-b, MATHUSLA, FASER and the existing LHC experiments can search for the pair of soft, displaced fermions from the χ₂ decay. The expected sensitivity of the various experiments is shown in Fig. 18 for an example slice of the parameter space. In particular, CODEX-b will be able to probe a large fraction of the parameter space that produces the observed dark matter relic density, as indicated by the black line in Fig. 18. It is worth noting that for this model the minimum energy threshold per track is an important parameter in determining the reach, which should inform the design of the detector. For more benchmark points and details regarding the cosmology, we refer to Ref. [128].
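The steep dependence of the χ₂ lifetime on the fractional splitting Δ can be sketched with a scaling-only function; the O(1) numerical prefactor of the width is deliberately omitted here, so only ratios between parameter points are meaningful:

```python
def chi2_width_scaling(eps, alpha_em, alpha_d, delta, m1, m_aprime):
    """Parametric scaling of the chi_2 width,
    Gamma ~ eps^2 * alpha * alpha_D * Delta^5 * m1^5 / m_A'^4,
    with the O(1) prefactor omitted (ratios only)."""
    return eps**2 * alpha_em * alpha_d * delta**5 * m1**5 / m_aprime**4

# Halving the splitting Delta lengthens the lifetime by 2^5 = 32
ratio = (chi2_width_scaling(1e-3, 1 / 137, 0.1, 0.10, 5.0, 15.0)
         / chi2_width_scaling(1e-3, 1 / 137, 0.1, 0.05, 5.0, 15.0))
```

The Δ⁵ behavior is what makes the lifetime, and hence the preferred detector baseline, so sensitive to the mass splitting.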

Dark matter coscattering
The process of coscattering [131,132] has been studied as a way to generate the correct relic DM abundance. Coscattering has a framework similar to that of coannihilating dark matter models: both contain at least one dark matter particle χ, a second state ψ charged under the Z₂ symmetry of the dark sector, and a third particle X that allows the two to transition into one another via an interaction such as a Yukawa coupling, y Xχψ. In many coannihilation scenarios ψψ ↔ XX (or SM) is an efficient annihilation mechanism, while χχ, χψ ↔ XX (or SM) are not. Throughout the coannihilation, the "coscattering" process ψX ↔ χX (or similar) remains efficient and allows the χ and ψ species to interchange without changing the dark particle number. Eventually, ψψ ↔ XX freezes out, and the total dark particle number is fixed.
By contrast, one may consider coscattering DM [131], in which the ψX ↔ χX coscattering process drops out of equilibrium before the ψψ ↔ XX coannihilation process. This requires three ingredients: m_X ∼ m_ψ ∼ m_χ; a large ψψ ↔ XX cross-section; and a small ψX ↔ χX cross-section. As χ has, by assumption, no sizable interactions other than with ψ, there are no interactions beyond ψX ↔ χX that allow χ to maintain a thermal distribution while it is in the process of decoupling from the thermal bath. This results in important non-thermal corrections that require tracking the full phase space density, rather than just the particle number n_χ, in order to correctly evaluate the relic abundance [131].
The vector portal model we consider throughout the rest of this subsection is similar to the one in Sect. 2.4.4. Here we introduce a new U(1)_D gauge group with fairly strong couplings, a scalar φ charged under the U(1)_D that obtains a VEV, a Dirac spinor χ₂ charged under the gauge group, and a second Dirac spinor χ₁ that is not. The scalar VEV ⟨φ⟩ gives a mass to the dark vector Z_D and generates a small mixing between the U(1)_D-active χ₂ and the sterile χ₁. For simplicity, we set y ≡ y₁₂ = y₂₁. When Δm ≡ m₂ − m₁ ≫ y⟨φ⟩, a small mixing angle θ ≈ y⟨φ⟩/Δm is generated. We assume that m_φ ≫ m_{Z_D}, so that when y ≪ g_D the phenomenology is insensitive to the presence of the scalar.
The mixing of Z_D with the Z boson allows for the decay Z → χ₂χ̄₂. The daughter χ₂ particles from the Z decay can propagate several meters before decaying to χ₁ through an off-shell dark photon, i.e. χ₂ → χ₁ f f̄, where 'f f̄' indicates a pair of SM fermions, which CODEX-b can detect. The decay rate is dictated by the splitting between the two states. Starting from the partial width to electrons (neglecting the electron mass), we approximate the total lifetime by dividing this partial width by BR(Z_D → ee; Δm), the branching ratio into ee of a kinetically mixed dark vector of mass Δm. This approximates the inclusion of the additional accessible final states, as the splittings in this model are commonly O(GeV). While a more thorough treatment would integrate over phase space for each massive channel separately, this approximation captures the leading effect to well within the precision desired here. Additionally, χ₂ pairs can be produced directly through an off-shell Z_D; because the Z_D is off-shell, this does not give a large contribution unless m_{χ₂} ≳ 10 GeV. This model thus provides a scenario containing an exotic Z decay into long-lived particles.
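The lifetime approximation just described, promoting the electron-channel partial width to a total width via the dark photon's branching ratio to electrons at the mass splitting, can be sketched as follows (the partial width and branching ratio inputs are placeholders, to be supplied by a full calculation):

```python
HBAR_C_GEV_M = 1.97327e-16  # hbar*c in GeV*meters

def ctau_from_ee_channel(gamma_ee_gev, br_zd_to_ee):
    """Approximate total c*tau (in meters) of chi_2 by promoting the
    partial width to electrons to a total width via
    Gamma_total ~ Gamma_ee / BR(Z_D -> ee; delta_m)."""
    gamma_total = gamma_ee_gev / br_zd_to_ee
    return HBAR_C_GEV_M / gamma_total
```

The smaller the electron branching ratio of the equivalent dark photon, the more additional channels are open, and the shorter the resulting total lifetime.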
In Fig. 19 we show the projected sensitivity of CODEX-b (shaded) and MATHUSLA (dashed) [105] to the model, setting ε = 10⁻³, m_{Z_D} = 0.6 m_{χ₁}, and for two choices of α_D = 1 and 4π (green and red, respectively). With these parameters fixed, the choice of sin θ fixes the mass splitting through the DM relic abundance criterion. At small masses, the χ̄₂χ₁ ↔ Z_D Z_D coannihilation process remains in equilibrium long enough to deplete the χ₁ number density below the relic abundance today. This region is indicated by the dark hatched lines.

Dark matter from sterile coannihilation
D'Agnolo et al. [133] have explored the mechanism of sterile coannihilation, in which the number density in the dark sector is set by the annihilation of states that are heavier than the dark matter. In this scenario the dark matter remains in chemical equilibrium with these heavy states until after their annihilation process freezes out, which naturally allows for much lighter dark matter than in standard thermal freeze-out models.
Concretely, in the example model considered in Ref. [133], a parameter δm ≪ m_ψ, m_χ generates a small mixing between ψ and χ. For the choice m_ψ ≳ m_χ > m_φ, the relic density of χ is effectively set by ψψ → φφ annihilations. Finally, φ is assumed to mix with the SM Higgs, and it is this coupling which keeps the dark sector in thermal equilibrium with the SM sector. For a summary of the direct detection and cosmological probes of this model, we refer to Ref. [133]. From a collider point of view, the most promising way to probe the model is to search for the scalar φ through its mixing with the Higgs. This scenario is identical to the scalar-Higgs portal model with λ = 0, discussed in Sect. 2.3.2. Figure 20 shows the projected reach for CODEX-b, overlaid with the relevant constraints and projections from dark matter direct detection and CMB measurements.

Asymmetric dark matter
In many asymmetric dark matter models, the DM abundance is directly tied to the matter-antimatter asymmetry in the SM sector [134–136]. The generic expectation is therefore that the DM carries B − L quantum numbers and has a mass of order a few GeV. For this mechanism to operate, the DM sector interactions with the SM should be suppressed, with both sectors communicating in the early universe through operators built from O_X and O_SM, operators consisting of dark sector and SM fields respectively, with Δ_X and Δ_SM their respective operator dimensions. In supersymmetric models of asymmetric dark matter [134], the simplest operators in the superpotential are built from the chiral superfield X ≡ x̃ + θx containing the DM, denoted by x. The phenomenology of this scenario is very similar to that of RPV supersymmetry, with decay chains such as χ̃^0 → x u^c d^c d^c (see Sect. 2.4.1). To accommodate the correct cosmology, macroscopic lifetimes cτ ∼ 10 m are typically required [135,137]. Moreover, x itself may or may not be stable, depending on the model. More generally, if the dark sector has additional symmetries and multiple states in the GeV mass range, as occurs naturally in hidden valley models with asymmetric dark matter (see e.g. Refs. [119,138]), these excited states often must decay to the DM plus some SM states. Such decays must necessarily occur through higher dimensional operators, and macroscopic lifetimes are therefore generic. As for the previous portals, LLP searches in the GeV mass range are best suited to displaced, background-free detectors such as CODEX-b.

Other dark matter models
There are many other dark matter models that could provide signals observable with CODEX-b. Presenting projections for all possibilities is beyond the scope of this work, but here we briefly summarize many of the existing scenarios that can provide long-lived particles. Below we detail: SIMPs, ELDERs, co-decaying DM, dynamical DM, and freeze-in DM.
Strongly interacting massive particles (SIMPs) [139–142] obtain their relic density through a 3 → 2 annihilation process mediated by a strong, hidden sector force. The preferred mass scale in this scenario is the GeV scale, and the strongly coupled nature of the hidden sector implies the presence of a whole spectrum of dark pions, dark vector mesons, etc. The 3 → 2 annihilation process, however, heats up the dark sector in the early universe, which would drive the dark matter to be exponentially hotter than the SM if it were completely decoupled. Since the dark matter is known to be cold, there must exist a sufficiently strong portal keeping both sectors in thermal equilibrium. These interactions predict a variety of signatures, both in conventional dark matter detection experiments and at colliders. In this sense SIMP models provide further motivation for exploring the hidden valley framework: at the LHC, it is possible to produce the hidden quarks through the aforementioned portal (e.g. a kinetically mixed dark photon), which would subsequently shower and fragment into hidden mesons with masses around the GeV scale. Some of these mesons are stable and invisible, such as the dark matter itself, while others decay back to the Standard Model, often with macroscopic displacements. This phenomenology is studied in detail in Refs. [143,144].
ELastically DEcoupling Relics (ELDERs) [145] share many of the features of the SIMP models, including the strong 3 → 2 annihilation process in the hidden sector and the mandatory portal with the SM to prevent the dark sector from overheating. In contrast to SIMP models, the elastic scattering processes between the dark sector and the SM freeze out before the end of the 3 → 2 annihilations, such that the dark matter cannibalizes itself for some time during the evolution of the universe. ELDER models are also examples of hidden valleys, and the collider phenomenology is therefore qualitatively similar to that of SIMP models.
In co-decaying dark matter models [146–148], the dark matter state, χ_1, is kept in equilibrium with a slightly heavier dark state, χ_2, through efficient χ_1 χ_1 ↔ χ_2 χ_2 processes in the early universe, but the dark sector does not maintain thermal equilibrium with the SM. The χ_2 state is, however, unstable and decays back to the SM. Because both states remain in equilibrium, this decay also depletes the χ_1 number density once the temperature of the dark sector drops below the mass of χ_2. For this mechanism to operate, χ_2 should have a macroscopic lifetime. On the one hand, the heavier χ_2 could very well be produced at the LHC through a heavy portal; however, this is not strictly required for the co-decaying dark matter mechanism to operate. On the other hand, if implemented in the context of e.g. neutral naturalness [149,150], a production mechanism at the LHC is typically predicted, and the phenomenology is once again that of a hidden valley.
In dynamical dark matter models [151][152][153], the dark sector contains a large ensemble of decaying dark states with a wide range of lifetimes. Their collective abundance makes up the DM abundance we see today, by balancing their share in the universe's energy budget against their lifetime. Some of the states in the ensemble are expected to have lifetimes that can be resolved on collider length scales. Just as for co-decaying dark matter, an observable cross-section at the LHC is possible but not required. If the dynamical dark sector can be accessed, however, the collider phenomenology is rich [154][155][156] and auxiliary, displaced, background-free LLP detectors can play an important role [157].
Finally, in freeze-in models [158], the dark matter is never in equilibrium with the SM sector; instead the dark sector is slowly populated through either scattering or the decay of a heavy state. This mechanism demands very weak couplings, which in the case of freeze-in through decay predicts a long-lived state decaying to DM plus a number of SM states. In the models considered so far, the preferred mass range for the decaying state tends to be in the 100 GeV to 1 TeV regime [159–162], such that ATLAS and CMS ought to be well equipped to find these decays. Should the final states prove difficult to resolve at ATLAS and CMS, however, or should the parent particle be lighter than currently predicted, CODEX-b could provide the means to probe these models.

Baryogenesis
There exists a wide range of models explaining the baryon asymmetry of the Universe, some of which reside in the deep UV, while others are tied to the weak scale, such as electroweak baryogenesis (see e.g. Ref. [163]) and WIMP baryogenesis [164,165]. The latter in particular predicts long-lived particles at the LHC, with a phenomenology qualitatively similar to displaced decays in RPV supersymmetry (see Sect. 2.4.1). We refer to Ref. [105] for a discussion of the discovery potential of WIMP baryogenesis at the lifetime frontier.
Instead we focus here in more depth on a recent idea which generates the baryon asymmetry through the CP-violating oscillations of heavy flavor baryons [166,167] (see Refs. [66,67] for similar ideas involving heavy flavor mesons and Ref. [168] for a supersymmetric realization). This enables very low reheating temperatures and is moreover directly testable by experiments such as CODEX-b, as well as Belle II.
The model relies on the presence of light Majorana fermions χ_1 and χ_2 which carry baryon number. (Two generations are needed to allow for CP violation.) They couple to the SM quarks through the operator in Eq. (23). The out-of-equilibrium condition necessary for baryogenesis can be satisfied, for instance, by a late decay of a third dark fermion to SM heavy flavor quarks. The operator in Eq. (23) generates a dimension-9, ΔB = 2 operator of the form (u_j d_k d_l)^2, which is responsible for the baryon oscillations. For these oscillations to be sufficiently large to generate the observed baryon asymmetry, one needs m_{χ_1,2} ≲ m_B, which has intriguing phenomenological consequences. Moreover, stringent constraints from the dinucleon decay of ¹⁶O force the relevant couplings to be small, implying long χ_{1,2} lifetimes. These low masses and long lifetimes are precisely where CODEX-b would have a substantial advantage over ATLAS, CMS and LHCb. The branching ratios for B baryons and mesons into χ_1,2 can even be as large as 10^{-3}, which means that the rate of χ_1,2 production at IP8 could be very large. Part of the parameter space of this model might therefore already be probed by CODEX-β during Run 3 (see Sect. 4.3).

Hidden valleys
Hidden Valley models [119] are hidden sectors with nontrivial dynamics, which can lead to a relatively large multiplicity of final states in decays of hidden particles. Confining hidden sectors provide a canonical example, because of their non-trivial spectrum of hidden sector hadrons and the "dark shower" that may arise when energy is injected into the hidden sector through a high energy portal. The phenomenology can vary widely [120,138,169–173], both in terms of the energy and angular distributions of the final states and the lifetimes of the dark sector particles. A handful of initial searches have already been performed at ATLAS, CMS, and LHCb (see e.g. Refs. [174–177]). The various opportunities afforded by the experiments, as well as the challenges involved in constructing a comprehensive search plan, were recently summarized in Ref. [7]. In particular:
• While the LLPs are typically in the ∼ GeV range, their lifetimes can easily take phenomenologically relevant values spanning many orders of magnitude. In the short lifetime regime, backgrounds can be suppressed by demanding multiple displaced vertices in the same event, provided that a suitable trigger can be found, but this strategy is much less effective in the long-lifetime regime.
• There are generically multiple species of LLPs, with vastly different lifetimes, and some may decay (quasi-)promptly or to a high multiplicity of soft final states. In practice, this means that a displaced decay to SM final states from a dark shower is likely to fail traditional isolation criteria or p_T thresholds, further complicating searches at ATLAS, CMS and LHCb.
• The energy flow in the event may be non-standard, and is poorly understood theoretically. This means that standard jet-clustering algorithms are expected to fail for a large subclass of models.
Because of both its ability to search inclusively for LLP decays and its background-free setup, CODEX-b is not limited by many of these challenges, and would be sensitive to any hidden valley model with at least one LLP species in the spectrum that has both a sizable branching fraction to charged final states and a moderately large lifetime, i.e. cτ ≳ 1 m. The latter requirement in particular makes CODEX-b highly complementary to ATLAS, CMS and LHCb in the context of these models, as the short lifetime regime can be probed with a multi-vertex strategy in the main detectors, provided that the putative trigger challenges can be addressed.

Backgrounds
Crucial to the CODEX-b programme is the creation and maintenance of a background-free environment. An in-depth discussion of relevant primary and secondary backgrounds may be found in Ref. [8] as well as Ref. [24].
In this section we re-examine the core features of the relevant backgrounds, and the active and passive shielding required to ensure a background-free environment in the detector. This study includes an updated and more realistic Geant4 simulation of the shielding response that incorporates uncertainties and charged-neutral particle correlations, as well as an updated simulation of the high energy tails of the primary backgrounds and a simulation of multitrack production in the detector volume. Further, we present the details and results of a measurement campaign conducted in the LHCb cavern in 2018 as a preliminary data-driven validation of these simulations.

Overview
An LHC interaction point produces a large flux of primary hadrons and leptons. Many of these may be fatal to a background-free environment, either because they are themselves neutral long-lived particles, e.g. (anti)neutrons and K_L^0's, that can enter the detector and then decay or scatter into tracks, or because they may generate such neutral LLP secondaries by scattering in material, e.g. muons, pions or even neutrinos. In the baseline CODEX-b design, LLP-like events are composed of tracks originating within the detector volume, with track momenta as low as 400 MeV. This threshold is conservative with respect to likely minimum tracking requirements for a signal (cf. Ref. [178]).
Suppression of primary hadron fluxes can be achieved with a sufficient amount of shielding material: roughly 10^14 neutrons and K_L^0's are produced per 300 fb^{-1} at IP8, requiring log(10^14) ≈ 32λ of shield for full attenuation, where λ is a nuclear interaction length. In the nominal CODEX-b design, the 3 m of concrete in the UXA radiation wall, corresponding to 7λ (footnote 7), is supplemented with an additional 25λ Pb shield, corresponding to about 4.5 m, as shown in Fig. 21. (We focus here on a shield comprised of lead, though composite shielding making use of e.g. tungsten might also be considered, with similar performance [24].) However, this large amount of shielding material may in turn act as a source of neutral LLP secondaries, produced by muons (or neutrinos, see Sect. 3.2.3) that stream through the shielding material. The most concerning neutral secondaries are those produced in the last few λ by high energy muons that themselves slow down and stop before reaching the detector veto layers. Such parent muons are not visible to the detector, while the daughter neutral secondaries, because they pass through only a few λ of shield, may escape the leeward side of the shield and enter the detector volume. We call these 'stopped-parent secondaries'; a typical topology is shown in Fig. 21.
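The attenuation arithmetic above can be checked in a few lines (a rough estimate, assuming purely exponential attenuation e^{−L/λ} of the quoted 10^14 neutral hadrons per 300 fb^{-1}):

```python
import math

primary_flux = 1e14            # neutral hadrons (n, K_L^0) per 300 fb^-1 at IP8
# Exponential attenuation: flux * exp(-d) < 1 surviving particle
# requires d > ln(flux) nuclear interaction lengths.
depth_needed = math.log(primary_flux)   # ~32 lambda
lambda_pb = 0.18               # m, nuclear interaction length in lead
pb_depth = 25 * lambda_pb      # ~4.5 m of Pb
total_depth_lam = 25 + 7       # lambda: 25-lambda Pb shield + 7-lambda UXA wall
print(f"need ~{depth_needed:.1f} lambda; design has {total_depth_lam} lambda "
      f"({pb_depth:.1f} m Pb + 3 m concrete)")
```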
As a rough example, a 10 GeV muon has a 'CSDA' (Continuous Slowing-Down Approximation) range in lead of approximately 6 m [68], corresponding to 32λ (λ_Pb ≈ 0.18 m). Approximately 10^9 such muons are produced per 300 fb^{-1} in the CODEX-b acceptance (see Sect. 3.2.1), and by the last few λ they have slowed to ∼ GeV kinetic energies.
Footnote 7: The UXA wall is 3.2 m in depth, corresponding to 7.5λ for 'standard' concrete [68]. Since the precise composition of the concrete used in the wall is not available, we conservatively treat the wall as 3 m of standard concrete.

Fig. 21: Cross-section of the shielding configuration: the Pb shield (gray), active shield veto (gold), and concrete UXA wall, with respect to IP8 and the detector volume. Also shown are typical topologies for production of upstream and downstream stopped-parent secondaries, which are suppressed by passive shielding or rejected by the active shield veto, respectively.

The strange muoproduction cross-section for a GeV muon is ∼ 0.01 μb per nucleon, so that in the last λ approximately a few × 10^3 K_L^0's are produced by these muons. The kaon absorption cross-section on a Pb atom is ∼ 2 b, so the reabsorption probability in the last λ is ∼ 30%, with the result that ∼ 10^3 stopped-parent secondary K_L^0's can still escape into the detector. This behavior is more properly modelled by a system of linear differential equations that capture the interplay of the muon dE/dx with the energy dependence of the secondary muoproduction cross-section and the (re)absorption cross-sections; in practice we simulate this with Geant4, as described below in Sect. 3.2.
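The order-of-magnitude estimate above can be reproduced directly (all inputs are the values quoted in the text; the mid-shield survival factor is a crude stand-in for the full differential-equation treatment):

```python
import math

# Back-of-envelope stopped-parent K_L^0 estimate: ~1e9 muons reach the last
# nuclear interaction length of the Pb shield, with a strange muoproduction
# cross-section of ~0.01 ub per nucleon.
n_muons   = 1e9        # muons per 300 fb^-1 in the CODEX-b acceptance
sigma_kl  = 1e-32      # cm^2 (= 0.01 ub) per nucleon, GeV-scale value
rho_pb    = 11.35      # g/cm^3, lead density
N_A       = 6.022e23   # Avogadro's number
lam       = 18.0       # cm, nuclear interaction length in Pb

nucleon_density = rho_pb * N_A                    # nucleons/cm^3
p_produce = nucleon_density * sigma_kl * lam      # per muon, over the last lambda
n_kl_produced = n_muons * p_produce               # "few x 10^3" in the text

# Reabsorption: ~2 b kaon absorption cross-section per Pb atom.
atom_density = rho_pb * N_A / 207.0               # Pb atoms/cm^3
mfp_kl = 1.0 / (atom_density * 2e-24)             # ~15 cm kaon mean free path
p_escape = math.exp(-0.5 * lam / mfp_kl)          # crude mid-point survival
print(f"~{n_kl_produced:.0f} produced, ~{n_kl_produced * p_escape:.0f} escape")
```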
The CODEX-b proposal resolves this secondary background problem by adding a veto layer deep inside the shield itself: an active shield element, shown in gold in Fig. 21. This veto layer can then trigger on the parent muons before they produce neutral secondaries and stop. The veto layer must be placed deep enough in the shield (that is, sufficiently shielded from the IP) so that the efficiency required to veto the stopped-parent secondaries produced downstream is not too high. At the same time, there must be sufficient shielding downstream of the veto to attenuate, relative to the shield veto itself, the stopped-parent secondaries: that is, the neutrals produced upstream of the veto layer that could still reach the detector (see Fig. 21). An additional consideration is that the veto rejection rate itself should be much smaller than the overall event rate, in order not to degrade the LLP signal detection efficiency. The nominal shield we consider has a so-called '(20 + 5)λ' configuration, with 20λ of Pb before the shield veto and 5λ after it, plus the additional 3 m of concrete (7λ) from the UXA wall.

Primary fluxes
The primary IP fluxes are generated by simulating the production of pions, neutral and charged kaons, (anti)muons, (anti)neutrons, (anti)protons and neutrinos with Pythia 8 [179,180]. The included production channels span minimum bias (QCD), heavy flavor decays (HF), and Drell-Yan production (DY). Leptons produced from pion decay vertices inside a cylindrical radius r < 5 m and z < 2 m are included. We simulate weighted Pythia 8 events, biasing the primary collisions in p̂_T in order to achieve approximately flat statistical errors in log(ŝ) up to √ŝ of a few TeV. Under the same procedure we also combine soft and hard QCD processes with a p̂_T cut of 20 GeV. A similar cut is used to define the HF sample. For the DY case we include both standard 2 → 2 Drell-Yan processes and V + j production, suitably combined to avoid double counting. In Fig. 22 we show all the relevant generated fluxes, broken down by production channel. In most cases QCD production dominates; however, HF and DY production can be important for the high energy muon tails.

Simulated shield propagation
Particle propagation and the production of secondary backgrounds inside the shield are simulated with Geant4 10.3 using the Shielding 2.1 physics list. High-energy interactions are modeled with the FTFP_BERT physics list, based on the Fritiof [181–184] and Bertini intra-nuclear cascade [185–187] models, together with the standard electromagnetic physics package [188].
Propagating ∼ 10^14–10^17 particles through the full shield is obviously computationally prohibitive. Instead, as in Refs. [8] and [24], we use a "particle gun" on a shield subelement, typically either 5λ or 2λ deep for Pb, and 7λ for concrete. The subelement geometry is chosen to be a conical section with the same opening angle as the CODEX-b geometric acceptance (approximately 23°), in order to conservatively capture forward-propagating backgrounds after mild angular rescattering. The particle-gun input and output are binned logarithmically in kinetic energy, in 20 bins from E_kin = 10^{-1.6} GeV to 100 GeV, and by particle species: γ, e^±, p, p̄, n, n̄, π^{±,0}, K^±, K^0_{S,L}, μ^± and ν. Charged and neutral particles and anti-particles are propagated separately for kaons, pions, neutrons and muons. For each particle-gun energy bin and species, 10^5 events are simulated; 10^7 events are simulated for muons and anti-muons to properly capture strange muoproduction of secondary K_L^0's. To also properly capture the 'CSDA' or slowing-down behavior of high energy muons transiting a large number of shield subelements, the particle-gun energy for muons is distributed uniformly in kinetic energy within each bin.
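The logarithmic energy binning can be reproduced directly (assuming, as stated, 20 bins spanning 10^{-1.6} GeV to 100 GeV):

```python
import numpy as np

# 20 logarithmic kinetic-energy bins from 10^-1.6 GeV (~25 MeV) to 100 GeV,
# as used for the particle-gun input and output.
edges = np.logspace(-1.6, 2.0, num=21)   # 21 edges -> 20 bins
print(f"{edges[0]:.4f} GeV to {edges[-1]:.0f} GeV in {len(edges) - 1} bins")
```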
Combining these results, one generates a "transfer matrix" between all incoming and outgoing background fluxes in the shield subelement, for each chosen depth and material type. These transfer matrices may then be composed together with the primary IP fluxes to obtain the attenuation and response of the full shield. Neutrino production of neutral hadrons occurs at a prohibitively small rate and is included separately. As muons may often generate problematic secondaries via forward scattering μ → Xμ, an additional handle for vetoing neutral secondaries is obtained by keeping track of any associated charged particles in the particle-gun event that may trigger the relevant veto layers: a charged-neutral correlation. This 'correlation veto' is implemented via an additional binning in the outgoing particle kinetic energy and the kinetic energy of the hardest charged particle in the event. This information is then used to generate an additional transfer matrix in which the outgoing particles are produced in association with a charged particle above a chosen kinetic energy threshold. We conservatively set the correlation veto threshold to E_kin > 0.6 GeV. At both the shield veto and the detector we apply an additional suppression of neutral secondaries according to their charged-neutral correlation.
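The transfer-matrix bookkeeping can be illustrated schematically. This is a toy sketch only, with invented dimensions (states indexed by species and energy bin) and random entries standing in for the binned Geant4 particle-gun yields; it is not the actual analysis code:

```python
import numpy as np

# Toy transfer-matrix composition: a state vector of fluxes binned by
# (species, kinetic-energy bin), and per-subelement matrices T such that
# flux_out = T @ flux_in.
n_species, n_ebins = 4, 20
dim = n_species * n_ebins

rng = np.random.default_rng(0)
# One matrix per subelement type (e.g. 5-lambda Pb, 7-lambda concrete);
# in the real simulation the entries come from particle-gun yields.
T_pb5      = rng.random((dim, dim)) * 1e-2
T_concrete = rng.random((dim, dim)) * 1e-2

ip_flux = rng.random(dim) * 1e12          # toy primary IP fluxes per bin

# Full shield = 5 x 5-lambda Pb subelements (25 lambda) + 7-lambda concrete:
T_shield = T_concrete @ np.linalg.matrix_power(T_pb5, 5)
detector_flux = T_shield @ ip_flux
print(detector_flux.shape)
```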
To incorporate statistical uncertainties of the Geant4 simulation, an array of 50 pseudo-datasets is generated by Poisson-distributing the statistics of each simulated particle-gun event. In practice one thus obtains 50 separate transfer matrix compositions for the shield simulation, from which the central values and uncertainties of the overall shield performance may be extracted.
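A toy version of this pseudo-dataset procedure, with invented shapes and counts (not the actual simulation output), shows how the spread of a derived quantity is extracted:

```python
import numpy as np

rng = np.random.default_rng(42)

# Poisson-fluctuate the raw particle-gun counts behind a transfer matrix,
# then look at the spread of a derived quantity across pseudo-datasets.
raw_counts = rng.integers(0, 50, size=(10, 10))     # toy binned Geant4 yields
n_generated = 1e5                                   # events per gun bin

surviving = []
for _ in range(50):                                 # 50 pseudo-datasets
    fluctuated = rng.poisson(raw_counts)            # Poisson-distributed stats
    T = fluctuated / n_generated                    # per-bin transfer probability
    flux_in = np.full(10, 1e9)                      # toy incoming flux
    surviving.append((T @ flux_in).sum())

mean, std = np.mean(surviving), np.std(surviving)
print(f"net flux = {mean:.3g} +- {std:.3g}")
```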
In Table 2 we show the results of this simulation for the (20 + 5)λ shield configuration made of Pb, plus the 3 m concrete UXA wall, with a shield veto inefficiency of 1 − ε_veto = 10^{-4}. For the outgoing neutral particle fluxes in Table 2, a kinetic energy cut E_kin > 0.4 GeV was applied, as required by the minimum tracking requirements to produce at least one track. Table 2 includes the background fluxes rejected by the shield veto, both with and without application of the charged-neutral correlation veto in the detector. The ∼ 60 neutrons may each produce a single track scattering event along the 10 m depth of the detector (see Sect. 3.2.5). The neutron incoherent scattering cross-section on air is ∼ 1 b, so that the probability of a neutron scattering on air into two tracks along the 10 m depth of the detector is at most ∼ 5%. Requiring neutrons with E_kin > 0.8 GeV for at least two tracks results in a total neutron flux of ∼ 3 per 300 fb^{-1}, so that the net scattering rate into two or more tracks is < 1. The shield veto rejection rate for the (20 + 5)λ configuration is 2.2 kHz, assuming the projected instantaneous luminosity of 10^34 cm^{-2} s^{-1} at IP8. (By comparison, for only 15λ of Pb before the veto, this would increase to 6.0 kHz.) This rate is dominated by the incoming muon flux, and is far smaller than the total event rate; it therefore has a negligible effect on the detector efficiency.

Fig. 22: IP production cross-section per kinetic energy bin, for minimum bias (QCD), heavy flavor (HF), and Drell-Yan (DY) production channels.
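The neutron-on-air scattering arithmetic quoted above can be checked explicitly (the mean atomic mass of air is an assumed round value):

```python
# Back-of-envelope check of the neutron-on-air scattering probability.
N_A      = 6.022e23
rho_air  = 1.2e-3        # g/cm^3, air density
A_air    = 14.5          # mean atomic mass of air (N2/O2 mix), approximate
sigma    = 1e-24         # cm^2, ~1 b incoherent n scattering per atom
depth    = 1000.0        # cm, 10 m detector depth

atom_density = rho_air * N_A / A_air          # ~5e19 atoms/cm^3
p_scatter = atom_density * sigma * depth      # ~5%, as quoted in the text
n_flux = 3.0                                  # neutrons with E_kin > 0.8 GeV
print(p_scatter, n_flux * p_scatter)          # net >=2-track rate below 1
```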
One sees in Table 2 that the shield veto is crucial to achieve a zero background environment. Moreover, for a low, rather than zero, background environment, the shield veto data provides a data-driven means to calibrate the background simulation, from which any residual backgrounds in the detector can then be more reliably estimated and characterized.
In Fig. 23 we show the net background flux distributions in kinetic energy (blue) for a variety of neutral and charged species, including uncertainties and without any E_kin cuts. These may be compared to the background flux distributions of particles reaching the detector that are rejected by the shield veto (red, scaled by 10^{-4}) and the IP fluxes (green, scaled by 10^{-12}).

Neutrinos
Table 2: Results from the Geant4 background simulation for the (20 + 5)λ Pb shield, i.e. with an active shield veto at 20λ, applying a veto inefficiency of 1 − ε_veto = 10^{-4}. For outgoing neutral particles a kinetic energy cut E_kin > 0.4 GeV was applied, as required by minimum tracking requirements, except for anti-neutrons, in order to exclude n̄ + N annihilation processes. We also show the rate for neutrons with E_kin > 0.8 GeV, required for production of at least two tracks via scattering. For total luminosity L = 300 fb^{-1}, the column "Net (E_kin^neutral > 0.4 GeV)" shows the net background particle yields after traversing the shield plus veto rejection, including veto correlations (denoted '±/0') between charged particles with E_kin > 0.6 GeV and neutral particles. The column "Shield veto rejection (total)" shows the corresponding background particle yields entering the detector subject to the shield veto rejection alone, without applying the charged-neutral correlation veto. The column "Shield veto rejection (±/0 correlation)" shows the corresponding yields after application of the charged-neutral correlation veto on the detector front face. The final column lists the net background yield including detector rejection, scattering or decay probabilities. [Only fragments of the table body survive extraction here, e.g.:
… × 10^6 | (1.04 ± 0.00) × 10^10 | (1.04 ± 0.00) × 10^10
μ⁻ | (8.07 ± 0.01) × 10^5 | (8.07 ± 0.01) × 10^9 | (8.07 ± 0.01) × 10^9]

An additional background may arise through production of neutral secondaries from neutrinos, which stream through the shield unimpeded. In particular, with a sufficiently high neutrino flux, ν̄p → nℓ⁺ quasi-elastic scattering may produce a non-negligible number of neutrons in the last few λ that reach the detector volume, while the charged lepton is too soft or misses the acceptance.
(The cross-section for the neutral current scattering νn → νn or ν̄n → ν̄n is approximately 10 times smaller than for the charged current process [189].) From Table 2 and Fig. 23, approximately 5 × 10^13 neutrinos are produced per 300 fb^{-1} at IP8 in the CODEX-b acceptance with E_ν > 0.4 GeV. The neutrino flux is suppressed approximately as the fourth power of the energy above E_ν ∼ 1 GeV. Hence, although the charged current cross-section for ν̄p → nℓ⁺ is only ∼ 0.01 (E_ν/GeV) pb [189], the large flux of O(GeV) neutrinos streaming through the shield implies that as many as ∼ 10 neutrons with E_kin > 0.4 GeV might be generated per λ of concrete in the UXA wall. Neutral kaon production, such as νn → νK_L^0, has a cross-section ∼ 0.1 fb for E_ν ∼ 3.5 GeV [190], scaling approximately linearly with the neutrino energy, and may therefore be safely neglected.
Composing the IP flux of anti-neutrinos in Fig. 22 with E_ν > 0.4 GeV with the measured energy-dependent ν̄p → n cross-section [189], and including the subsequent attenuation as characterized by the nuclear interaction length λ, one may estimate a conservative upper bound on the number of neutrons that reach the detector with E_kin > 0.4 GeV. From this procedure one finds at most approximately 5 neutrino-produced neutrons of this type. Since this estimate is extremely conservative, neutron production from neutrinos is expected to be negligible compared to secondary neutrons from other primary fluxes.
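The ∼ 10 neutrons per λ quoted above can be recovered with a one-line estimate: by definition of λ, the probability of any hadronic interaction over one interaction length is of order one, so the neutrino interaction probability per λ is roughly σ_ν/σ_hadronic. The ∼ 40 mb hadronic cross-section per nucleon is an assumed typical value:

```python
# Rough upper bound on neutrino-induced neutrons per nuclear interaction
# length of shield material.
nu_flux    = 5e13            # anti-neutrinos, E > 0.4 GeV, per 300 fb^-1
E_nu       = 1.0             # GeV, typical energy (flux falls fast above this)
sigma_nu   = 0.01e-12 * E_nu # b, ~0.01 pb x (E/GeV) for nubar p -> n l+
sigma_had  = 0.040           # b, assumed hadronic cross-section per nucleon

n_neutrons_per_lambda = nu_flux * sigma_nu / sigma_had
print(n_neutrons_per_lambda)  # of order 10, consistent with the text
```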

Shielding marginal performance
As the tolerance of the detector to backgrounds may vary depending on the detector technologies ultimately implemented, it is instructive to assess the performance of the shield under variation of the shielding configuration, including variation of the total shield depth, L_shield, and the placement and inefficiency of the shield veto. We illustrate the marginal changes in shielding performance in terms of the total neutron and K_L^0 fluxes for kinetic energy E_kin > 0.4 GeV, varying the shield configuration in the combinations that the 5λ or 2λ transfer matrices permit, and letting the veto inefficiency range from 1 − ε_veto = 10^{-5} up to 10^{-2}. The resulting fluxes are shown in the top panel of Fig. 24. The simulated background fluxes are generally insensitive to marginal variation in the location of the shield veto: e.g. one sees that (19 + 4)λ performs similarly to the nominal (20 + 5)λ configuration. At very high veto efficiencies, i.e. 1 − ε_veto < 10^{-4}, both background fluxes fall roughly exponentially with shield depth. In this case the backgrounds are either unsuppressed primaries or stopped-parent secondaries produced upstream of the shield veto.
As the shield veto efficiency is reduced, however, one sees a departure from this exponential suppression: contributions from stopped-parent secondaries produced downstream of the shield veto begin to dominate. For the K_L^0 flux, this departure occurs only at 1 − ε_veto > 10^{-2} and at larger L_shield, compared to the neutrons. This arises because of a somewhat larger charged-neutral correlation for production of K_L^0's: their parent muons are typically somewhat harder and may reach the veto or detector.
One may assess the degree of this effect (effectively, the contribution of non-stopped parent secondaries) by considering the case in which the charged-neutral correlation veto is not applied (in practice, the associated charged particles may not always trigger the veto). The corresponding shielding performance is shown in the bottom panel of Fig. 24. For 1 − ε_veto > 10^{-3}, the background fluxes become substantially larger. One deduces that, especially for K_L^0's, charged parents that produce secondaries downstream of the shield veto may typically reach the detector. In Fig. 25 we show the neutron and K_L^0 fluxes in the absence of a shield veto (1 − ε_veto = 1), both with (light palette) and without (dark palette) the charged-neutral correlation veto in the detector (denoted '±/0'). One sees that while the charged-neutral correlation veto can significantly reduce the total fluxes, substantial net backgrounds of K_L^0's and neutrons remain.
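The qualitative behaviour described above can be captured by a two-component toy model (illustrative normalizations only, not the simulation): an exponentially attenuated primary component, plus downstream stopped-parent secondaries suppressed only by the veto inefficiency.

```python
import math

def net_flux(L_total, veto_ineff,
             n_primary=1e14, n_muon=1e9, yield_per_muon=1e-6):
    """Toy detector-bound neutral flux vs. total shield depth (in lambda)."""
    primaries = n_primary * math.exp(-L_total)          # exponential in depth
    secondaries = n_muon * yield_per_muon * veto_ineff  # veto-limited floor
    return primaries + secondaries

for ineff in (1e-5, 1e-4, 1e-2):
    print(f"1 - eps_veto = {ineff:.0e}: flux ~ {net_flux(32, ineff):.2g}")
```

At high veto efficiency the exponential term dominates, while for larger inefficiencies the flat, veto-limited floor takes over, reproducing the departure from exponential suppression seen in Fig. 24.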

Simulated track production
The simulated neutral background fluxes entering the detector may be folded with the probability of scattering into one or more tracks on material inside the detector, or with the probability of decay into one or more tracks in the detector interior. Nominally, the rate of events with two or more tracks should be < 1 to ensure a background-free environment. In Table 3 we show the rates of multitrack production for 1–9 tracks from scattering of the neutral fluxes on air in the 10 × 10 × 10 m³ detector volume, for the nominal (20 + 5)λ Pb shield configuration with 1 − ε_veto = 10^{-4}. This production is simulated with Geant4 as in Sect. 3.2.2, requiring each track to have kinetic energy E_kin > 0.4 GeV.
For a total luminosity L = 300 fb^{-1}, one sees that the total number of scatterings or decays into two or more tracks is 0.22 ± 0.03. This comports with our simulation and estimation of the background effective yields in Table 2.

Measurement campaign
To verify the background simulation, a data-driven calibration is needed, using data taken during collisions at IP8. This section briefly summarizes an initial measurement campaign undertaken in August 2018, during Run 2 operations, at various locations in the UXA cavern shielded only by the UXA wall. For a detailed description we refer to Ref. [191], which also features additional detailed background simulations, including the effects of infrastructure in the LHCb cavern.
The detector setup used scintillators, light-guides and photomultiplier tubes (PMTs) from the HeRSCheL experiment [192] at LHCb. The detector itself consisted of two parallel scintillator plates with surface area $300 \times 300~\mathrm{mm}^2$; details can be found in Ref. [191]. Before transporting the setup to IP8, it was tested with cosmic rays, indicating an efficiency $> 95\%$ for minimum ionizing particles (MIPs), i.e. for particles with kinetic energy $\gtrsim 100$ MeV. In the simulation of Ref. [191], it was further verified that no collision event produced more than one hit within the scintillator acceptance, such that pile-up of hits within the trigger window can be ruled out. A two-fold coincidence between the two scintillator plates was required to trigger the detector, and the trigger was not synchronized with the collisions. However, the coincidence time-window was 5 ns, much shorter than the 25 ns bunch spacing at IP8, so that spill-over effects can be neglected. Two waveforms were recorded from each scintillator, along with a timestamp for all MIP hits. This timestamp was used to correlate the events with the beam status during data-taking.

Table 3: Multitrack production on air from the Geant4 background simulation for the $(20+5)\lambda$ Pb shield with $1-\epsilon_{\rm veto} = 10^{-4}$, in the $10 \times 10 \times 10~\mathrm{m}^3$ detector volume for total luminosity $\mathcal{L} = 300~\mathrm{fb}^{-1}$, requiring $E_{\rm kin} > 0.4$ GeV per track. Also shown are the corresponding rates for total neutral and $K^0_L$ multitrack production during Run 3 in the CODEX-$\beta$ volume for total luminosity $\mathcal{L} = 15~\mathrm{fb}^{-1}$ (see Sect. 4). Columns: number of tracks; $(20+5)\lambda$ Pb shield with $1-\epsilon_{\rm veto} = 10^{-4}$; Run 3 (CODEX-$\beta$) total; Run 3 (CODEX-$\beta$) $K^0_L$ contribution.
The measurements were taken on the "D3 platform" level in the UXA hall, behind the concrete UXA wall. Figure 26 shows this platform, and the different locations and configurations used for the data-taking. The detector was deployed at three different positions on the passerelle between the Data Acquisition (DAQ) server racks and the UXA wall, as well as at one location between the DELPHI barrel exhibit and the DAQ racks. The scintillator stand was oriented either parallel ('∥'), rotated 45°, or perpendicular ('⊥') to the beam line.
The measurement period spanned 17 days between 25th July and 10th August 2018, with 52,036 triggers recorded during the run. The instantaneous luminosity at IP8 was stable during the measurement period, though there was no beam until July 30th due to machine development and an inadvertent power loss. Figure 27 shows the main results from the measurement campaign. The red data points represent the instantaneous luminosity measured by LHCb in Hz/nb. The green and blue data points indicate the hit rate in Hz, as the setup was alternated between the six different configurations/positions; the plots are shown in both linear and logarithmic scales. Table 4 contains the hit rate from ambient background without beam, with an average hit rate at each position and configuration of 2 mHz; the ambient background can therefore be considered negligible for this measurement. Table 5 shows the rate during stable beam. This rate is non-negligible, even for the small $300 \times 300~\mathrm{mm}^2$ area of the scintillators. The rate increases from location P1 to P2 to P4, which, from Fig. 26, implies that there is more activity in the downstream region. This dependence on η arises from additional concrete near IP8, which screens part of the CODEX-b acceptance; see Ref. [191]. Moreover, by comparing the rate at P2 with that at P5, behind the DAQ server racks, one can see that the racks add some amount of shielding material. As expected, the flux also depends on the orientation with respect to the beam direction, as indicated by the difference in rate between P5 and P6. In absolute numbers, the rate just behind the concrete wall is roughly 0.5 Hz over the $900~\mathrm{cm}^2$ scintillator area.
The Geant4 simulation of Sect. 3.2, with just the concrete UXA wall acting as a shield, predicts a hit rate of ∼10 Hz at position P2, assuming an instantaneous luminosity of ∼0.4 Hz nb$^{-1}$, as in Fig. 27. This reduces to ∼5 Hz when treating the full width of the UXA wall as $7.5\lambda$ of standard concrete. This prediction will likely receive further $O(1)$ reductions from: relaxing our conservative treatment of forward-propagating backgrounds under angular rescattering; accounting for the longer propagation path length through the wall at higher angles of incidence; variations or uncertainties in the simulation of the primary muon fluxes; and accounting for possibly additional material in the line-of-sight, such as concrete near the IP and the platform [191]. Nonetheless, comparison to the measured 0.2 Hz rate demonstrates that the Geant4 simulation of Sect. 3.2 provides conservative estimates of the expected backgrounds.

Fig. 26: The four measurement locations on the D3 level in the LHCb cavern, shown by red, orange, green and blue dots. The configurations are labelled P1-P6. Figure reproduced and modified from Ref. [191], as adapted from Ref. [8].

Fig. 27: Hit rates during the run for the six P1-P6 positions/configurations, on a linear (top) and log (bottom) scale. Red data points denote the luminosity rate of LHCb; blue and green data points denote hit rates. Figure reproduced from Ref. [191].

CODEX-β
To validate the CODEX-b concept, a proposal has been developed for a small, $2 \times 2 \times 2~\mathrm{m}^3$ demonstrator detector, "CODEX-β", which will be operational during Run 3. This detector will be placed in the proposed location for CODEX-b (the UXA hall, sometimes referred to as the 'DELPHI cavern'), shielded only by the existing concrete UXA radiation wall.

Motivation
The main goals of the CODEX-β setup are enumerated as follows: (a) Demonstrate the ability to detect and reconstruct charged particles which penetrate into the UXA hall, as well as the decay products of neutral particles decaying within the UXA hall.
This is desirable to provide an accurate and fully data-driven estimate of the backgrounds, so that the design of the eventual shield (both passive and active, i.e. instrumented) needed by the full experiment can be optimized to be as small as possible. We have already made preliminary background measurements in the UXA hall using a pair of scintillators during Run 2 (see Sect. 3.3). However, these measurements were simply of hit counts; we could not reconstruct particle trajectories. CODEX-β will allow us to track particles within a volume similar to the CODEX-b fiducial volume, and in particular to separate charged particles produced outside the decay volume from backgrounds induced by particle scattering inside the decay volume itself.
As shown in Sect. 3, the residual neutron scattering inside the decay volume is one of the most important backgrounds identified in the original CODEX-b proposal [8]. The tracking capability of CODEX-β will also allow us to measure the origin of charged particle backgrounds, and in particular potential soft charged particles which could be swept towards CODEX-b by LHCb's magnet "focusing" and thereby evade the Pb shield.
(b) Detect and reconstruct a significant sample of neutral particles decaying inside the hermetic detector volume.
This will allow us to observe e.g. $K^0_L$ decays and use these data to calibrate our detector simulation. Aside from measuring background levels, observing long-lived SM particles decaying inside the detector acceptance will allow us to calibrate the detector reconstruction and the RPC timing resolution. The most natural candidates are $K^0_L$ mesons. In Fig. 28 we show the expected differential fluxes of neutrons, antineutrons, $K^0_L$'s and (anti)muons, with respect to their kinetic energy, for an integrated luminosity of 15 fb$^{-1}$ after propagation through the UXA wall; also shown are the primary fluxes of the same species. In Table 3 we show the expected multitrack production from decay or scattering on air by the neutral fluxes entering CODEX-β, requiring $E_{\rm kin} > 0.4$ GeV per track, as well as the multitrack contribution from $K^0_L$'s alone. One sees that a few $\times 10^7$ $K^0_L$ decays to two tracks are expected in the CODEX-β volume per nominal year of data-taking in Run 3. The results of the background simulation thus show that we will be able to reconstruct a variety of $K^0_L$ decays in the CODEX-b demonstrator volume. The decay vertex and decay product trajectories moreover allow the boost to be reconstructed independently of the time-of-flight information. Comparing the boost distribution of $K^0_L$ mesons observed in CODEX-β, as well as the $K^0_L$ mean decay time which can be inferred from this distribution, with simulation will allow us to calibrate and validate our detector simulation and reconstruction.

The RPC readout is compatible with LHCb's data acquisition hardware. Some relatively straightforward firmware development will be required to enable LHCb's usual FPGA backend readout boards to receive the CODEX-b data. Based on expected data rates, we estimate that a single FPGA backend readout board will comfortably be able to read out the full CODEX-β detector.
From an LHCb point of view, the simplest solution would be for this board to also cluster the RPC hits and perform a basic track reconstruction, so that events which look interesting for CODEX-β can be kept for further inspection by LHCb's High Level Trigger simply by reading the CODEX-b raw data bank. Given that CODEX-b is about the same distance from the interaction point as LHCb's muon system, latency should not be an issue. Our background measurements in the cavern during Run 2 indicated hit (not track) rates of at most 500 mHz across a scintillator area of order $10^{-1}~\mathrm{m}^2$. Therefore, even a simple track reconstruction should allow all interesting events in CODEX-b to be kept for offline inspection. It will also be desirable to be able to read out CODEX-b during beam-off periods, for cosmic-ray data-taking and calibration. CODEX-b will therefore ideally appear as a sub-detector within LHCb, though one whose presence/readiness is not required for nominal LHCb data-taking.

Technical description and timetable
The high-level requirements listed above drive the design of CODEX-β to be a $2 \times 2 \times 2~\mathrm{m}^3$ cube. Each side of the cube will consist of 2 RPC panels, each $2 \times 1~\mathrm{m}^2$ in area, and each such panel will contain a triplet of RPC layers. In addition, two panels of the same $2 \times 1~\mathrm{m}^2$ area will be placed in the middle of the cube, for a total of $(6+1) \times 2 \times 3 = 42$ such $2 \times 1~\mathrm{m}^2$ RPC layers. CODEX-β is proposed for installation in the barrack which housed LHCb's Run 1 and Run 2 High Level Trigger farm, and which will be empty in Run 3 as the High Level Trigger will be housed in a dedicated data processing center on the surface. As a result, CODEX-β will have ample space and straightforward access to all required detector services. The proposed detector technology for CODEX-β is that of the ATLAS Phase I upgrade RPCs, while the full CODEX-b detector would follow the Phase II design (see [193] for technical descriptions). The timetable for installation is driven by the primary consideration of not interfering with the building or commissioning of the LHCb upgrade. For this reason, we originally proposed installation in winter 2021/2022, integration into the LHCb DAQ during spring 2022 and first data-taking in summer 2022. Given the COVID pandemic and the subsequent delay of LHC Run 3 data-taking to spring 2022, we have chosen to push back installation and all subsequent steps by one year. Although this will mean missing out on 2022 data-taking, that is not so crucial for a demonstrator, and it avoids having to simultaneously commission CODEX-β and the upgraded LHCb detector. Given the modest size of CODEX-β and the use of well-understood detector components, we estimate around six months are needed to produce and qualify the RPCs. Therefore, if approved, it is realistic to complete the bulk of the construction during the first half of 2021. The mechanical support structure will build on existing structures used in ATLAS, but modified to be modular.
It will provide the required stability for a cubic arrangement of the detector layers; the design and construction of this structure is expected to take place in 2020-2021. The total cost of the detector components is expected to be roughly 150 k€.

New physics reach
The acceptance of CODEX-β is only roughly $8 \times 10^{-3}$ times that of the full CODEX-b detector, and no shielding beyond the existing concrete wall will be in place. Its reach for BSM physics is therefore limited by its reduced acceptance and high-background environment. However, roughly $10^{13}$ $b$-hadrons will be produced at IP8 during Run 3. This enables CODEX-β to probe some new regions of parameter space in those cases in which the LLP production branching ratio from e.g. $B$ decays is independent of its lifetime (cf. Fig. 10).
This scenario can arise e.g. in models that address the baryogenesis puzzle [66,67,166-168], as described in Sect. 2.4.9. We take this model as a representative example. In our simplified phenomenological setup we consider a new particle χ with couplings $\lambda_{bsu}$ and $\lambda_{udd}$, which we assume for simplicity to be independent. The former is responsible for production via e.g. $B \to X_s \chi$ decays, where $X_s$ here is a SM (multi)hadronic state with baryon number ±1; the latter induces the decay of χ to an (anti-)baryon plus a number of light mesons. $\mathrm{Br}[B \to X_s \chi]$ and $c\tau$ are then independent parameters. The $\lambda_{udd}$ coupling moreover must be parametrically small, to avoid exotic dinucleon decays [194,195], implying that χ must be long-lived. In Run 3, as shown in Fig. 28, we expect roughly $10^9$ $K^0_L$'s and $4 \times 10^9$ neutrons to enter the CODEX-β fiducial volume for an integrated luminosity of 15 fb$^{-1}$, requiring $E_{\rm kin} > 0.4$ GeV. This is desirable for calibration purposes, as explained above. An additional $4 \times 10^6$ antineutrons also enter, with no kinetic energy cut. We show the corresponding multitrack production rates for CODEX-β in Table 3. The multitrack background falls relatively fast with the number of tracks, partly because of the relative softness of the fluxes emanating from the shield and partly because the $K^0_L$'s mainly decay to no more than two tracks. We therefore define the LLP signal region as those events with 4 or more reconstructed tracks, also requiring $E_{\rm kin} > 0.4$ GeV from expected minimum tracking requirements. The expected number of background events in the signal region is then roughly $8.5 \times 10^4$ per 15 fb$^{-1}$. In the actual experimental setup this number can be calibrated from a control sample with fewer than 4 tracks, if the ratio of the two regions is taken from Monte Carlo.

Fig. 29: CODEX-β reach for a long-lived χ decaying hadronically.
For a signal benchmark with $m_\chi = 3$ GeV, the probability of decaying to 4 tracks with $E_{\rm kin} > 0.4$ GeV is roughly 15%, as estimated with Pythia 8. In Fig. 29 we show the estimated 2σ limit reach under these assumptions. For comparison, we also show the reach of the full CODEX-b detector, as well as the reach using a ≥ 2 track selection, for which the background is roughly three orders of magnitude higher. (This is partially compensated for by a higher signal efficiency.) At this stage, no attempt has been made to discriminate signal from background by making use of angular variables, in particular pointing to the interaction point; in this sense the estimated reach is conservative.
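The scaling behind these statements can be illustrated with a simple Gaussian counting-experiment estimate, $s \simeq 2\sqrt{b}$ for a 2σ exclusion. This is our rough approximation, not the statistical treatment used for Fig. 29:

```python
import math

# Rough 2-sigma limit-setting scaling, s = 2*sqrt(b), for the
# >=4-track signal region (background taken from the text).
b_4track = 8.5e4                       # expected background per 15 fb^-1
s_needed = 2.0 * math.sqrt(b_4track)
print(f"signal events for 2 sigma: {s_needed:.0f}")  # ~583

# A >=2-track selection has ~10^3 times more background, so the
# required signal grows by ~sqrt(10^3) ~ 32, partially offset by
# the higher signal efficiency of the looser selection.
print(f"background penalty factor: {math.sqrt(1e3):.0f}")  # ~32
```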
For completeness, we also include a preliminary estimate of the reach of LHCb itself for this signature with 15 fb$^{-1}$ of data, analyzing all decay products that can be reconstructed as a track at LHCb: $e$, $\mu$, $p$, $K^\pm$ and $\pi^\pm$. To reconstruct a χ vertex, we first require each pair of tracks to pass within 1 mm of each other. We build the position of these vertices by finding the point that minimizes the distance to each pair of tracks, and then average all the resulting vertices to obtain the χ decay vertex. To build the $B^+$ decays, the χ vertex is required to be not more than 1 mm away from a $K^+$. For the background, SM $B^+$ decays are considered, subject to the same reconstruction criteria; all other backgrounds are neglected.
The analysis cuts are listed in Table 6. For the rest of the experimental efficiencies, we estimate a 97% efficiency [196], and take the remaining efficiencies to be 100%. Signal and background were binned according to 4 or more tracks and 6 or more tracks in the secondary vertex. We estimate the mass resolution, σ, for 4- and 6-body decays of the χ particle to be ∼12 and ∼21 MeV, respectively. This estimate is based on a study of 3- and 4-body $B$ and $D$ meson decays at LHCb [197-200], interpolating or extrapolating to the appropriate track multiplicity. Following a similar procedure, we estimate the $B^+$ meson mass resolution to be ∼24 and ∼36 MeV for 5- and 7-body decays, respectively. To determine the background yields, we cut in ±2σ windows around the $B^+$ and χ invariant masses. To determine the limits, we take $\sigma_{b\bar{b}}$ at $\sqrt{s} = 14$ TeV to be 500 μb [201], and the fraction of $b$ quarks hadronising to a $B^+$ to be 40% [202]. Combining this, we compute the limits on the branching ratio of the $B^+$ decay for both the 4+ and 6+ track bins. The projected limit shown in Fig. 29 is the stronger of the two limits at each $c\tau$ point. In addition, we show a rough estimate of the reach at the HL-LHC, obtained by rescaling the limits with the square root of the ratio of the luminosities.

Table 6: List of analysis cuts for the LHCb reach estimate for a hadronically decaying, long-lived particle (χ). SV$_z$ and SV$_R$ respectively stand for the longitudinal and transverse position of the secondary vertex, and θ is the angle of the track with respect to the beam axis.
One sees that CODEX-β and the main LHCb detector will have complementary sensitivity to this benchmark scenario, with likely better sensitivity from the LHCb search. However, it is conceivable that CODEX-β may set an earlier limit than an LHCb analysis on Run 3 data, especially given the comparatively simpler analysis required for the former. In both estimates no attempt was made to further reduce the backgrounds by means of kinematic cuts, so both projections are conservative.

Detector case studies
In this section we discuss various detector studies, as well as possible extensions of the baseline CODEX-b detector configuration. As CODEX-β is based on the same underlying technology as the full detector, most results apply directly to it as well. One caveat, however, is that CODEX-β will use a simplified front-end readout based on FPGA cards, and will therefore have a significantly poorer timing resolution of around $800/\sqrt{12}$ ps per gas gap. This resolution will nevertheless be comfortably sufficient to integrate CODEX-β into the LHCb readout and to validate the detector concept.

Design drivers
The geometry and required capabilities of the tracking stations are informed by the signal benchmarks in Sect. 2. The main design drivers are: (a) Hermeticity: As discussed in Sect. 1.2, the primary motivation for CODEX-b is to cover relatively low-energy signals, as compared to e.g. SUSY signatures. In many benchmark models (see in particular Sects. 2.3.2 and 2.3.4) the LLPs are therefore only moderately boosted, and large opening angles are common. To achieve optimal signal efficiency, it is therefore desirable to place tracking stations on the back-end, top, bottom and sides of the fiducial volume. This is illustrated in Fig. 30, which shows the distribution of hits on the various faces of the detector for an example $h \to A'A'$ model (see Sect. 2.3.1) with $m_{A'} = 5$ GeV and $A' \to \tau\tau$, with the τ's decaying in the 3-prong mode. The majority of the hits land on the back face of the fiducial volume, but the sides, top and bottom cannot be neglected. This is despite the relatively high boost of this benchmark model, as compared to models in which, e.g., the LLP is produced in a heavy flavor decay. It may be feasible to instrument those faces that typically see fewer hits more sparsely than the nominal design outlined in Sect. 1.3. Studies to this effect are under way for a wider range of models and will inform the final design.
Finally, some tracking stations on the front face are needed to reject backgrounds from charged particles emanating from the shield, primarily muons (see Sect. 3). For these stations, resolution is less important than efficiency, and alternative technologies (e.g. scintillator planes) may be considered.

(b) Vertex resolution:
Assuming that no magnetic field will be available in the CODEX-b decay volume, good vertex resolution is essential to convincingly demonstrate a signal. The baseline design calls for 6 RPC layers in each tracking station covering the walls, while the five internal tracking stations would have 3 RPC layers each. This is, however, subject to further optimization, as fewer RPC layers may be needed on some of the less important walls of the cube, as indicated by Fig. 30. The most important parameter is the distance to the first tracker plane, which motivates the five additional tracking stations spread throughout the fiducial volume, in order to achieve vertex resolutions on the order of millimeters rather than centimeters. For signals characterized by a high boost (e.g. Higgs decays, Sect. 2.3.1), the vertex resolution also impacts the signal reconstruction efficiency, as tracks tend to merge. The reconstruction efficiency corresponding to the nominal design for tracks from such an exotic Higgs decay, $h \to A'A'$, is shown in the right-hand panel of Table 7, under the requirements:
• the track momentum is greater than 600 MeV (trivially satisfied for this benchmark);
• each track has at least 6 hits; and
• the first hit of each track is unique.
It is the last requirement which can fail for a highly boosted LLP.
Once the LLP is required to decay in the fiducial volume, its proper lifetime, $c\tau$, is inversely correlated with its boost, so that a larger $c\tau$ typically implies a better reconstruction efficiency, as shown in the right-hand panel of Table 7. For particles with a boost factor of $O(100)$ or more (roughly, $c\tau < 0.1$ m), the nominal design nonetheless achieves $O(1)$ reconstruction efficiencies.
(c) Track momentum threshold: The momentum threshold that can be achieved is especially relevant for two types of scenarios:
• LLPs produced in hadron decays are typically relatively soft, and in order to maintain an $O(1)$ reconstruction efficiency, the track momentum threshold should be kept roughly around 600 MeV or lower. This is illustrated by the efficiency numbers in the left-hand panel of Table 7 for the $B \to X_s S$ portal of Sect. 2.3.2; here, all losses in efficiency are due to the 600 MeV threshold assumed in the simulation.
• Inelastic dark matter models (see Sect. 2.4.4) are characterized by an LLP decaying to a nearby-in-mass invisible state (the dark matter) and a number of soft SM tracks. Given the small amount of phase space available to the SM decay products, the reach of CODEX-b for this class of models is very sensitive to the threshold that can be achieved, as shown in Fig. 18.

Studies performed
A number of initial tracking studies have been performed to validate the design requirements outlined above. They further explore a number of CODEX-b design configurations, a variety of signals, and novel methods for particle boost reconstruction [203,204]. Work is ongoing to integrate the CODEX-b detector into the LHCb simulation framework. This will facilitate the optimization of different reconstruction algorithms, as well as the study of how the information from both detectors could be combined. For these initial studies, a simplified Geant4 [205] description of CODEX-b was implemented, following the nominal design specifications [8], but without RPC faces on the top or bottom of the detector. The active detector material was modeled using silicon planes with $2~\mathrm{cm}^2$ granularity and the same radiation length as the RPCs proposed in the nominal design. Signal events of di-electron and di-muon candidates were then passed through this simulation to determine the detector response. Within this preliminary study, no attempt was made at modeling detector noise.

An initial clustering algorithm, based on a CALICE hadronic shower clustering algorithm [206], was designed to combine nearest-neighbor energy deposits within the RPC layers, passing a minimum threshold, into clusters. As expected, this clustering was found to be necessary for electrons, but had little impact on reconstructing charged pion and muon signals. After clustering, an iterative linear track-finding algorithm was run. Both back-to-forward and forward-to-back algorithms were implemented, as well as various iterative approaches. For the expected signals within CODEX-b, the back-to-forward tracking algorithm was found to provide the best performance.

Fig. 31: Example opening angle reconstruction and resolution for an initial cluster and track building algorithm, using 1 GeV electrons produced from a two-body decay at the front of the detector [203].
The tracking also performed well for more complex n-body signals without a common decay vertex, e.g. emerging jets.
The opening angle reconstruction as a function of the true opening angle for electrons with momenta of 1 GeV, produced from a two-body decay at the front of the detector, is shown in Fig. 31. For opening angles above 0.2 rad the algorithm provides a flat resolution of 20% and a ratio close to unity between the true and reconstructed opening angle. The tracking efficiency for single electrons with momentum greater than 0.5 GeV is 0.95. This efficiency is also dependent upon the local detector occupancy: For two-body decays at the front of the detector with an opening angle less than 0.05 radians the efficiency rapidly drops off, as the individual tracks of the two electron candidates can no longer be resolved. For muons these efficiencies are closer to 1.0, even down to momenta of 0.5 GeV.
While momentum information is not available for individual tracks, it is still possible to estimate the boost of an n-body signal decay. For a two-body decay with small masses, the parent boost can be analytically approximated by assuming relativistic decay products [207]. A study was performed looking at a six-body decay of the form $X \to \tau\tau$ with the simplified final-state decay $\tau^- \to \pi^-\pi^-\pi^+\nu$, where the missing energy of the neutrino and the resonance structure of the decay were ignored [204]. A neural net was trained on eight decay-topology features: the six opening angles of the pions and the angles of the two three-pion combinations most consistent with a τ-decay topology. The true boost versus the reconstructed boost for a narrow 20 GeV resonance is shown in Fig. 32. The boost resolution approaches 4% for resonance masses greater than 30 GeV. While this is a preliminary study, it demonstrates promise for reconstructing complex final states.

Fig. 32: Six-body boost reconstruction for a narrow 20 GeV resonance decaying into $\tau\tau$ with $\tau^- \to \pi^-\pi^-\pi^+$ [204].
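The two-body analytic approximation referenced above can be sketched as follows. We assume the textbook relation $\sin(\theta_{\rm min}/2) = 1/\gamma$ for the minimum opening angle of a symmetric decay into effectively massless products, i.e. $\gamma \approx 2/\theta$ at large boost; the exact estimator of Ref. [207] may differ:

```python
import math

def boost_from_opening_angle(theta):
    """Estimate the parent Lorentz factor gamma from the daughters'
    opening angle (radians), for a symmetric relativistic two-body
    decay into light products: sin(theta/2) = 1/gamma."""
    return 1.0 / math.sin(theta / 2.0)

# For large boosts this reduces to gamma ~ 2/theta:
print(f"theta = 0.10 rad -> gamma ~ {boost_from_opening_angle(0.10):.0f}")  # ~20
print(f"theta = 0.05 rad -> gamma ~ {boost_from_opening_angle(0.05):.0f}")  # ~40
```

For asymmetric decays the measured opening angle exceeds $\theta_{\rm min}$, so this estimator gives a lower bound on the boost; the neural-net approach above generalizes this idea to the six-body topology.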

Timing
The baseline design employs RPC tracking detectors, which are expected to have a timing resolution of 350 ps per single 1 mm gas gap [208], corresponding to a resolution of $350~\mathrm{ps}/\sqrt{6} \approx 142$ ps per station of 6 layers. The primary function of the timing capability is to synchronize the detector with the main LHCb detector. This enables one to match LHCb events with CODEX-b events, and to characterize and reject possible backgrounds. In particular, backgrounds induced by cosmic muons will be out of time with the collisions. As explained in Sect. 3, there is a sizable flux of relatively soft, neutral hadrons emanating from the shield. These hadrons can scatter or decay in the detector, in the cases of neutrons and $K^0_L$'s respectively, leading to a number of slow-moving tracks. For example, over a distance of 2 m between two stations, a timing resolution of ∼150 ps would allow one to reliably identify particles traveling at $\beta \lesssim 0.975$. For the example of a $\pi^\pm$, this corresponds to a momentum $\lesssim 0.6$ GeV.
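The numbers quoted in this paragraph follow from elementary kinematics; a short illustrative check (our arithmetic, using standard constants):

```python
import math

C = 0.299792458   # speed of light, m/ns
M_PI = 0.13957    # charged pion mass, GeV

# Per-station timing resolution for 6 gas gaps of 350 ps each
sigma_station = 350.0 / math.sqrt(6)   # ~143 ps

# Over L = 2 m, a particle at velocity beta arrives later than a
# beta = 1 particle by dt = (L/c) * (1/beta - 1).
L, beta = 2.0, 0.975
dt_ps = (L / C) * (1.0 / beta - 1.0) * 1e3   # ns -> ps; ~171 ps, > ~150 ps resolution

# Momentum of a charged pion at beta = 0.975: p = m * beta * gamma
gamma = 1.0 / math.sqrt(1.0 - beta**2)
p_pi = M_PI * beta * gamma   # ~0.61 GeV

print(f"sigma/station ~ {sigma_station:.0f} ps, dt ~ {dt_ps:.0f} ps, p_pi ~ {p_pi:.2f} GeV")
```

The time delay of a $\beta = 0.975$ particle over 2 m is indeed just above the per-station resolution, consistent with the quoted $\beta \lesssim 0.975$ and $\lesssim 0.6$ GeV figures.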
The timing capability of the RPCs is driven by the fluctuations of the primary ionization in the gas gap, and not by the readout electronics, which can be designed to achieve resolutions of the order of 10 ps. It is possible that further development of the RPC technology may allow us to push this intrinsic timing resolution below 150 ps. Figure 33 shows the degree of separation which could be achieved for the $B \to X_s S$ portal (see Sect. 2.3.2) with more optimistic assumptions for the timing resolution.

Calorimetry
Calorimetry would provide several important capabilities, notably particle identification (PID) via energy measurement and mass reconstruction, and the ability to expand the visible final states to include neutral hadrons and photons.
PID itself permits determination of the LLP decay modes, which could be crucial to identifying the quantum numbers and physics of the LLP itself. For instance, reconstructing a $\mu^+\pi^-\pi^0$ final state might suggest a leptonic LLP coupling to a charged current, while even measuring the relative $e^+e^-$ versus $\mu^+\mu^-$ branching ratios could distinguish a vector from a scalar state. Moreover, the ability to reliably reconstruct the LLP mass would provide an additional crucial property of the new particle, while also providing an additional handle to reject SM LLP backgrounds. Detection and reconstruction of neutral hadrons, especially the $\pi^0$, may permit rejection of $K^0_L$ backgrounds from the $\pi^+\pi^-\pi^0$ final state.

Fig. 33: Reconstructed LLP mass for different $B \to X_s \varphi$ benchmarks with $c\tau = 10$ m, for 150 ps (left), 100 ps (middle), and 50 ps (right) timing resolution [8].

Fig. 34: Projected sensitivity of CODEX-b to gluon-coupled ALPs with a calorimeter element (blue, shaded) compared to the baseline, i.e. tracking-only, detector (solid red). Also shown are the gains in reach coming from the ability to detect highly boosted LLPs (blue, dot-dashed) and from the ability to reconstruct photon final states (purple, dotted) separately. The gray dashed line corresponds to the baseline detector reach in which highly boosted ALPs are discarded instead of being considered with 50 events of background.

Calorimeter elements may also improve the characterization of highly-boosted LLPs, especially when their decay products become so collimated that it is difficult to separate them from single tracks, given the finite track-to-track separation capabilities of the detector. Concretely, the track-to-track separation equals $2 \times \mathrm{pitch}/\sqrt{12}$, for which we take a 1 cm pitch as a benchmark, similar to the expected performance of the ATLAS Phase II RPCs. This can, however, be lowered if needed.
With a tracker-only option, merged tracks will be reconstructed as a single 'appearing' track in the tracking volume. However, highly boosted LLPs, such as the ALPs of Sect. 2.3.3, may have hadronic final states, and such hadrons would develop energetic showers inside the calorimeter. This renders a signature strikingly different from e.g. low-energy neutron scattering. (Assuming each of the ∼$10^6$ background muons passes through at least six tracking layers that are each 95% efficient, the expected number of appearing tracks induced by the muon background is ∼$10^{-2}$ per 300 fb$^{-1}$.) Energy measurement and PID may also help in the rejection of backgrounds, because they permit comparison of signal and background differential rates (in kinetic energy), rather than just the overall fluxes. Further, a calorimeter element placed on the front face of the detector, i.e. closest to the IP, may detect and absorb the flux of incoming neutrons (see Sect. 3) that might otherwise scatter and produce signal-like tracks: as seen in Table 3, single-track production from neutron scattering inside the detector is non-negligible, with ∼50 such events expected.

Fig. 35: Projected sensitivity of CODEX-b to Dirac heavy neutral leptons for $U_{\tau N} \gg U_{eN}, U_{\mu N}$, for a tracking detector only (solid) compared to an optimistic case (dashed) with a calorimeter capable of reconstructing the $N \to \nu\pi^0$ final state, assuming the background differential flux of single $\pi^0$'s is negligible or reducible.
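Both the track-to-track separation and the appearing-track estimate above can be reproduced with one-line arithmetic (our illustration of the text's round inputs):

```python
# Track-to-track separation for a 1 cm strip pitch, 2 x pitch / sqrt(12):
pitch_cm = 1.0
separation_cm = 2.0 * pitch_cm / (12 ** 0.5)
print(f"separation ~ {separation_cm:.2f} cm")   # ~0.58 cm

# Appearing-track background from muons: a muon fakes an 'appearing'
# track only if all six 95%-efficient layers it first crosses miss it.
n_muons = 1e6
p_miss_all = 0.05 ** 6
print(f"appearing tracks ~ {n_muons * p_miss_all:.3f} per 300 fb^-1")  # ~0.016
```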
Diphoton final states may dominate the branching ratio of (pseudo)scalar LLPs, in particular for ALPs in the sub-GeV mass regime (see Sect. 2.3.3), such that the reach may be greatly improved by the ability to detect photons. Merged photons depositing in the calorimeter will appear as a single highly energetic photon, to be compared with the relevant background photon fluxes shown in Fig. 23. Above 1 GeV, these fluxes are 10^−1 per 300 fb^−1. In Fig. 34 we show the gluon-coupled ALP reach for a CODEX-b setup, assuming that the baseline design can be extended to detect diphoton final states efficiently (shaded area). For comparison we also show the tracker-only baseline design (solid red line) and, separately, the gains obtained by detecting the highly boosted ALPs with zero background (blue dot-dashed line) or by detecting the purely photonic decay modes (purple dotted line). For illustration we also include the baseline reach if one completely discards the highly boosted ALPs (gray dashed line). The CODEX-b reach attainable with a calorimeter addition is striking, at both high and low ALP masses.

Fig. 36 Distribution of different general event variables at LHCb in events containing a h → A′(μ+μ−)A′(μ+μ−) decay (labelled as signal), soft and hard QCD processes. The processes were all generated with Pythia. The variables displayed are all computed using particles reconstructible at LHCb. They correspond, from left to right and from top to bottom, to: the total number of particles in the event, the maximum pT of all these particles, their average pT, the pT of the vectorial sum of all particles, and the sum of their pT
Even more striking improvements are attainable in models where the ALP decays predominantly to photon pairs (as in the Physics Beyond Colliders benchmark BC9), since the leading final state detectable with the baseline detector, the Dalitz mode a → e+e−γ, has a branching ratio of only O(10^−2).
Similarly, detection and reconstruction of neutral hadrons such as the π0 may be important for capturing dominant branching ratios in certain heavy neutral lepton mass regimes. For example, in the case that the HNL couples predominantly to the τ, with mN < mτ, the dominant decay mode is N → ντ π0. In Fig. 35 we show the improvement in reach assuming this final state is reconstructible, compared to requiring at least two tracks. In practice, measurement of this final state requires an understanding of the background differential flux of single π0's. While the nominal flux of π0's is vanishingly small, some might be produced from, e.g., neutron scattering on air.

Tagging of events at LHCb
The LLPs detected at CODEX-b will be produced in pp collisions at Interaction Point 8. These events may therefore leave information detectable at LHCb that can further help CODEX-b distinguish interesting phenomena from background. In this section we briefly review how, and how well, information from LHCb could be used to tag events at CODEX-b.
To study this tagging, we use as a benchmark a Higgs boson decaying to a pair of long-lived dark photons (see Sect. 2.3.1), each of which in turn decays to a pair of muons: h → A′(μ+μ−)A′(μ+μ−). The A′ were assumed to have a mass of 1 GeV and a proper lifetime of cτ = 1 m. The events were generated using Pythia [180] at a center-of-mass energy of √s = 14 TeV.
The first aspect studied was the probability of detecting an LLP decay at both CODEX-b and LHCb. For events in which one A′ falls in the CODEX-b angular acceptance, ∼ 18% have the other in the LHCb acceptance. However, for the lifetimes of greatest interest for the CODEX-b reach, hardly any of these produce a detectable decay at LHCb: for the cτ = 1 m benchmark, the probability for such a decay is only ∼ 10^−5. In more complicated hidden sectors, however, a high multiplicity of LLPs may be produced in the same event, so that one could be detected at LHCb and another at CODEX-b.
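The steep lifetime suppression quoted above follows from the standard exponential decay law. A minimal sketch of the decay probability within a fiducial region (the distances and boost below are illustrative assumptions, not the values used in the simulation):

```python
import math

def p_decay_in_volume(l1_m: float, l2_m: float,
                      beta_gamma: float, ctau_m: float) -> float:
    """Probability that an LLP with boost beta*gamma and proper decay
    length ctau decays between distances l1 and l2 from the interaction
    point, via the exponential decay law."""
    lam = beta_gamma * ctau_m  # lab-frame decay length
    return math.exp(-l1_m / lam) - math.exp(-l2_m / lam)

# Illustrative numbers only: a dark photon with beta*gamma ~ 50 and
# ctau = 1 m, decaying within a hypothetical 1-10 m region from the IP.
print(f"{p_decay_in_volume(1.0, 10.0, 50.0, 1.0):.3f}")
```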
The second possibility studied was how the LLP production mechanism affects the underlying event seen at LHCb. This is especially relevant whenever the LLP is produced through the Higgs portal, as in our benchmark example. We performed a general comparison of how events look at LHCb at truth level, with no reconstruction involved. To compare against the signal, we generated soft QCD (minimum bias) and hard QCD (bb̄) samples with Pythia, under the same conditions as the signal. For this comparison, we defined reconstructible particles at LHCb as those stable, charged particles produced within the LHCb acceptance. In Fig. 36 we show the distributions of several global variables of interest. While the figure shows a certain degree of discrimination between the different processes, more detailed studies will be needed. In particular, gluon-gluon fusion was chosen as the Higgs production mode for this study. Production via vector boson fusion, though having a smaller cross section, might provide more power to tag CODEX-b events at LHCb, by searching for a hard jet in the LHCb acceptance.
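The global variables shown in Fig. 36 can be computed from the set of reconstructible particles as follows (a schematic sketch; the particle list and its transverse momentum components are placeholders):

```python
import math

def event_variables(momenta):
    """Compute the five global event variables of Fig. 36 from a list of
    transverse momentum components (px, py) in GeV, one entry per
    reconstructible particle (stable, charged, within the LHCb acceptance)."""
    pts = [math.hypot(px, py) for px, py in momenta]
    sum_px = sum(px for px, _ in momenta)
    sum_py = sum(py for _, py in momenta)
    return {
        "n_particles": len(pts),                      # total particle count
        "max_pt": max(pts),                           # maximum pT
        "mean_pt": sum(pts) / len(pts),               # average pT
        "vector_sum_pt": math.hypot(sum_px, sum_py),  # pT of the vectorial sum
        "scalar_sum_pt": sum(pts),                    # scalar sum of pT
    }

# Toy event with three particles
print(event_variables([(1.0, 0.0), (0.0, 2.0), (-1.0, -1.0)]))
```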

Outlook
The immediate priority for CODEX-b is to finalize the design of the CODEX-β demonstrator and secure approval for its installation. A Letter of Intent for the full CODEX-b detector will follow this Expression of Interest in the near term, including further developments of the detector design concept, although results from the CODEX-β demonstrator are expected to inform the final design choices for the detector. In particular, we intend to investigate in detail a realistic option for incorporating calorimetry in the CODEX-b design.