DarkBit: A GAMBIT module for computing dark matter observables and likelihoods

We introduce DarkBit, an advanced software code for computing dark matter constraints on various extensions to the Standard Model of particle physics, comprising both new native code and interfaces to external packages. This release includes a dedicated signal yield calculator for gamma-ray observations, which significantly extends current tools by implementing a cascade decay Monte Carlo, as well as a dedicated likelihood calculator for current and future experiments (gamLike). This provides a general solution for studying complex particle physics models that predict dark matter annihilation to a multitude of final states. We also supply a direct detection package that models a large range of direct detection experiments (DDCalc), and provides the corresponding likelihoods for arbitrary combinations of spin-independent and spin-dependent scattering processes. Finally, we provide custom relic density routines along with interfaces to DarkSUSY, micrOMEGAs, and the neutrino telescope likelihood package nulike. DarkBit is written in the framework of the Global And Modular Beyond the Standard Model Inference Tool (GAMBIT), providing seamless integration into a comprehensive statistical fitting framework that allows users to explore new models with both particle and astrophysics constraints, and a consistent treatment of systematic uncertainties. In this paper we describe its main functionality, provide a guide to getting started quickly, and show illustrative examples for results obtained with DarkBit (both as a standalone tool and as a GAMBIT module). This includes a quantitative comparison between two of the main dark matter codes (DarkSUSY and micrOMEGAs), and application of DarkBit's advanced direct and indirect detection routines to a simple effective dark matter model.


Introduction
The identity of dark matter (DM) remains one of the most vexing puzzles of fundamental physics. After decades of intense effort, its cosmological abundance has been determined to a precision of better than one percent [1], but so far no experiment has reported clear evidence of its non-gravitational interactions. Despite these null results, the leading hypothesis remains that DM consists of a new type of elementary particle [2]. Out of the many possibilities [3][4][5], weakly interacting massive particles (WIMPs) are often argued to be particularly appealing candidates, both because they almost inevitably appear in well-motivated extensions of the Standard Model, such as supersymmetry [6] or universal extra dimensions [7], and because their thermal production in the early Universe naturally results in a relic abundance in broad agreement with the observed DM density today. Traditionally, the particle identity of DM has been tested with three different strategies: i) by trying to produce it directly in accelerator searches, ii) by performing direct searches for nuclear recoils caused by collisions with passing DM particles in large underground detectors, or iii) by indirect searches for the debris from DM annihilation or decay in the Sun or outer space. Although these approaches are particularly suitable for WIMPs, several other candidates can be probed by some of these methods as well. More recently, another approach has emerged that is particularly relevant for DM scenarios beyond the standard WIMP case, e.g. self-interacting DM, namely iv) the use of astrophysical probes related to the distribution of matter on galactic and cosmological scales [8,9]. For each of these methods, an immense amount of experimental data is expected during the next decade(s). In order to extract the maximal amount of information and narrow down the properties of a given DM candidate, or exclude it, it is mandatory to combine these measurements in a statistically rigorous way.
With this article we introduce DarkBit, a new numerical tool for tackling this task. DarkBit calculates DM observables and likelihoods in a comprehensive and flexible way, making them available for both phenomenological DM studies and broader Beyond-the-Standard Model (BSM) global fits. In particular, the first release of DarkBit contains up-to-date limits and likelihoods for indirect DM searches with gamma rays and neutrinos, for the spin-dependent and spin-independent cross-sections relevant to direct detection, and for the relic density. In order to increase the efficiency of observable and likelihood calculations by reusing as much code as possible, DarkBit relies on highly flexible data structures that can easily accommodate the specific needs of most particle models. Examples include the Process Catalogue (Sec. 6.3.1), which contains all relevant particles and interaction rates, a general halo model, and a fully model-independent framework to calculate the relic density.
DarkBit is designed as a module within the GAMBIT framework [10][11][12][13][14]. Where we introduce key terms with specific meanings in the context of GAMBIT, we highlight and link them to the glossary of standard GAMBIT terms at the end of this paper. GAMBIT defines a series of physics modules, each consisting of a collection of module functions. Each module function is able to compute an observable, a likelihood or some intermediate quantity required in the calculation of other observables or likelihoods. At runtime, the user informs GAMBIT of the observables they want to compute, the theoretical model and parameter ranges over which those observables should be calculated, and how they would like GAMBIT to sample the parameter space. GAMBIT then identifies the module functions necessary for delivering the requested observables and arranges them into a dependency tree, describing which module functions must be run, and in what order. It then chooses parameter combinations, passes them to the module functions, and outputs the resulting samples. Module functions can call on additional functions from external backend codes, which GAMBIT also sorts into the dependency tree. Rules that dictate how different module functions, models and backends may rely on each other can be defined in either the source code or the input file, allowing the user to force the resulting dependency tree to obey arbitrarily detailed physical conditions and relations.

Figure 1: Based on the model parameters, the Process Catalogue is initialised. This contains the relevant particle physics processes needed to infer dark matter annihilation spectra. The effective annihilation rate relevant for relic density calculations can (in simple cases) be inferred from the Process Catalogue, or it can be set directly. WIMP-nucleon couplings are also set depending on the model. All information is channelled into various likelihood routines. The two-letter insets indicate which backend codes can be used: DarkSUSY (DS), micrOMEGAs (MO), gamLike (GL), nulike (NL) and DDCalc (DC). Currently the neutrino spectra can be calculated in DarkBit using only DarkSUSY, while the gamma-ray spectra can be taken from either DarkSUSY or micrOMEGAs.
One of the main design features of DarkBit, as compared to other DM codes, is its extremely modular structure. This allows users to interface essentially any external code in a straightforward way, and to extract any given functionality for use within DarkBit. Examples of such backends used in the first release include DarkSUSY [15] and micrOMEGAs [16] (for various direct and indirect rates, as well as Boltzmann solvers), DDCalc (introduced in Sec. 5.2; for direct detection rates and likelihoods), gamLike (introduced in Sec. 6.2; for gamma-ray likelihoods) and nulike [17] (for neutrino likelihoods). In situations where several backends provide the same functionality, the user can easily switch between them, or instead use powerful DarkBit-internal routines, like an on-the-fly cascade decay spectrum generator (which we have implemented from scratch). On the technical side, DarkBit makes use of dynamic C++ function objects to facilitate the manipulation and exchange of real-valued functions between different backends and GAMBIT; these are implemented in the daFunk library (Appendix B).
In standalone mode, DarkBit can be used for directly computing observables and likelihoods, for any combination of parameter values in some underlying particle model. When employed as a GAMBIT module, it can be used to do this over an entire parameter space of a chosen BSM model, providing various independent likelihoods for automatic combination with those from other GAMBIT modules in a statistically consistent way. This usage mode makes it possible to not only simultaneously include all possible constraints from different observation channels, but also to incorporate the full uncertainties arising from poorly-constrained astrophysical or nuclear parameters in the scan, by treating them as nuisance parameters. Typical examples include nuclear form factors and halo model uncertainties.
This article is organised as follows. In Section 2 we give an overview of the physics, observables and likelihoods contained in DarkBit, along with the basic module structure. Section 3 details the DM halo models that we employ in DarkBit. In Sections 4, 5 and 6 we go through DarkBit's abilities in relic density calculations, direct detection and indirect detection, respectively. In particular, Section 5.2 introduces the new direct detection likelihood code DDCalc, and Section 6.2 introduces the new gamma-ray indirect detection likelihood code gamLike. We show validation tests and illustrative examples of typical DarkBit usage in Section 7. We continue with an outlook on planned code expansions in Section 8, and conclude in Section 9. In Appendix A we provide a quick-start guide to installing DarkBit and running a simple test example. Appendix B introduces the new dynamic functions library daFunk. Appendix C provides a glossary of the GAMBIT terms used in this paper.
The source code for DarkBit is available from gambit.hepforge.org, and is released under the terms of the standard 3-clause BSD license. 1

Module overview
GAMBIT is built around the ideas of modularity and re-usability. All calculations required to get from experimental data and model parameters to likelihood values and observables are performed in separate GAM-BIT module functions. Each module function is able to calculate exactly one quantity, its capability. The type of this capability can be just about anything, from a simple integer to any complex C++ structure required to carry the result of the calculation. Examples of capabilities are: model parameters, particle spectra, experimental data, and the values of likelihood functions. Most module functions will also have dependencies on capabilities that were calculated by other module functions. These dependencies will be automatically resolved at run time, based on choices of the user about what particle physics model to analyse, what observables to include etc. For details, we refer the reader to the main GAMBIT paper [10].
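As a toy illustration of this mechanism (not GAMBIT's actual C++ implementation), resolving the dependencies between module functions amounts to a topological sort of the capability graph. The capability names below are examples taken from the relic density chain discussed later in this paper:

```python
# Toy dependency resolution: each capability lists the capabilities it
# depends on; a valid evaluation order visits dependencies first.
from graphlib import TopologicalSorter  # Python >= 3.9

module_functions = {
    "RD_oh2": ["RD_spectrum_ordered", "RD_eff_annrate"],
    "RD_spectrum_ordered": ["RD_spectrum"],
    "RD_spectrum": [],
    "RD_eff_annrate": ["TH_ProcessCatalogue"],
    "TH_ProcessCatalogue": [],
}

def evaluation_order(graph):
    """Return capabilities in an order where dependencies come first."""
    return list(TopologicalSorter(graph).static_order())
```

In GAMBIT the real dependency tree also accounts for types, backend requirements, and user-supplied rules, but the scheduling idea is the same.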
DarkBit is one of the central GAMBIT modules, and essentially a collection of module functions that compute dark matter observables and likelihoods. Some of the basic elements of DarkBit and their relations are sketched in Fig. 1 (the full dependency tree with all module functions is far more complex than this sketch). Based on the model parameters of a particular point in the scan, DarkBit sets up several central computational structures (like the Process Catalogue, the effective annihilation rate, and WIMP-nucleon couplings), which are used for relic density calculation (see Sec. 4), the calculation of direct detection constraints (Sec. 5), and the calculation of gamma-ray and neutrino yields (Sec. 6).
In many cases, and as indicated in Fig. 1, the calculations performed by DarkBit module functions build on functionality of external independent codes like DarkSUSY or micrOMEGAs. These codes provide a lot of functionality that can often be used beyond their original scope (examples are Boltzmann solvers, tabulated particle yields, routines to calculate J-values, etc.). From the perspective of GAMBIT, these codes are backends. Technically, they are coupled to GAMBIT by compiling them as shared libraries, which are loaded by GAMBIT at runtime. The interface to these backends is provided by convenient frontends, which specify the form and subset of functionality of the backend that is accessible by GAMBIT. For details we again refer to Ref. [10].

1 http://opensource.org/licenses/BSD-3-Clause. Note that fjcore [18] and some outputs of FlexibleSUSY [19] (incorporating routines from SOFTSUSY [20]) are also shipped with GAMBIT 1.0 and 1.1. These code snippets are distributed under the GNU General Public License (GPL; http://opensource.org/licenses/GPL-3.0), with the special exception, granted to GAMBIT by the authors, that they do not require the rest of GAMBIT to inherit the GPL.
Although the main purpose of DarkBit is to provide dark matter-related functionality for global scans with GAMBIT, it can also be used as a standalone code. In fact, most of the functionality of DarkBit could also be used outside of scans, e.g. implemented in a command line tool, if so desired. We will show a few examples for this below in Sec. 7.2.

Background
All of the direct and indirect detection observables that can be calculated in DarkBit depend strongly on the spatial distribution of DM particles, and often on their velocities as well. Predicted event rates in direct detection experiments and the rate of DM annihilation in the Sun depend on the local density of dark matter in the Milky Way. Indirect detection signals from annihilations to gamma rays and neutrinos (and charged cosmic rays) depend on the spatial DM distribution in the source being observed. In order to ensure consistency in the calculations of these observables, GAMBIT contains halo models that describe the density and velocity distribution of DM in the Milky Way and other astronomical objects.

Density profiles
Multiple forms of halo density profiles exist in the literature. Early analytic calculations of infalling dark matter onto collapsed density perturbations showed that the dark matter density should approximately scale like r^-2 [21], the same behaviour that one would expect for a halo with a constant velocity dispersion (or, equivalently, constant temperature). This led to the modelling of the dark matter distribution by the modified isothermal profile

ρ(r) = 2ρ_s / [1 + (r/r_s)^2] ,   (1)

where r_s is a scale radius, and ρ_s is the density at r = r_s. Subsequent N-body simulations of the gravitational interactions of dark matter led Navarro, Frenk and White (NFW) to conclude that the structure of dark matter halos could be described by a cusped profile of the form [22]

ρ(r) = 4ρ_s / [(r/r_s)(1 + r/r_s)^2] .   (2)

They showed that the dark matter density profile appears to be roughly universal, meaning that for halos of a range of sizes, from dwarf galaxies to galaxy clusters, the form of the profile is the same. Other groups [23,24] found that better fits to simulation data could be obtained by slightly modifying the NFW profile. To take these modifications into account, it is common to write the halo profile in the following generalised form:

ρ(r) = 2^((β-γ)/α) ρ_s / [(r/r_s)^γ (1 + (r/r_s)^α)^((β-γ)/α)] .   (3)

Here, γ describes the inner slope of the profile, β the outer slope, and α the shape in the transition region around r ∼ r_s. More recently, it has been pointed out that a better fit to N-body simulations is given by what is known as an Einasto profile [25], named after Einasto's use of the profile to model the mass distribution of galaxies [26]. In this model the logarithmic slope varies continuously with radius, leading to a density profile of the form

ρ(r) = ρ_s exp( -(2/α) [(r/r_s)^α - 1] ) .   (4)
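For concreteness, the profiles above are straightforward to evaluate numerically. The sketch below (our own illustrative code, not DarkBit source) implements Eqs. 1-4 with the normalisation ρ(r_s) = ρ_s used throughout this section:

```python
import math

def rho_isothermal(r, rho_s, r_s):
    # Modified isothermal profile (Eq. 1); rho(r_s) = rho_s.
    return 2.0 * rho_s / (1.0 + (r / r_s) ** 2)

def rho_gNFW(r, rho_s, r_s, alpha, beta, gamma):
    # Generalised NFW profile (Eq. 3); gamma = inner slope, beta = outer
    # slope, alpha = shape of the transition region around r ~ r_s.
    x = r / r_s
    k = (beta - gamma) / alpha
    return 2.0 ** k * rho_s / (x ** gamma * (1.0 + x ** alpha) ** k)

def rho_einasto(r, rho_s, r_s, alpha):
    # Einasto profile (Eq. 4); the logarithmic slope runs with radius.
    return rho_s * math.exp(-(2.0 / alpha) * ((r / r_s) ** alpha - 1.0))
```

With (α, β, γ) = (2, 2, 0) the generalised form reproduces the isothermal profile, and with (1, 3, 1) the NFW profile, as noted in the discussion of the Halo_gNFW model below.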

Velocity distribution
The velocities v of dark matter particles in a halo are usually taken to follow the Maxwell-Boltzmann distribution for an ideal gas at constant temperature. This distribution, truncated to reflect the fact that any particle with a speed beyond the escape velocity v_esc will leave the halo, takes the form

f̃(v) = exp(-|v|^2/v_0^2) / (N_esc π^(3/2) v_0^3) ,   (5)

where v_0 is the most probable speed. Note that the above formula is only valid when |v| < v_esc; we take the probability of a speed higher than v_esc to be zero. The normalisation that corrects for the truncation, N_esc, is given by

N_esc = erf(v_esc/v_0) - (2/√π)(v_esc/v_0) exp(-v_esc^2/v_0^2) .   (6)

Our Milky Way galaxy rotates in a dark matter halo that is essentially stationary. The velocity of the Earth in the halo is given by the sum

v_obs(t) = v_LSR + v_⊙,pec + V_⊕(t) .   (7)

Here v_LSR = (0, v_rot, 0) is the motion of the Local Standard of Rest in Galactic coordinates, v_⊙,pec = (11, 12, 7) km s^-1 is the well-known peculiar velocity of the Sun [27], and V_⊕(t) is the velocity of the Earth relative to the Sun. The magnitude of V_⊕ is well measured at 29.78 km s^-1 [28], and its changing direction is expected to give rise to an annual modulation of scattering rates in direct detection experiments [29]. The distribution of velocities u of dark matter particles in the Earth's frame is then obtained by a Galilean boost,

f(u) = f̃(v = u + v_obs(t)) .   (8)

Assuming that the density profile of the halo surrounding the Milky Way is smooth and spherical like those discussed above, v_0 is approximately the same as the rotation speed v_rot of the galactic disk (for an isothermal density profile it is exactly the same, whereas for the NFW profile of Eq. 2 the two values can differ by over 10% [30]).
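A minimal numerical sketch of the truncated distribution (our own illustrative code; speeds in km s^-1) checks that the N_esc factor restores unit normalisation:

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

def N_esc(v0, vesc):
    # Truncation correction of Eq. 6.
    z = vesc / v0
    return erf(z) - 2.0 * z * np.exp(-z * z) / np.sqrt(np.pi)

def f_speed(v, v0=220.0, vesc=550.0):
    # Probability density of the DM speed |v| in the halo rest frame,
    # i.e. Eq. 5 integrated over directions (factor 4 pi v^2).
    if v >= vesc:
        return 0.0
    return 4.0 * np.pi * v * v * np.exp(-(v / v0) ** 2) / (
        N_esc(v0, vesc) * np.pi ** 1.5 * v0 ** 3)

# The density integrates to unity over 0 <= v < vesc:
norm, _ = quad(f_speed, 0.0, 550.0)
```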

Halo models and associated capabilities
In GAMBIT, the radial distribution ρ(r) of dark matter in the Milky Way, the local density ρ 0 , the distance from the Sun to the Galactic centre r sun , as well as the local velocity distribution f (u) are simultaneously described by a given halo model. In the first release, we provide two main halo models: Halo_gNFW and Halo_Einasto. The former corresponds to the generalised NFW profile with parameters ρ s , r s , α, β and γ as defined in Eq. 3, together with the Maxwell-Boltzmann velocity distribution given by Eqs. 5-8, specified by the model parameters v 0 , v rot and v esc . Lastly, the model contains r sun and ρ 0 as additional free parameters. Analogously, the Halo_Einasto model describes the density profile given by Eq. 4, with free parameters ρ s , r s and α, and is otherwise identical to the Halo_gNFW model. Note that with appropriate choices of α, β and γ, the density profile in the Halo_gNFW model is equivalent to the isothermal profile of Eq. 1 (α = 2, β = 2, γ = 0) or the NFW profile of Eq. 2 (α = 1, β = 3, γ = 1).
In these two halo models, the density profile ρ(r) relevant for calculating the gamma-ray flux induced by dark matter annihilations is completely decoupled from the local properties of dark matter (in particular from the local density ρ 0 ), which set the event rate in direct detection experiments and neutrino telescopes. However, it is also possible to directly link the density profile to the local density by enforcing the relation ρ(r sun ) ≡ ρ 0 . For the case of the generalised NFW profile, this is realised by two child models of Halo_gNFW, denoted Halo_gNFW_rho0 and Halo_gNFW_rhos. When employing the former model, the user need only specify the value of the local density ρ 0 , which is then internally converted to the corresponding value of the scale density ρ s using Eq. 3. Conversely, in the latter model one specifies ρ s , and ρ 0 is determined by GAMBIT. A completely analogous choice is possible for the Einasto profile via the halo models Halo_Einasto_rho0 and Halo_Einasto_rhos.
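The conversion performed by the *_rho0 child models amounts to inverting the density profile at r = r_sun. A sketch for the generalised NFW case (our own illustrative code and parameter values, not DarkBit source):

```python
def rho_gNFW(r, rho_s, r_s, alpha, beta, gamma):
    # Generalised NFW profile (Eq. 3), normalised so rho(r_s) = rho_s.
    x = r / r_s
    k = (beta - gamma) / alpha
    return 2.0 ** k * rho_s / (x ** gamma * (1.0 + x ** alpha) ** k)

def rho_s_from_rho0(rho0, r_sun, r_s, alpha, beta, gamma):
    # Halo_gNFW_rho0-style conversion: choose rho_s such that the
    # profile satisfies rho(r_sun) = rho0.
    x = r_sun / r_s
    k = (beta - gamma) / alpha
    return rho0 * x ** gamma * (1.0 + x ** alpha) ** k / 2.0 ** k

# Round trip: an NFW halo with rho(8.2 kpc) = 0.4 GeV/cm^3
rho_s = rho_s_from_rho0(0.4, 8.2, 20.0, 1.0, 3.0, 1.0)
```

The Halo_gNFW_rhos model is simply the other direction of the same relation: specify rho_s and evaluate the profile at r_sun to obtain rho0.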
In order to communicate the astrophysical properties of the Galactic dark matter population to the module and backend functions relevant for direct and indirect searches, GAMBIT employs two central capabilities: (1) GalacticHalo, which is of type GalacticHaloProperties, a data structure containing (i) a daFunk::Funk object (Appendix B), which describes the radial density profile as a function of the radius "r", and (ii) the distance r sun from the Sun to the Galactic centre. This capability is required in particular by the gamLike backend for the computation of gamma-ray fluxes from dark matter annihilations within the Milky Way. The other capability describing the dark matter halo is (2) LocalHalo, which is of type LocalMaxwellianHalo. This object is simply a container for the parameters relevant to direct detection and capture in the Sun, i.e. the local density ρ 0 as well as the velocity parameters v 0 , v rot and v esc .
Depending on the halo model in use, the capability GalacticHalo can be provided by the module functions GalacticHalo_gNFW or GalacticHalo_Einasto; for all halo models, the LocalHalo capability is obtained through the module function ExtractLocalMaxwellianHalo (see also Table A3). The rest of the GAMBIT code is designed such that only the module functions providing the capabilities GalacticHalo or LocalHalo explicitly depend on a halo model, while all other module and backend functions requiring access to the astrophysical properties of dark matter instead depend on these capabilities. This setup allows GAMBIT to be straightforwardly extended to incorporate new halo models, in particular to density profiles different from the generalised NFW and Einasto parameterisations.

Likelihoods
GAMBIT provides several likelihood functions for the (typically quite large) uncertainties in the parameters included in the halo models. These are summarised in Tables 1 and A3. For the local dark matter density ρ 0 , we implement the likelihood as a log-normal function,

L(ρ_0) = 1/(√(2π) σ' ρ_0) exp( -ln^2(ρ_0/ρ̄_0) / (2σ'^2) ) ,   (9)

where σ' = ln(1 + σ_ρ0/ρ̄_0). The methods that have been used to determine ρ 0 can be roughly categorised into two approaches (see [31] for a review). Local measures (e.g. [32]) use the kinematics of nearby stars to find ρ 0 , whereas global measures (e.g. [33][34][35][36]) extrapolate the local density from the galactic rotation curve. The latter method often leads to results with smaller errors than the former, but these are strongly dependent on assumptions about the shape of the halo [37]. There appears to be a growing consensus that ρ 0 ≈ 0.4 GeV/cm 3 , so by default we set ρ̄_0 to this value. We take σ_ρ0 to be 0.15 GeV/cm 3 , to represent the range of determinations of the local density in the literature (see e.g. [38]).
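In code, this likelihood takes the following form (a sketch of the log-normal of Eq. 9 with the default central value and error quoted above; σ' is the width in log-space):

```python
import math

def lnL_rho0(rho0, rho0_bar=0.4, sigma_rho0=0.15):
    # Log-normal likelihood for the local DM density (Eq. 9), with
    # log-space width sigma' = ln(1 + sigma_rho0 / rho0_bar).
    sigma = math.log(1.0 + sigma_rho0 / rho0_bar)
    return (-math.log(rho0 * sigma * math.sqrt(2.0 * math.pi))
            - math.log(rho0 / rho0_bar) ** 2 / (2.0 * sigma ** 2))
```

The log-normal form keeps the likelihood well-defined for a strictly positive density and is symmetric in ln(ρ_0/ρ̄_0).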
We also provide likelihood functions for v 0 , v rot and v esc . For these parameters we assume that the likelihood follows a standard Gaussian distribution,

L(v) = 1/(√(2π) σ_v) exp( -(v - v̄)^2 / (2σ_v^2) ) .   (10)

For the disk rotational velocity, by default the central value and error of the distribution are set to 235 ± 20 km s −1 , based on measurements of Galactic masers [39,40]. To take into account possible discrepancies between v 0 and v rot due to deviations of the density profile from the simple isothermal model, we implement an independent Gaussian likelihood for v 0 with the same parameters as for v rot . Finally, for the escape velocity we use v esc = 550 ± 35 km s −1 , based on measurements of high-velocity stars in the RAVE survey [41]. The likelihood functions discussed in this section are listed in Table 1, along with their corresponding capabilities. The central values and errors for these likelihoods can be adjusted by setting the YAML options param_obs and param_obserr respectively, where param is the name of the parameter (e.g. to override the default likelihood for ρ 0 one would set rho0_obs and rho0_obserr).
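The corresponding velocity likelihoods are plain Gaussians with the default central values quoted above (our own sketch; overriding param_obs/param_obserr simply changes the two arguments):

```python
import math

def lnL_gauss(v, v_bar, sigma_v):
    # Gaussian likelihood of Eq. 10, used for v0, vrot and vesc.
    return (-0.5 * math.log(2.0 * math.pi * sigma_v ** 2)
            - 0.5 * ((v - v_bar) / sigma_v) ** 2)

lnL_vrot = lambda v: lnL_gauss(v, 235.0, 20.0)   # also used for v0
lnL_vesc = lambda v: lnL_gauss(v, 550.0, 35.0)
```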
Currently, no specific likelihoods are included to constrain the Galactic halo profile parameters. This can, however, easily be done by specifying appropriate parameter ranges and priors in the GAMBIT initialisation file. For typical standard values we refer the reader to Ref. [42]; for recent kinematical constraints we refer to Ref. [36].

Background
To calculate the relic density we need to solve the Boltzmann equation for the number density n of dark matter particles, which in general can be written as [43]

dn/dt = -3Hn - ⟨σ_eff v⟩ (n^2 - n_eq^2) ,   (11)

where n_eq denotes the equilibrium number density and H the Hubble rate. Furthermore, the thermal average of the effective annihilation cross section is defined as

⟨σ_eff v⟩ = ∫_0^∞ dp_eff p_eff^2 W_eff(p_eff) K_1(√s/T) / ( m_1^4 T [ Σ_i (g_i/g_1)(m_i^2/m_1^2) K_2(m_i/T) ]^2 ) ,   (12)

where p_eff = (1/2)√(s - 4m_1^2) is the effective momentum in the centre-of-momentum frame of the lightest DM species (assumed to be particle 1). K_1 and K_2 are modified Bessel functions of the second kind, g_i is the number of internal degrees of freedom of co-annihilating particle i, m_i is its mass, T is the temperature, and s is the centre-of-momentum energy squared. Finally, W_eff is the effective annihilation rate, given by

W_eff(p_eff) = Σ_ij (p_ij/p_11)(g_i g_j/g_1^2) W_ij .   (13)

Here,

p_ij = [ (s - (m_i + m_j)^2)(s - (m_i - m_j)^2) ]^(1/2) / (2√s)   (14)

is the effective momentum for ij annihilation, with p_11 = p_eff, and W_ij is related to the annihilation cross section by

W_ij = 4 E_i E_j σ_ij v_ij ,   (15)

where E_i is the energy of particle i.
The main particle physics-specific quantity, to be provided by the respective particle model, is thus the invariant rate W eff (p eff ). While its computation can be computationally expensive for complex models, it is independent of temperature; it is thus usually advantageous to tabulate this function (as done in e.g. DarkSUSY).
The integration of the Boltzmann equation, Eq. 11, can then proceed in a model-independent way, and the final relic density of the DM particle is given by

Ω_χ h^2 = (m_χ n_0 / ρ_crit) h^2 .   (16)

Here, n_0 is the asymptotic value of n(t → ∞) as expected today, and ρ_crit = 3H_0^2/8πG. Note that there are two equivalent ways of dealing with a situation where there is more than one DM particle with the same mass m_χ, as for example for Dirac particles, where there are both DM particles and antiparticles. The first option is to treat the DM particles and antiparticles as separate species; the relic density given in Eq. 16 then refers only to the density of one of the species. Alternatively, all DM particles can be treated as a single effective species with a correspondingly larger value of g_1 in the definition of W_eff in Eq. 13; for the case of Dirac DM, e.g., one would have to replace g_1 → 2g_1. In this case, the expression in Eq. 16 refers to the total DM density.
Numerically, the integration of the Boltzmann equation is simplified by changing variables from n and t to the dimensionless quantities x ≡ m_χ /T and Y ≡ n/s, where s now denotes the entropy density of the heat bath [44]. In DarkSUSY, the thermal average in Eq. 12 is performed with an adaptive Gaussian quadrature method, employing splines to interpolate between the tabulated points of W_eff and taking special care around the known locations of thresholds and resonances. The actual integration of the Boltzmann equation is then performed via an implicit trapezoidal method with adaptive stepsize; see [15] for further details.
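The change of variables and the qualitative behaviour of freeze-out can be illustrated with a minimal solver (our own sketch, not the DarkSUSY routine): it assumes a constant number of relativistic degrees of freedom g_* = 86.25 and a temperature-independent ⟨σv⟩, so the numbers are indicative only.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn

MPL = 1.22e19    # Planck mass [GeV]
GSTAR = 86.25    # relativistic d.o.f. (assumed constant)

def relic_density(m, sigmav, g=2):
    """Solve dY/dx = -(s <sv>/(H x)) (Y^2 - Yeq^2); return Omega h^2."""
    def Yeq(x):
        # Equilibrium yield n_eq/s for a particle with g internal d.o.f.
        return 45.0 * g * x * x * kn(2, x) / (4.0 * np.pi ** 4 * GSTAR)

    def rhs(x, Y):
        T = m / x
        s = 2.0 * np.pi ** 2 / 45.0 * GSTAR * T ** 3   # entropy density
        H = 1.66 * np.sqrt(GSTAR) * T * T / MPL        # Hubble rate
        return [-s * sigmav / (H * x) * (Y[0] ** 2 - Yeq(x) ** 2)]

    sol = solve_ivp(rhs, (1.0, 1000.0), [Yeq(1.0)],
                    method="Radau", rtol=1e-8, atol=1e-15)
    # Omega h^2 = m Y_inf s_0 / rho_crit ~= 2.755e8 (m/GeV) Y_inf
    return 2.755e8 * m * sol.y[0, -1]

# 100 GeV WIMP with <sigma v> = 3e-26 cm^3/s ~= 2.57e-9 GeV^-2
oh2 = relic_density(100.0, 2.57e-9)
```

This reproduces the familiar result that a thermal cross section of a few times 10^-26 cm^3 s^-1 yields Ωh^2 of order 0.1.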

Interfaces to DarkSUSY and micrOMEGAs
Here we discuss the general features of the GAMBIT interface to DarkSUSY and micrOMEGAs, two of the most important backends for DarkBit.
DarkSUSY [45] has been fully implemented into the modular framework of GAMBIT, with the calculation of all observables broken down into discrete parts that can be easily replaced with calculations from other backends. DarkSUSY is used by GAMBIT to obtain multiple theoretical quantities, including W_eff, the DM relic density (through a fully numerical solution of the Boltzmann equation), effective couplings between nucleons and WIMPs, the rate of dark matter capture in the Sun, and the spectra of gamma rays from DM annihilation.
If DarkSUSY is used as a backend for DarkBit, it is important that it be correctly initialised at each point in the scan. The most basic model-independent initialisation happens in the DarkSUSY backend initialisation function. However, in order to ensure that MSSM observables are also calculated correctly, DarkBit module functions that rely on the model-dependent capabilities of DarkSUSY have an auxiliary dependency on the capability DarkSUSY_PointInit (see Table A13). It is then the responsibility of the function that provides this capability to initialise DarkSUSY correctly. A separate capability DarkSUSY_PointInit_LocalHalo is provided to initialise the DM Halo model in DarkSUSY for those backend functions where it is necessary. The full set of relic density capabilities, functions and dependencies in DarkBit can be found in Tables A4 and A5.
MicrOMEGAs [46][47][48][49] can be used by DarkBit to obtain many of the same quantities as DarkSUSY, including the relic density, WIMP-nucleon effective couplings, and gamma-ray spectra. However, the interface of micrOMEGAs with the GAMBIT framework is currently more coarse-grained than the one for DarkSUSY. An illustrative example of this is the calculation of the relic density: in the case of DarkSUSY, GAMBIT calls different DarkSUSY functions for each part of the calculation, including the computation of W_eff and the solution of the Boltzmann equation, whereas with micrOMEGAs, the entire calculation is done by calling a single function.
As micrOMEGAs is not set up to take information from the DarkBit Process Catalogue, a dark matter model must be implemented in micrOMEGAs following the normal method for the code. This consists of writing a compatible set of CalcHEP [50] model files and then compiling micrOMEGAs with these files. All of the relevant objects, both model-specific and generic, are then combined into a shared library that is used by the micrOMEGAs frontend. As there are model-specific functions in the library, separate libraries are needed for each particle physics model. GAMBIT comes with two micrOMEGAs frontends: one for the MSSM [51,52] and one for the scalar singlet DM model. The latter is not included by default with micrOMEGAs, so we have included the CalcHEP files needed to implement it with the GAMBIT distribution.
The initialisation functions for these frontends, MicrOmegas_MSSM_3_6_9_2_init and MicrOmegas_SingletDM_3_6_9_2_init, both have the YAML options VZdecay and VWdecay. These control how micrOMEGAs treats annihilations to 3-body final states via virtual W and Z bosons. If these options are set to 0 these processes are ignored, while a value of 1 causes them to be included in the case of DM self-annihilations, and with a value of 2 they are taken into account for all coannihilation processes as well. To initialise the backend, MicrOmegas_SingletDM_3_6_9_2_init passes information about couplings, masses and decays directly to micrOMEGAs, while MicrOmegas_MSSM_3_6_9_2_init writes an SLHA1 file to disk, which is subsequently read by the lesHinput function of micrOMEGAs (this function is not compatible with the more general SLHA2 format). The latter initialisation function also has the YAML option internal_decays, which, when set to false, causes information from the GAMBIT decay table to be passed via the DECAY blocks of the SLHA1 file to micrOMEGAs; otherwise micrOMEGAs calculates widths internally. Starting with GAMBIT 1.1.0, this option is set to false by default.
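The semantics of the VZdecay/VWdecay options can be summarised in a small sketch (our own toy function, not micrOMEGAs source):

```python
def include_virtual_3body(option, is_self_annihilation):
    # VZdecay / VWdecay semantics described above:
    #   0 -> ignore 3-body final states via virtual Z/W,
    #   1 -> include them for DM self-annihilation only,
    #   2 -> include them for all coannihilation processes as well.
    if option == 0:
        return False
    if option == 1:
        return is_self_annihilation
    return True
```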

Relic density implementation in DarkBit
The general structure and main capabilities of relic density calculations in DarkBit are summarised in Fig. 2. As explained above, the most important particle physics input needed to calculate the DM relic density is the effective invariant annihilation rate W eff . In DarkBit, this is represented by the capability RD_eff_annrate.
Currently, there are two functions that provide this capability (Table A4), though further user-defined functions can easily be added. The first function, RD_eff_annrate_SUSY, returns W_eff for SUSY models using DarkSUSY as a backend. It depends on the capability RD_eff_annrate_DSprep, which ensures that DarkSUSY is correctly set up and configured to provide W_eff for neutralino annihilation. In this case two boolean options can be provided in the YAML file: CoannCharginosNeutralinos and CoannSfermions. These specify whether chargino, neutralino and sfermion coannihilations are taken into account. Both options default to true, but the relevant coannihilations can be disabled to speed up the relic density calculation (at the expense of accuracy). Another important option for steering the numerical performance is CoannMaxMass (default 1.6), which specifies up to what mass, in units of the DM mass, coannihilating particles are taken into account when calculating ⟨σ_eff v⟩. The second alternative is to determine W_eff directly from the Process Catalogue (Sec. 6.3.1). The corresponding function naturally has TH_ProcessCatalogue as a dependency, as well as DarkMatter_ID, and can be used for models where coannihilations are not important. Here, the identity of the dark matter particle, as it exists in the GAMBIT particle database (see Ref. [10]), is provided by the capability DarkMatter_ID (Table A2).
The main capability of the relic density part of DarkBit is RD_oh2, which returns Ω_χh² for a given model point. If the user wishes, the result can be expressed as a fraction of the total measured DM density. This capability is RD_fraction, and is provided by three different module functions: RD_fraction_one, RD_fraction_leq_one and RD_fraction_rescale. The choice of function dictates whether indirect and direct detection routines should use the observed relic density or the one calculated for the particular parameter point. RD_fraction_one simply returns 1, causing the observed relic density to be used in all direct and indirect detection routines. RD_fraction_leq_one returns the lesser of 1 and the ratio of the computed relic density to the observed value. RD_fraction_rescale always returns the computed-to-observed ratio, regardless of whether or not it exceeds 1. The latter two functions accept an option oh2_obs (default 0.1188), the observed value of Ωh² to use for rescaling.
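In sketch form, the logic of the three module functions is simply the following (illustrative Python, not the actual GAMBIT C++ module functions):

```python
# Sketch of the three RD_fraction module functions (illustrative only;
# the real implementations are GAMBIT C++ module functions).

OH2_OBS_DEFAULT = 0.1188  # default value of the "oh2_obs" YAML option

def rd_fraction_one(oh2, oh2_obs=OH2_OBS_DEFAULT):
    """Always use the observed relic density downstream."""
    return 1.0

def rd_fraction_leq_one(oh2, oh2_obs=OH2_OBS_DEFAULT):
    """Rescale by the computed-to-observed ratio, capped at 1."""
    return min(1.0, oh2 / oh2_obs)

def rd_fraction_rescale(oh2, oh2_obs=OH2_OBS_DEFAULT):
    """Always rescale, even if the ratio exceeds 1."""
    return oh2 / oh2_obs
```

For an under-abundant candidate (e.g. Ω_χh² = 0.0594, half of the observed value), the latter two functions both return 0.5; for an over-abundant one, only RD_fraction_rescale returns a value above 1.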
DarkBit currently includes three functions that provide RD_oh2 (Table A5). Two of these (RD_oh2_DarkSUSY and RD_oh2_MicrOmegas) are direct calls to the unmodified relic density routines of DarkSUSY and micrOMEGAs, with all particle model parameters initialised as per the defaults of the respective backend code. RD_oh2_DarkSUSY is only compatible with the MSSM, while RD_oh2_MicrOmegas can be used with any model implemented in micrOMEGAs. This latter function has two options: fast, which determines the numerical accuracy of the calculation (int; 0: accurate [default], 1: fast), and Beps (double; default 10⁻⁵), a minimum value for the Boltzmann suppression factor B of coannihilation processes. If B is less than Beps for a certain process, that process is not included in the calculation of the effective annihilation rate (this serves the same purpose as CoannMaxMass for DarkSUSY). The third function (RD_oh2_general) is a general Boltzmann solver, which again relies largely on various subroutines provided by the DarkSUSY backend, but not on any DarkSUSY-specific initialisation of particle parameters (e.g. the setting of sparticle masses and couplings); the option fast (int; 0: accurate, 1: fast [default]) again steers the numerical performance of the backend code. In order to calculate σ_eff, cf. Eq. 12, the Boltzmann solver needs not only the invariant rate, but also the internal degrees of freedom and masses of all (co)annihilating particles. For a high-precision result of this integral, one will in general also need to know the exact locations of thresholds and resonances in W_eff. RD_oh2 therefore depends on the capability RD_spectrum_ordered, which contains all of this information, ordered by increasing p_eff. RD_spectrum_ordered in turn depends on the capability RD_spectrum, which contains the same information, except for coannihilation thresholds, but not necessarily in an ordered form.
Presently, RD_spectrum can be provided in two ways, either by the Process Catalogue (if coannihilations are not important; see Sec. 6.3.1) or by DarkSUSY.
Lastly, the capability of the likelihood function constraining the relic density is lnL_oh2. This capability is provided by two module functions. First, lnL_oh2_Simple is a Gaussian likelihood that implements the observational limits from Ref. [1] (Ω_χh² = 0.1188 ± 0.0010 at 1σ). In addition to the experimental error, we also take into account possible theoretical errors in the calculation of the relic density. We assume that the likelihood for the prediction of Ω_χh² is a Gaussian distribution centred on the calculated value, with a standard deviation that is taken, by default, to be 5% of that value. This error is conservative for most parameter combinations, but underestimates the O(50%) corrections that can occur due to loop corrections in a few specific scenarios [53][54][55][56][57]. The choice is thus a pragmatic compromise that represents the best that can be done with a single uncertainty. The user is free to change it, and we emphasise the importance of choosing an error appropriate for the particular model being studied. The exact form of the Gaussian likelihood for the model parameters is determined by the YAML option profile_systematics, which determines whether the prediction for Ω_χh² is profiled or marginalised over (by default this option is set to false, corresponding to marginalisation). The mean value, the experimental error and the theoretical error can be changed with the YAML file options oh2_obs, oh2_obserr, and oh2_fractional_theory_err, respectively. For the exact form of the likelihood function and the details of its derivation, we refer the reader to sections 8.3.1 and 8.3.2 of [10].
Alternatively, lnL_oh2_upperlimit implements the observational constraint as a one-sided limit, leaving open the possibility that a given DM candidate makes up only a fraction of the total abundance. The likelihood function is approximately Gaussian (similar to the one described in the previous paragraph) for predictions greater than the observed relic density, and flat for predictions below it. The exact form again depends on the YAML parameter profile_systematics (see sections 8.3.3 and 8.3.4 of [10] for many more details). The other three YAML parameters for this function are the same as before.
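A minimal numerical sketch of the two likelihood forms in the default (marginalised) Gaussian limit follows; the function names and the clamping used to obtain the flat region are our own simplifications, not the exact expressions of Ref. [10]:

```python
import math

OH2_OBS, OH2_OBSERR, TH_FRAC = 0.1188, 0.0010, 0.05  # YAML-option defaults

def lnl_oh2_simple(oh2, oh2_obs=OH2_OBS, obs_err=OH2_OBSERR, th_frac=TH_FRAC):
    """Gaussian likelihood: experimental and fractional theory errors
    combined in quadrature (marginalised limit; a simplification of [10])."""
    sigma2 = obs_err**2 + (th_frac * oh2)**2
    return -0.5 * (oh2 - oh2_obs)**2 / sigma2 \
           - 0.5 * math.log(2 * math.pi * sigma2)

def lnl_oh2_upperlimit(oh2, oh2_obs=OH2_OBS, obs_err=OH2_OBSERR, th_frac=TH_FRAC):
    """One-sided variant: flat below the observed value, Gaussian above
    (implemented here by clamping the prediction; our own simplification)."""
    x = max(oh2, oh2_obs)
    sigma2 = obs_err**2 + (th_frac * x)**2
    return -0.5 * (x - oh2_obs)**2 / sigma2 \
           - 0.5 * math.log(2 * math.pi * sigma2)
```

Any prediction below 0.1188 then receives the same (maximal) log-likelihood from the one-sided version, while over-abundant predictions are penalised identically in both forms.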

Background
Given that the solar system sits within a DM halo, DM particles are expected to pass through Earth continuously. If DM has any ability to interact with regular matter at all, it will occasionally scatter on terrestrial nuclei. Direct detection experiments [58] search for these scattering events by looking for nuclear recoils in large volumes of inert target material placed in ultra-clean environments deep underground.
In natural units, the differential rate of recoil events in a direct detection experiment is

dR/dE = (ρ_0/m_χ) ∫ v f(v, t) dσ/dq²(q², v) d³v ,    (18)

where m_χ is the WIMP mass, ρ_0 is the local DM mass density, f(v, t) is the three-dimensional, time-dependent WIMP velocity distribution, dσ/dq²(q², v) is the velocity-dependent differential cross-section, and q² = 2 m_nuc E is the momentum exchanged in the scattering process (for a nuclear mass m_nuc and recoil energy E). Numerical values of this differential rate are typically expressed in cpd (counts per day), per kg of target material, per keV of recoil energy. Most direct search detectors contain more than one isotope, in which case the differential rate is given by a sum of Eq. 18 over all isotopes, weighted according to the mass fraction of each isotope in the detector.
The expected number of signal events in an analysis by a direct search experiment is given by

N_p = MT ∫_0^∞ φ(E) (dR/dE) dE ,    (19)

where M is the detector mass and T is the exposure time. The detector response function φ(E) describes the fraction of recoil events of energy E that will be observed within some pre-defined analysis region. The precise definition of such an analysis region depends on the experiment under consideration. In the simplest case it would be given by a lower and an upper bound on the reconstructed energy, so that the response function φ(E) can be calculated in terms of the energy resolution of the detector and the various trigger efficiencies. The energy range over which the experiment is sensitive is then encoded within φ(E), so that there is no need to impose a finite upper or lower cutoff in the integral in Eq. 19. Some experiments implement more elaborate analyses, imposing further cuts on observables that depend on the recoil energy in a more complicated way. All of these possibilities can be captured by an appropriate function φ(E), because the detector response is always independent of the nature of the particle interaction of the WIMP with nuclei, which is contained in the differential event rate. The detector response φ(E) can therefore be tabulated in advance for different analyses and re-used for any WIMP model. Eq. 19 can be generalised to experiments with more than one analysis region (e.g. binned event rates) by defining a separate function φ_i(E) for each analysis region.
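Numerically, Eq. 19 is a one-dimensional quadrature of the tabulated detector response against the recoil spectrum. A sketch with an entirely invented toy spectrum and step-function response (all numbers illustrative only):

```python
import math

def trapz(y, x):
    """Simple trapezoidal quadrature."""
    return sum(0.5 * (y[i] + y[i + 1]) * (x[i + 1] - x[i])
               for i in range(len(x) - 1))

def expected_signal(E, dRdE, phi, M, T):
    """N_p = M*T * integral of phi(E) * dR/dE over E (Eq. 19).
    E in keV, dR/dE in counts/kg/day/keV, M in kg, T in days."""
    return M * T * trapz([p * r for p, r in zip(phi, dRdE)], E)

# Toy inputs (purely illustrative): a falling exponential recoil spectrum,
# and a detector with 50% flat efficiency between 5 and 40 keV.
E = [0.05 * i for i in range(1, 1201)]                 # 0.05 .. 60 keV
dRdE = [1e-4 * math.exp(-e / 10.0) for e in E]
phi = [0.5 if 5.0 <= e <= 40.0 else 0.0 for e in E]

N_p = expected_signal(E, dRdE, phi, M=100.0, T=85.0)   # expected signal events
```

For this toy spectrum the integral can also be done analytically (≈ 2.5 events for the chosen exposure), which provides a quick cross-check of the tabulated quadrature.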
For many WIMP candidates (which we refer to as χ), the dominant WIMP-quark interactions arise from a combination of a scalar (χ̄χ q̄q) or vector (χ̄γ^μχ q̄γ_μq) coupling, which give rise to a spin-independent (SI) cross-section, and an axial-vector coupling (χ̄γ^μγ⁵χ q̄γ_μγ⁵q), which gives rise to a spin-dependent (SD) cross-section (for a complete list of possible operators see Ref. [59]). In both of these cases, the matrix element involved has no intrinsic dependence on either the momentum exchanged in the collision or the relative velocity of the DM and the nucleus. In such cases, it is convenient to write the scattering cross-section as a simple product of a cross-section σ_0 defined at zero momentum transfer, and a form factor F²(q) that accounts for the finite size of the nucleus. For such velocity- and momentum-independent interactions, the differential cross-section becomes

dσ/dq²(q², v) = [σ_0 F²(q) / (4μ²v²)] Θ(q_max − q) ,    (20)

where Θ is the Heaviside step function, μ is the WIMP-nucleus reduced mass, q_max = 2μv is the maximum momentum transfer in a collision at relative velocity v, and the velocity dependence is entirely due to kinematics rather than the interaction. The requirement that q ≤ q_max for an interaction to be kinematically possible translates into a lower limit v ≥ v_min = √(m_nuc E/(2μ²)) in the integral over the WIMP velocity distribution (Eq. 18). The total WIMP-nucleus differential cross-section is the sum of the SI and SD contributions, each with its own form factor. The zero-momentum cross-section σ_0 for SI WIMP-nucleus interactions is

σ_0^SI = (4μ²/π) [Z f_p + (A − Z) f_n]² ,    (21)

where Z is the atomic number and A is the atomic mass number. Z and A − Z are respectively the numbers of protons and neutrons in the nucleus, and f_p and f_n are the effective couplings to them. The latter depend both on the precise nature of the interaction between WIMPs and quarks (and/or gluons), and on the contents of the proton and neutron.
We note that the alternative normalisation involving G^p_SI = 2f_p and G^n_SI = 2f_n is often found in the literature, where G^N_SI (N = n, p) are the G_F-like effective four-fermion coupling constants in the case of scalar interactions. The micrOMEGAs manual, meanwhile, uses λ_N = (1/2) G^N_SI. The nucleon contents are described by the nuclear hadronic matrix elements, discussed in detail in Sec. 5.3.2 below.
For most DM candidates with scalar couplings, the proton and neutron SI cross-sections are roughly the same, so f_n ≈ f_p. For identical couplings (f_n = f_p), the SI cross-section reduces to

σ_0^SI = (μ²/μ_p²) A² σ_p^SI ,    (22)

where μ_p is the WIMP-proton reduced mass and σ_p^SI is the SI WIMP-proton cross-section. Direct detection experiments are often designed to use heavy nuclei, as the SI cross-section grows rapidly with A.
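As a quick numerical illustration of this A² enhancement (our own helper function; the nuclear masses below are rounded and purely illustrative):

```python
def sigma_si_nucleus(sigma_p, m_chi, m_nuc, A, m_p=0.938):
    """Eq. 22: sigma_SI = (mu/mu_p)^2 * A^2 * sigma_p for f_n = f_p.
    Masses in GeV; cross-sections in any common unit."""
    mu = m_chi * m_nuc / (m_chi + m_nuc)    # WIMP-nucleus reduced mass
    mu_p = m_chi * m_p / (m_chi + m_p)      # WIMP-proton reduced mass
    return (mu / mu_p)**2 * A**2 * sigma_p

# A 100 GeV WIMP scattering on xenon-131 (m_nuc ~ 122 GeV)
# versus fluorine-19 (m_nuc ~ 17.7 GeV), same per-proton cross-section:
sigma_xe = sigma_si_nucleus(1e-46, 100.0, 122.0, 131)
sigma_f = sigma_si_nucleus(1e-46, 100.0, 17.7, 19)
```

The heavy xenon target gains several orders of magnitude over fluorine from the combined A² and reduced-mass factors, which is why SI searches favour heavy nuclei.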
The SI form factor is essentially a Fourier transform of the mass distribution of the nucleus, and is reasonably approximated by the Helm form factor [60,61],

F²(q) = [3 j₁(q r_n)/(q r_n)]² e^(−q²s²) ,    (23)

where j₁ is the first spherical Bessel function, s ≈ 0.9 fm, and r_n² = c² + (7/3)π²a² − 5s² is an effective nuclear radius, with a ≈ 0.52 fm and c ≈ 1.23 A^(1/3) − 0.60 fm. Further details on SI form factors can be found in Refs. [61,62].
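Evaluating the Helm parameterisation is straightforward; a sketch using the constants quoted above (momentum transfer in fm⁻¹, with 1 GeV ≈ 5.068 fm⁻¹ in natural units):

```python
import math

def helm_F2(q, A):
    """Helm form factor F^2(q) for a nucleus of mass number A,
    using the parameterisation quoted in the text. q in fm^-1."""
    s = 0.9                                  # fm
    a = 0.52                                 # fm
    c = 1.23 * A**(1.0 / 3.0) - 0.60         # fm
    rn = math.sqrt(c**2 + (7.0 / 3.0) * math.pi**2 * a**2 - 5.0 * s**2)
    x = q * rn
    if x < 1e-6:
        return 1.0                           # F(0) = 1 by construction
    j1 = (math.sin(x) - x * math.cos(x)) / x**2   # spherical Bessel j1
    return (3.0 * j1 / x)**2 * math.exp(-(q * s)**2)
```

As expected, F² falls from unity at zero momentum transfer, suppressing the rate at large recoil energies for heavy nuclei such as xenon (A = 131).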
SD scattering is only present for detectors that contain isotopes with net nuclear spin. This generally requires the nucleus to possess an unpaired proton and/or neutron in its shell structure. The relevant WIMP-nucleus cross-section is

σ_0^SD = (32/π) G_F² μ² [(J + 1)/J] [a_p⟨S_p⟩ + a_n⟨S_n⟩]² ,

where G_F is the Fermi constant, J is the spin of the nucleus, ⟨S_p⟩ and ⟨S_n⟩ are the average spin contributions from the proton and neutron groups respectively, and a_p and a_n are the effective couplings to the proton and the neutron in units of 2√2 G_F. Similarly to f_p and f_n, a_p and a_n depend both on the WIMP-quark interaction and on the relative contributions of different quark flavours to the nucleon spin; the latter is discussed further in Sec. 5.3.2. As in the spin-independent case, alternative normalisations can be found in the literature; the DarkSUSY manual, for example, refers to G^N_SD = 2√2 G_F a_N. In addition, whilst we here use a_N and G^N_SD to distinguish the two notations, a_N is frequently used within the literature for both cases.
Unlike the SI case, the two SD couplings a p and a n differ substantially in many theories. Individual experiments typically only strongly constrain either a p or a n , as most detector materials do not contain isotopes with both unpaired neutrons and protons; experimental results on the SD cross-section are therefore generally presented in terms of σ SD,p = σ SD (a n = 0) or σ SD,n = σ SD (a p = 0).
The SD form factor is given in terms of the structure function S(q), normalised such that F²(0) = 1, with

S(q) = a_p² S_pp(q) + a_n² S_nn(q) + a_p a_n S_pn(q) .
In the limit q → 0, the functions S_pp(0) and S_nn(0) are proportional to the expectation values of the spins of the proton and neutron subsystems, respectively [63,64].

DDCalc
The traditional presentation of results from direct searches for dark matter is an exclusion curve for the SI or SD WIMP-nucleon scattering cross-section, as a function of the WIMP mass. This invariably comes with some rather specific restrictions:

1. the exclusion is given at only a single confidence level (CL; traditionally 90%),
2. f_p = f_n (often somewhat loosely referred to as 'isospin conservation') is assumed,
3. either a_p = a_n = 0 (pure SI coupling), or f_p = f_n = (a_p or a_n) = 0 (pure SD proton or SD neutron coupling) is assumed,
4. the local density and velocities of DM are assumed to follow the Standard Halo Model (cf. Sec. 3),
5. a specific set of nuclear form factors F²(q) is adopted, and
6. the nuclear parameters are fixed to assumed values when calculating f_p, f_n, a_p and a_n for any comparison against theory predictions.
These restrictions are all problematic when trying to recast direct search results, as one must go from the idealised effective WIMP frameworks in which they are presented to real constraints on actual theories. Ultimately, we are interested in the overall degree to which a model with some arbitrary combination of couplings f_p, f_n, a_p and a_n agrees or disagrees with data, not merely which side of a 90% CL curve it lies on under different limiting approximations for those couplings. Ideally, this would also include the impact on that exclusion of systematic errors due to uncertainties in the halo, nuclear and form-factor models that one must assume in order to obtain the result. For these reasons, here we present DDCalc: a new, general solution for recasting direct search limits. DDCalc is released and maintained as a standalone backend code by the GAMBIT Dark Matter Workgroup. It can be obtained from http://ddcalc.hepforge.org under an academic use license. A development version of the code, including only the first run of the LUX experiment, was previously released as LUXCalc [65] and has been used in a number of analyses (e.g. [66][67][68]).
Yet another issue is that the limits presented by experimental collaborations almost always assume Eq. 20, i.e. that the scattering matrix element has no explicit velocity or momentum dependence. Many DM models involve non-trivial momentum- or velocity-dependence in their cross-sections. This means that extra velocity factors must be incorporated into the integral over the WIMP velocity distribution (Eq. 18), and extra momenta must be included in the final integral over the differential rate when calculating the total event yield (Eq. 19). Furthermore, such models often probe properties of the target nuclei not captured by the standard SI and SD nuclear form factors. Although the first release of DDCalc does not include such generalised couplings out of the box, its structure is designed to accommodate them easily (and they will be explicitly included in a future release).

Methods
DDCalc calculates predicted signal rates and likelihoods for various experiments, given input WIMP and halo models.
A DDCalc WIMP model consists of the DM mass and the four couplings f_p, f_n, a_p and a_n, which can also be specified directly as SI/SD proton/neutron cross-sections. DDCalc does not deal directly with nuclear uncertainties; users (or GAMBIT, as the case may be) are expected to vary these externally in the calculation of the couplings. Momentum-dependent couplings can be implemented by adding the requisite additional power of q to the integrand of Eq. 19, as implemented in the source file DDRates.f90. Similarly, velocity-dependent cross-sections can be implemented in the integrand of Eq. 18, found in DDHalo.f90. Some more explicit tips for the brave are provided in the DDCalc README. SI form factors in the first release default to Helm (Eq. 23). SD form factors are included for 19F, 23Na, 27Al, 29Si, 73Ge, 127I, 129Xe and 131Xe from Ref. [69]. Alternative form factors can be encoded in DDNuclear.f90. The local halo model consists of a constant local density and a truncated Maxwell-Boltzmann velocity distribution (Eq. 5). The most-probable speed v_0, truncation speed v_esc, local speed relative to the halo v_obs and local density ρ_χ are all individually configurable; v_obs can also be constructed automatically from the local standard of rest v_LSR or disk rotation speed v_rot, along with the Sun's peculiar velocity v_pec relative to it.
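The automatic construction of v_obs lives in DDHalo.f90; a minimal sketch (our own function name, neglecting Earth's orbital motion) of how it can be assembled from the disk rotation speed and the Sun's peculiar velocity, using the default values quoted in the command-line option descriptions:

```python
import math

def v_obs_magnitude(v_rot=235.0, v_pec=(11.0, 12.0, 7.0)):
    """Observer speed relative to the Galactic rest frame: |v_LSR + v_pec|,
    with v_LSR = (0, v_rot, 0) in UVW coordinates (all speeds in km/s).
    Earth's orbital motion is neglected in this sketch."""
    v_sun = (v_pec[0], v_rot + v_pec[1], v_pec[2])
    return math.sqrt(sum(v * v for v in v_sun))
```

With the defaults v_rot = 235 km/s and v_pec = (11, 12, 7) km/s, this gives v_obs ≈ 247 km/s, i.e. the speed of the Sun through the halo.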
A particular set of WIMP and halo model parameters will produce a set of predicted yields in various direct search experiments. Having calculated the expected number of signal events N_p,i for the ith experimental analysis region (Eq. 19), DDCalc calculates a Poisson likelihood for the model as

L_i = (N_p,i + b_i)^(N_o,i) e^(−(N_p,i + b_i)) / N_o,i! ,    (28)

where N_o,i is the number of observed events in the analysis region, and b_i is the expected number of background events in that region. Note that a single experiment may comprise more than one analysis region (for example, bins in reconstructed energy). Unbinned analyses can in principle also be implemented, provided sufficient information from the experiment is available. The likelihoods for each experiment and analysis region, {L_i}, can then be combined to form a composite direct search likelihood, which can itself be used in combination with likelihood terms from other experiments. Some direct detection experiments do not provide explicit background estimates, or prefer not to perform a background subtraction in order to test the WIMP hypothesis. In this case, b_i should be set to the value that maximises the likelihood:

b_i = max(0, N_o,i − N_p,i) .

This leads to a one-sided likelihood, i.e. a non-zero WIMP signal can only be disfavoured, not preferred, relative to the background-only hypothesis. The likelihood functions can be used directly to calculate constraints in the σ–m_χ plane. A parameter point is considered to be excluded at 90% confidence level if

ln L(σ) − ln L(σ = 0) ≤ −1.35 ,

where L(σ = 0) denotes the likelihood of the background-only hypothesis. Alternatively, one can also use DDCalc to obtain constraints in the σ–m_χ plane using one of two p-value methods:
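A minimal sketch of this likelihood logic (our own function names; the real implementation is in DDCalc's Fortran routines):

```python
from math import exp, log, factorial

def lnL_poisson(n_obs, n_sig, b):
    """ln of the Poisson likelihood for one analysis region (Eq. 28):
    L = (s+b)^n e^-(s+b) / n!  (mu = s + b must be positive if n > 0)."""
    mu = n_sig + b
    if mu == 0.0:
        return 0.0 if n_obs == 0 else float("-inf")
    return n_obs * log(mu) - mu - log(factorial(n_obs))

def lnL_no_bg_estimate(n_obs, n_sig):
    """No background estimate available: set b to the value that
    maximises L, i.e. b = max(0, n_obs - n_sig)."""
    return lnL_poisson(n_obs, n_sig, max(0.0, n_obs - n_sig))

def lnL_combined(regions):
    """Composite likelihood: sum of ln L over (n_obs, n_sig, b) regions."""
    return sum(lnL_poisson(*r) for r in regions)
```

The one-sidedness is visible in lnL_no_bg_estimate: a predicted signal smaller than the observed count is never penalised, because the free background absorbs the difference.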

Feldman-Cousins
DDCalc implements the standard Feldman-Cousins method [70] for the generation of one- or two-sided confidence intervals, based on the Poisson likelihood in Eq. 28. Note that the likelihood in this case uses the total expected signal and background yields across the entire analysis region, and hence does not incorporate spectral information that might lead to a stronger result.

Maximum gap
Yellin's maximum gap method [71] was proposed as a way of handling spectral information in the case where the magnitude and shape of the background are unknown, so that a background subtraction is not possible. The method assumes that all of the observed events could in principle be signal events, leading to a conservative exclusion limit on the WIMP scattering cross-section. Nevertheless, exploiting the spectral information of the observed events will typically yield stronger results than the Feldman-Cousins method; see [65] for an example.
The idea of the maximum gap method is to break the signal region into a number of intervals bounded by the observed events. Given an efficiency function φ_k(E) for each of these intervals, one can then use Eq. 19 to calculate the expected number of events between any two observed events. The "maximum gap" is the interval in which this number is largest. By calculating the probability that a gap as large as the observed one could arise from random fluctuations, it is then possible to quantify the p-value of the assumed model [71].
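Ref. [71] gives a closed-form expression for the probability C0(x, μ) that the maximum gap contains fewer than x expected events when μ events are expected in total. The sketch below is our own transcription of that formula, not DDCalc's Fortran implementation:

```python
from math import exp, factorial, floor

def max_gap_c0(x, mu):
    """Yellin's C0(x, mu): probability that the maximum gap is smaller
    than x when mu signal events are expected in total [71]."""
    if x >= mu:
        # the gap can never exceed mu; at x == mu only the zero-event case fails
        return 1.0 if x > mu else 1.0 - exp(-mu)
    total = 0.0
    for k in range(int(floor(mu / x)) + 1):
        d = mu - k * x
        if d <= 1e-12:
            # boundary term k*x == mu: use the limiting value of the summand
            total += -exp(-k * x) if k == 1 else 0.0
            continue
        total += ((k * x - mu)**k * exp(-k * x) / factorial(k)) * (1.0 + k / d)
    return total

def max_gap_pvalue(x, mu):
    """p-value for a model predicting mu events, given maximum gap size x."""
    return 1.0 - max_gap_c0(x, mu)
```

A model is then excluded at 90% CL when max_gap_pvalue falls below 0.10; because C0 grows with x at fixed μ, larger observed gaps exclude larger predicted signals.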

Experiments
Event rate calculations in DDCalc rely on the availability of experimental response functions φ(E), the predicted number of background events in an analysis b, the total number of observed events N_o, and the experimental exposure MT. Version 1.0.0 of DDCalc ships with these data already implemented for eight experimental analyses, shown in Table 2 (see also Fig. 8 for a comparison of our analyses with the published bounds).
The LUX [75][76][77], PandaX [78], XENON100 [72] and SuperCDMS [73] analyses are most useful for constraining SI scattering and SD scattering on neutrons, whereas the PICO-2L [80], PICO-60 [79] and SIMPLE [74] analyses provide good sensitivity to SD scattering on protons. In the following we provide additional details on the implementation of each experiment.

SIMPLE. We implement the efficiency curve φ(E) directly as described in Ref. [74]. We do not include the contribution from carbon, due to its high threshold energy for nuclear recoils. SIMPLE expected 12.7 events and observed 8.

DDCalc_run
Calculates the expected number of signal events in the analysis region, the likelihood of the WIMP parameters, and the p-value. The logarithms of the latter two are given.

DDCalc_run --log-likelihood
Calculates the logarithm of the Poisson-based likelihood for the specified WIMP parameters.

DDCalc_run --log-pvalue
Calculates the logarithm of the p-value for the specified WIMP parameters.
If the analysis includes interval information, this returns the maximum gap p-value. If the analysis does not include interval information, the p-value is calculated from the Poisson likelihood without background subtraction (i.e. setting b = 0), to give a conservative upper bound.
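In the no-interval case, the conservative p-value is just the Poisson probability of observing at most N_o events when N_p are predicted. A sketch (our own function names):

```python
from math import exp, factorial, log

def poisson_pvalue(n_obs, n_sig):
    """P(N <= n_obs) for a Poisson mean n_sig (background set to zero):
    small when the predicted signal greatly exceeds the observed count."""
    return sum(exp(-n_sig) * n_sig**k / factorial(k)
               for k in range(n_obs + 1))

def log_pvalue(n_obs, n_sig):
    """Natural logarithm of the p-value (cf. the --log-pvalue mode)."""
    return log(poisson_pvalue(n_obs, n_sig))
```

For example, a model predicting 10 signal events when only 2 were observed is excluded well beyond 90% CL, whereas a prediction of 1 event against 5 observed is entirely unconstrained.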

DDCalc_run --spectrum
Tabulates the raw recoil spectrum dR/dE for the detector material and given WIMP parameters, by energy. The tabulation of the energies can be modified using the --E-tabulation option.

DDCalc_run --events-by-mass
Tabulates the expected signal events for fixed WIMP-nucleon cross-sections, by WIMP mass. The tabulation of masses can be modified using the --m-tabulation option.

DDCalc_run --constraints-SI
Tabulates the cross-section lower and upper constraints in the spin-independent case, by mass. Constraints are 1D confidence intervals at each mass, determined using a Poisson likelihood with signal plus background and Feldman-Cousins ordering. The confidence level is given using the --confidence-level option, with the default set to 0.9 (90% CL). The ratio of the WIMP-neutron and WIMP-proton couplings is held fixed, with the ratio defined by an angle θ such that tan θ = Gn/Gp. The angle can be specified via the --theta-SI option or, more conveniently, given in units of π via --theta-SI-pi. The default is Gp = Gn (θ = π/4). The tabulation of the masses can be modified using the --m-tabulation option.

DDCalc_run --constraints-SD
As above, but for the spin-dependent case.

DDCalc_run --limits-SI
Tabulates the cross-section upper limits in the spin-independent case, by mass. Limits are determined using the maximum gap p-value method.
Excluded p-values are given using the --p or --lnp options, with the default being p = 0.10 (90% CL exclusion limits). The ratio of the WIMP-neutron and WIMP-proton couplings is held fixed, with the ratio defined by an angle θ such that tan θ = Gn/Gp. The angle can be specified via the --theta-SI option or, more conveniently, given in units of π via --theta-SI-pi. The default is Gp = Gn (θ = π/4). The tabulation of the masses can be modified using the --m-tabulation option.

DDCalc_run --limits-SD
As above, but for the spin-dependent case.
PICO-2L. The efficiency curve φ(E) for fluorine is provided in Ref. [83]. Again, we do not include the contribution from carbon. The background expectation of 1.0 events agrees well with the one event observed.

PICO-60.
This experiment has a time-dependent energy threshold, so the efficiency curve φ(E) is obtained by convolving the exposure as a function of threshold with the (appropriately rescaled) efficiency curve for fixed threshold. The total exposure is reduced by a trial factor of 1.8, as proposed by the collaboration. Because the efficiency curves for fluorine and iodine differ, we implement the two target materials as independent experiments. This is possible only because PICO-60 observed no signal events and does not perform a background subtraction, in which case the likelihood reduces to L = e^(−N_p), so that the contributions from the individual target nuclei factorise. Again, we do not include the contribution from carbon.

SuperCDMS. We implement two different efficiency curves including gap information, with nuclear recoil energies converted from phonon energies assuming the Lindhard prescription [61]. The first is based directly on the published efficiencies and event energies of all detectors included in the experimental run [73]. During this run, the ionisation guard of one detector (T5Z3) was inoperative, allowing additional background events to enter the analysis region and reducing the overall sensitivity of the experiment. We therefore implement a second efficiency curve and corresponding set of analysis parameters, in which the T5Z3 detector is excluded from the analysis. This alternative parameterisation is the default within DDCalc for this analysis.

--help
Displays help information.

--verbosity
Sets the verbosity level (a higher level gives more output). The data output at different levels is mode-specific. Selection of verbosity level 0 is also available via the flag --quiet; --verbose gives verbosity level 2.
Mode-specific options

--E-tabulation=<min>,<max>,<N>
Specifies the tabulation for the recoil energy E. Energies will be tabulated from <min> to <max> inclusive, with N logarithmically spaced intervals between tabulation points. A negative <N> value indicates |<N>| points per decade. The <N> argument is optional.

--interactive
Turns on the interactive mode, which will prompt the user for the WIMP parameters.
--m-tabulation=<min>,<max>,<N>,<log>
Specifies the tabulation for the WIMP mass m. Masses will be tabulated from <min> to <max> inclusive, with N intervals between tabulation points. The points will be logarithmically spaced unless the fourth comma-separated value is F or 0, in which case the spacing will be linear. A negative <N> value indicates |<N>| points per decade in the logarithmic case. The <N> and <log> arguments are optional.
--theta-SD=<val> [π/4]
Fixes the ratio of the WIMP-nucleon couplings to tan θ = Gp/Gn in the spin-dependent case. This option is only used in run modes where the absolute couplings cannot be specified. The similar option --theta-SD-pi allows the ratio to be specified in units of π.

--theta-SI=<val> [π/4]
As above, but for the spin-independent case.
Statistical options

--confidence-level=<val> [0.9]
The confidence level to use in determining constraints/limits.

The confidence level can alternatively be given as the fraction of the normal distribution within a given number of standard deviations (symmetric); a value of 3 here results in a 3σ CL constraint/limit.

--p=<val> [0.10]
The p-value to use in determining constraints/limits. The logarithm of the p-value can be specified via log-p-value.
The local dark matter density (in units of GeV cm⁻³).

--v0=<val> [vrot]
The most probable speed of the Maxwell-Boltzmann distribution (in km s⁻¹). Related to the rms speed by v_rms = √(3/2) v_0 and to the mean speed by v̄ = √(4/π) v_0. For the isothermal sphere of the Standard Halo Model, this is equal to the galactic circular velocity (disk rotation speed), so it is set equal to the value supplied with --vrot (or its default) unless explicitly set here.
The local Galactic escape speed (in km s⁻¹). This is used to truncate the Maxwell-Boltzmann distribution, as the highest-speed particles should be depleted due to escape from the Galactic potential.
The Local Standard of Rest (in km s⁻¹), i.e. the velocity of the Galactic disk relative to the Galactic rest frame, given in Galactic coordinates UVW, where U is the anti-radial direction (towards the Galactic centre), V is the direction of disk rotation, and W is the direction of the Galactic pole. Set to (0, --vrot, 0) unless specified here.
The speed of the observer (detector) relative to the Galactic rest frame (in km s⁻¹), where the Galactic rest frame is the frame in which the dark matter exhibits no bulk motion. This is set equal to the magnitude of --vsun, unless explicitly set here.

--vpec=<val>,<val>,<val> [11,12,7]
The velocity of the Sun relative to the Local Standard of Rest, in Galactic coordinates UVW.

--vrot=<val> [235]
The local disk rotation speed (in km s⁻¹).

PandaX. We implement the efficiency curve φ(E) provided by the collaboration [78], with an additional factor of 2 to account for the fact that only events below the mean of the nuclear recoil band are considered. Three events were observed in this search window, compared to a background expectation of 4.8.
XENON100. While XENON100 does provide efficiency curves as a function of the nuclear recoil energy [72], this information is insufficient to make use of the spectral information, i.e. the energies of the observed events. This spectral information is, however, very helpful, because events at high recoil energies (E > 30 keV), as well as events close to the threshold, have a higher probability of resulting from backgrounds than from WIMP scattering. To be able to use this spectral information (for example in the context of the maximum gap method), we have simulated the detector using the Time Projection Chamber Monte Carlo (TPCMC) code [84], which in turn relies on NEST for modelling the physics of recoiling heavy nuclei [85]. The TPCMC output is included in the public release of DDCalc, and provides the full gap information for XENON100.

LUX.
The TPCMC code has also been used to obtain the efficiencies for the first run of LUX [75], and these are included in DDCalc. For the reanalysis of the first run [76], which improves the sensitivity to low-mass WIMPs, only the total efficiency curve provided by the collaboration is included (again scaled down by a factor of 2). In its second run, LUX saw a total of six events (compared to a background expectation of 3.5 events). This makes the use of spectral information for both signal and background an important part of the analysis. The maximum gap method is not suitable for this task, as it includes only the spectral information of the signal but treats the background as unknown. To approximately reproduce the LUX analysis, we exclude those parts of the S1–S2 plane where events are likely to have arisen from the background. Specifically, we impose the requirement 3 phe ≤ S1 ≤ 33 phe, as well as an S2 signal below the mean of the nuclear recoil band. We calculate the efficiency curve φ(E) for this search window using methods similar to those used for the first LUX run. To estimate the expected background, we assume that backgrounds due to leakage from the electron recoil band are equally distributed in S1, while all other backgrounds have a shape resembling the neutron recoils from calibration data. Under these assumptions we predict 2.3 background events in the reduced search window, whereas LUX observed one event. While the definition of this reduced search window contains some degree of arbitrariness, and the background estimation is rather crude, this approach enables us to reproduce the published LUX bound to good accuracy (see Fig. 8).
Once the LUX collaboration has released more details on the analysis, the run 2 implementation can be updated accordingly.

The average expected number of background events in the analysis region.
The number of observed events in the analysis region.
The detector's exposure time (in days).
Sets the isotopic composition to the naturally occurring isotopes of the com---stoichiometry=<N1>,<N2>,<N3>,... pound with the given set of elements and stoichiometry. For example, CF3Cl would have Z=6,9,17 and a stoichiometry 1,3,1. If the stoichiometry is not given, it is assumed to be 1 for each element.
Detector efficiency options Applies a minimum energy threshold for rate calculations (in keV). This effectively takes: is the Heaviside step function. This allows easy removal of low-energy scattering from the signal calculations without having to modify efficiency input files.

--file=<file>
A file containing the efficiency φ(E) tabulated by energy, where φ(E) is defined as the fraction of events at an energy E that will be observed in the analysis region after factoring in trigger efficiencies, energy resolution, data cuts, etc. The first column is taken to be the energy (in keV). The next column should contain only numbers between 0 and 1; this is taken to be φ(E); note that this allows TPCMC output to be used, as columns of <S1> and <S2> will be safely ignored. Blank lines and lines beginning with a hash character will be ignored. If the analysis observed any events, the file can optionally include additional columns beyond the φ(E) column representing the efficiencies φ k (E) for detecting events in the sub-intervals for use with the maximum gap method.

--no-intervals
Disables the use of sub-interval calculations, even if efficiencies for the sub-intervals are available. This is implied by certain program modes where the sub-intervals cannot be used. the analysis, the run 2 implementation can be updated accordingly.
In version 1.1.0 of DDCalc, two additional experimental analyses have been implemented. The Xenon1T experiment [81] now gives the world-leading limit on SI scattering and on SD scattering on neutrons, whereas the 2017 run of PICO-60 [82] provides the strongest constraints on SD scattering on protons.
Xenon1T. To implement the first results from Xenon1T [81], we consider events with 3 phe ≤ S1 ≤ 70 phe and an S2 signal below the mean of the nuclear recoil band. We calculate the corresponding acceptance function by simulating fluctuations in the S1 and S2 signals. For this purpose we take the scintillation and ionization yields from [76] and determine the detector response from a fit to the nuclear recoil band in [81]. No events were observed in the signal region, compared to an expected background of 0.36 events.

PICO-60 (2017).
The most recent results from PICO-60 make use of the same target material as previously employed in PICO-2L [80]. We therefore use the same acceptance function for scattering on fluorine as described above and again neglect the contribution from carbon. We include an additional factor of 0.851 to account for the selection efficiency for single scatters. No such events have been observed and we do not perform a background subtraction.
We emphasise that the accurate implementation of likelihood functions for WIMP masses below 10 GeV is very challenging, as the experimental sensitivity results largely from upward fluctuations and is furthermore very sensitive to astrophysical uncertainties. A precision study of the experimental constraints on low-mass WIMPs will likely require the implementation of additional experimental information in order to refine the likelihood functions.

Command-line usage
By default, compiling DDCalc produces static and shared libraries as well as a default executable, DDCalc_run. The latter can be run via either a command-line interface, or via an interactive mode that is entered automatically if the user supplies no arguments to the executable.
A complete list of arguments for DDCalc_run can be obtained with:

DDCalc_run --help
The command-line signature of the program is:

DDCalc_run [mode] [options] [WIMP parameters]
The mode flag switches between a variety of run modes, summarised in Table 3. WIMP parameters is a list of arguments that can take one of four forms (in units of GeV for m, and pb for the cross-sections):

m
m sigmaSI
m sigmaSI sigmaSD
m sigmapSI sigmanSI sigmapSD sigmanSD

In the first case, all WIMP-nucleon cross-sections are set to 1 pb. In the second case, only spin-independent couplings are turned on, with sigmaSI used as a common WIMP-proton and WIMP-neutron cross-section. In the third case, sigmaSI and sigmaSD are used as common cross-sections for both WIMP-proton and WIMP-neutron interactions (in the SI and SD cases, respectively). In the fourth case, all four WIMP-nucleon cross-sections are specified separately. Negative cross-sections can be given in order to indicate that the corresponding coupling should be negative (the actual cross-section will be taken as the magnitude of the value supplied).
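The four accepted forms can be disambiguated purely by the number of values supplied. A sketch of this dispatch logic (a hypothetical helper, not the actual DDCalc_run parser; in particular, setting the SD cross-sections to zero in the two-argument form is our assumption):

```python
def parse_wimp_args(args):
    """Map the four command-line forms to a uniform tuple
    (m, sigmapSI, sigmanSI, sigmapSD, sigmanSD), in GeV and pb.
    NOTE: illustrative only; not the actual DDCalc_run parser."""
    vals = [float(a) for a in args]
    if len(vals) == 1:          # m: all cross-sections default to 1 pb
        return (vals[0], 1.0, 1.0, 1.0, 1.0)
    if len(vals) == 2:          # m sigmaSI: SI only (SD assumed zero here)
        m, si = vals
        return (m, si, si, 0.0, 0.0)
    if len(vals) == 3:          # m sigmaSI sigmaSD: common p/n cross-sections
        m, si, sd = vals
        return (m, si, si, sd, sd)
    if len(vals) == 5:          # m sigmapSI sigmanSI sigmapSD sigmanSD
        return tuple(vals)
    raise ValueError("expected 1, 2, 3 or 5 WIMP parameters")
```

Note that negative values pass through unchanged, matching the sign convention for negative couplings described above.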
Various general options for DDCalc are summarised in Table 4. In Table 5, we list the options that can be used to change the dark matter halo distribution.
The DDCalc package includes a number of advanced detector options for defining the precise isotopic composition of a detector, and the efficiency functions. These are listed in Table 6, but should not be necessary for the general user, as the existing analysis flags set the defaults correctly for the experimental analyses that are included in the release.

Library interface (API)
For reference, here we include a summary of the main DDCalc functions. These can be accessed from a Fortran calling program with USE DDCalc, and from a C/C++ program with #include "DDCalc.hpp". Usage of these functions by GAMBIT is documented in the following subsection.

Derived types
DDCalc defines three types of object: WIMPStruct, HaloStruct and DetectorStruct. These are the bedrock of the code; almost every calculation must be provided with an instance of each of these to do its job.

SetWIMP
Sets the internal parameters of the WIMP object. Here m is m_χ in GeV, fp and fn are f_p and f_n in GeV⁻², ap and an are (dimensionless) a_p and a_n, the G parameters are G^p_SI, G^n_SI, G^p_SD and G^n_SD, and the sigma parameters are σ_SI,p, σ_SI,n, σ_SD,p and σ_SD,n in pb. Negative cross-sections indicate that the corresponding coupling should be negative. In all cases, 'p' refers to proton and 'n' to neutron.

GetWIMP
As per SetWIMP, but retrieves the WIMP parameters from WIMP. The only difference here is that returned cross-sections are always positive, regardless of the sign of the corresponding coupling.

SetHalo
Sets the internal parameters of the Halo object. Here rho is ρ_χ in GeV cm⁻³, vrot is v_rot in km s⁻¹, v0 is v_0 in km s⁻¹, and vesc is v_esc in km s⁻¹.
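The couplings and cross-sections accepted by SetWIMP are related by standard kinematic factors. A sketch of the conversion, assuming the common conventions σ^SI_N = µ²(G^N_SI)²/π with G^N_SI = 2f_N, and σ^SD_N = 3µ²(G^N_SD)²/π with G^N_SD = 2√2 G_F a_N; these conventions and constants are our assumptions and should be checked against the DDCalc source:

```python
import math

GEV2_TO_PB = 3.894e8   # (hbar c)^2 conversion: 1 GeV^-2 is approx. 3.894e8 pb
M_NUCLEON = 0.939      # GeV, approximate nucleon mass
G_FERMI = 1.1664e-5    # Fermi constant, GeV^-2

def reduced_mass(m_chi):
    """WIMP-nucleon reduced mass in GeV."""
    return m_chi * M_NUCLEON / (m_chi + M_NUCLEON)

def sigma_SI(m_chi, f_N):
    """SI WIMP-nucleon cross-section in pb, for a coupling f_N in GeV^-2."""
    G = 2.0 * f_N
    return reduced_mass(m_chi) ** 2 * G ** 2 / math.pi * GEV2_TO_PB

def sigma_SD(m_chi, a_N):
    """SD WIMP-nucleon cross-section in pb, for a dimensionless coupling a_N."""
    G = 2.0 * math.sqrt(2.0) * G_FERMI * a_N
    return 3.0 * reduced_mass(m_chi) ** 2 * G ** 2 / math.pi * GEV2_TO_PB
```

The quadratic dependence on the couplings is why a sign flip of f_N or a_N leaves the cross-section unchanged, which is what the negative-cross-section convention above encodes.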

TYPE(DetectorStruct) FUNCTION DDCalc_InitDetector(intervals)
TYPE(DetectorStruct) FUNCTION analysis_name_Init(intervals)

The first of these functions initialises an object carrying the information about the default experimental analysis; in DDCalc 1.0.0 this is the LUX 2013 analysis [75]. Here intervals is a flag indicating whether calculations should be performed for intervals between observed events or not. This is only necessary for maximum gap calculations, and can be set to FALSE for likelihood analyses. Non-default analyses can be obtained with the specific functions listed second, where analysis_name is one of the implemented analyses.

SetDetectorEmin
Manually sets the minimum nuclear recoil energy discernible by a given Detector to Emin, in keV. The default value is 0, meaning that the detector response is determined by the pre-calculated efficiency curves, which account for detector and analysis thresholds. Setting Emin to a non-zero value makes it possible to obtain more conservative exclusion limits, or to study the dependence of the experimental results on the assumed low-energy cut-off.

DDCalc_LogPValue
Uses the maximum gap method if Detector was initialised with intervals = TRUE and the analysis contains the necessary interval information to allow such a method; otherwise uses a Poisson distribution in the number of observed events, assuming zero background contribution (Eq. 28 with b = 0).

DDCalc_ScaleToPValue
Calculates the factor x by which the cross-sections must be scaled (σ → xσ) to achieve the desired p-value, given via lnp as ln(p). See DDCalc_LogPValue above for a description of the statistics.
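For reference, the maximum gap construction can be written down compactly via Yellin's C0 function. A self-contained sketch, assuming the standard formula (and, for simplicity, that µ is not an exact multiple of x):

```python
import math

def yellin_C0(x, mu):
    """Probability that the maximum gap is below x, for a total signal
    expectation mu (Yellin's C0). The exclusion confidence level for an
    observed maximum gap of size x_max is C0(x_max, mu)."""
    m = int(mu // x)
    total = 0.0
    for k in range(m + 1):
        # each term: (k x - mu)^k e^{-k x} / k! * (1 + k / (mu - k x))
        term = (k * x - mu) ** k * math.exp(-k * x) / math.factorial(k)
        total += term * (1.0 + k / (mu - k * x))
    return total
```

C0 grows monotonically with the observed gap, so a cross-section hypothesis is excluded at 90% CL once C0 exceeds 0.9; in the limit where the gap spans the full expectation (no events observed), C0 reduces to the Poisson result 1 − e^(−µ).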

C/C++ interface
For ease of use in linking to these routines from C/C++ code, a second (wrapper) version of each of the interface routines described above is defined within a C namespace DDCalc. These use C-compatible types only, and access interfaces to the main DDCalc library through explicitly-specified symbol names, to get around name-mangling inconsistencies between different compilers when dealing with Fortran modules. These functions work just like the ones above, but neither accept nor return WIMPStruct, HaloStruct nor DetectorStruct objects directly. Instead, they return and accept integers corresponding to entries in an internal array of Fortran objects, held in trust for the C/C++ calling program by DDCalc. Dedicated deletion routines can be used to delete the objects held internally by DDCalc.
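The underlying pattern is a simple handle registry: the caller holds only opaque integer indices, while the objects themselves live on the library side. A toy sketch of the idea (in Python for brevity; in DDCalc the objects live in a Fortran-side array behind a C wrapper):

```python
class HandleStore:
    """Objects are stored internally; callers only ever see integer handles."""

    def __init__(self):
        self._objects = {}
        self._next_handle = 1

    def create(self, obj):
        """Store obj and return the handle the caller uses from now on."""
        handle = self._next_handle
        self._next_handle += 1
        self._objects[handle] = obj
        return handle

    def get(self, handle):
        """Look up the internally-held object for a handle."""
        return self._objects[handle]

    def delete(self, handle):
        """Free the object; mirrors DDCalc's deletion routines."""
        del self._objects[handle]
```

This design keeps the C/C++ interface binary-compatible (only plain integers cross the language boundary) at the cost of requiring explicit deletion calls to avoid leaking library-side objects.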

WIMP-nucleon couplings
As shown in Table A1, GAMBIT contains three functions capable of calculating effective WIMP-nucleon couplings: DD_couplings_DarkSUSY, DD_couplings_MicrOmegas, and DD_couplings_SingletDM. Each has capability DD_couplings and returns a DM_nucleon_couplings object, which contains the parameters G^p_SI, G^n_SI, G^p_SD and G^n_SD. DD_couplings_DarkSUSY calculates these couplings for a generic MSSM model using DarkSUSY, while DD_couplings_SingletDM calculates them for the scalar singlet model internally. DD_couplings_MicrOmegas calculates the couplings using micrOMEGAs for either model; the appropriate version of the micrOMEGAs backend is chosen by GAMBIT depending on which model is being scanned.
For the MSSM, DD_couplings_DarkSUSY and DD_couplings_MicrOmegas compute the couplings by passing MSSM Spectrum information and nuclear parameters (described in more detail in the next section) to the external codes. By default, DarkSUSY does not take into account loop corrections to the DM-nucleon scattering process, aside of course from the one-loop coupling of the Higgs to gluons via the triangle diagram involving heavy quarks. On the other hand, certain classes of one-loop corrections are taken into account by default when using micrOMEGAs, such as SUSY QCD corrections to the Higgs-exchange diagrams or box diagrams involving external gluons (for details we refer the reader to [48]). Similar corrections have been implemented in DarkSUSY (see [86] for details), and in GAMBIT 1.1.0 we enable these by default. If the user instead wishes to use the tree-level DarkSUSY cross-sections, they can do so by setting the YAML option loop (default: true) of the function DD_couplings_DarkSUSY to false. Also added to this function is the option pole (default: false), which, when set to false, causes DarkSUSY to approximate the squark propagators in the calculation of the SI and SD couplings as 1/m_q̃², in order to avoid poles (this only applies when loop is false; otherwise this option is ignored). A final change in GAMBIT 1.1.0 is the addition of the box option (default: true) to DD_couplings_MicrOmegas, which determines whether the box diagrams for DM-gluon scattering are calculated for MSSM-like models.
In the case of the scalar singlet model, DD_couplings_SingletDM calculates the effective Higgsnucleon coupling f N internally, using the nuclear matrix elements relevant for SI scattering, and combines it with the scalar mass and Higgs-portal coupling as described in Ref. [87] to determine the effective SI couplings (there is no SD scattering in this model). DD_couplings_MicrOmegas uses micrOMEGAs with our CalcHEP implementation of the scalar singlet model for a similar calculation.

Nuclear uncertainties
When the interaction between DM and quarks can be described by a scalar operator, the spin-independent effective couplings G^p_SI and G^n_SI depend heavily on both the sea and valence quark content of the proton and neutron, respectively. These are parameterised by the 6 nuclear matrix elements f^(N)_Tq, defined via

m_N f^(N)_Tq = ⟨N| m_q q̄q |N⟩,

where N ∈ {p, n} and q ∈ {u, d, s}. The equivalent quantities for the heavy quarks Q ∈ {c, b, t} are related to these parameters via the formula [87,88]

f^(N)_TQ = (2/27) (1 − Σ_{q∈{u,d,s}} f^(N)_Tq).

The 6 lighter quark matrix elements are part of the nuclear_params_fnq model in GAMBIT. They can be calculated from other quantities more closely related to lattice and experimental results. These are often the light and strange quark contents of the nucleon, defined as

σ_l = m_l ⟨N| ūu + d̄d |N⟩,  σ_s = m_s ⟨N| s̄s |N⟩,

where m_l ≡ (1/2)(m_u + m_d). In the nuclear_params_sigmas_sigmal model, these 2 parameters replace the 6 f^(N)_Tq values, with the conversion between the two parameter sets described in Ref. [87]. An equivalent parameterisation replaces σ_s with the parameter σ_0, defined as

σ_0 = m_l ⟨N| ūu + d̄d − 2s̄s |N⟩.

By comparing the forms of the above equations, we see that σ_0, σ_l and σ_s are related by the formula

σ_0 = σ_l − 2 (m_l/m_s) σ_s.

The GAMBIT model nuclear_params_sigma0_sigmal contains σ_0 and σ_l. For the spin-dependent effective couplings G^p_SD and G^n_SD, the relevant nuclear parameters are ∆^(N)_q, which describe the spin content of the nucleons, where again N ∈ {p, n} and q ∈ {u, d, s}. Here there are only three independent parameters, since the values for the proton and neutron are directly related: ∆^(p)_u = ∆^(n)_d, ∆^(p)_d = ∆^(n)_u and ∆^(p)_s = ∆^(n)_s. All of the nuclear_params models contain the three ∆^(p)_q. All of the nuclear parameters are set in the appropriate backend when calculating the WIMP-nucleon couplings using the functions DD_couplings_DarkSUSY and DD_couplings_MicrOmegas. DD_couplings_SingletDM, which does not make use of an external backend, also uses the nuclear parameters in its calculation of the couplings.
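The conversions among these parameterisations are simple algebra. A sketch, assuming the commonly used quark-mass ratio m_s/m_l ≈ 27 (the exact value adopted in GAMBIT may differ):

```python
def sigma_s_from_sigma0(sigma_l, sigma_0, ms_over_ml=27.0):
    """Invert sigma_0 = sigma_l - 2 (m_l/m_s) sigma_s for sigma_s
    (all sigmas in MeV)."""
    return 0.5 * ms_over_ml * (sigma_l - sigma_0)

def f_TQ(f_Tu, f_Td, f_Ts):
    """Heavy-quark matrix element f_TQ^(N) for Q in {c,b,t}, from the
    light-quark ones via f_TQ = (2/27)(1 - sum_q f_Tq)."""
    return (2.0 / 27.0) * (1.0 - (f_Tu + f_Td + f_Ts))
```

For example, with σ_l = 58 MeV and σ_0 = 36 MeV, this mapping gives σ_s of a few hundred MeV, illustrating how sensitive the strangeness content is to the difference σ_l − σ_0.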
A combined likelihood for σ s and σ l is implemented in DarkBit as the capability lnL_SI_nuclear_parameters. This capability is provided by the function lnL_sigmas_sigmal (Table A6), in which the two likelihoods take the form of simple Gaussian distributions (Eq. 10), with the default values σ s = 43 ± 8 MeV, based on a global fit of lattice calculations [89], and σ l = 58 ± 9 MeV. This last value is based on the range of recent determinations of σ l in the literature from analyses of pion-nucleon scattering data [90][91][92]. σ l has also been extracted from lattice QCD results [93][94][95][96], and the more recent analyses based on this approach point to a lower preferred value of σ l ∼ 40 MeV.
For the ∆ (N ) q , we provide likelihoods for the following 2 combinations of parameters: and ∆ (p) s itself. A combined likelihood for all of these parameters is given by the capability lnL_SD_nuclear_parameters, which can currently only be fulfilled by the function lnL_deltaq (Table A6). The likelihoods again take the form of Gaussian distributions, with a 3 = 1.2723 ± 0.0023, determined via analysis of measurements of neutron β decays [97], and a 8 = 0.585 ± 0.023 based on hyperon β decay results [98]. ∆ (p) s = −0.09 ± 0.03, based on a a measurement of the spin-dependent structure function of the deuteron from the COMPASS fixed target experiment [99].
For both the SI and SD nuclear parameters, the central values and widths of the Gaussians can be adjusted with the YAML options param_obs and param_obserr respectively, where param refers to the parameter name (see Table A6 for the list of options).
At the beginning of a scan, GAMBIT creates DDCalc WIMP and Halo objects, as well as a separate Detector object for each supported experimental analysis (see Sec. 5.2.4 for explanation of these classes). For each point in a scan, it updates the WIMP object with newly-calculated nuclear couplings from DarkBit, and the Halo object with any new halo parameters. DarkBit provides a series of 'getter' routines for different quantities that the user may be interested in, for each supported analysis: the observed and predicted number of events, the predicted number of background and signal events (broken down into SI and SD components if desired), and the final Poissonian log-likelihood for the given model and experiment. These functions are detailed in Table A6. Each of them depends on a single calculation routine, which passes the WIMP, Halo and relevant Detector object (or rather, their internal indices) to DDCalc's own master DDCalc_CalcRates function (cf. Sec. 5.2.4). That function then computes the rates and likelihoods for the given analysis and model point, and provides them to the getter functions.

Background
Besides collider and direct searches, the third traditional way of looking for DM is to test the particle hypothesis in situ, by identifying the (Standard Model) products that result from DM annihilation or decay at places with large DM densities. Locally, the injection rate of a Standard Model particle type f, per volume and energy, is given by

Q(x, E_f) = (ρ_χ²(x) / (N_χ m_χ²)) Σ_i ⟨σ_i v⟩ dN_i/dE_f.    (39)

Here, σ_i is the annihilation cross-section into final state i, v the relative velocity of the annihilating DM particle pairs, ⟨...⟩ denotes an ensemble average over the DM velocities, and dN_i/dE_f is the (differential) number of particles f that result per annihilation into final state i. The dark matter mass density is given by ρ_χ and its particle mass by m_χ. N_χ depends on the nature of the DM particle, e.g. N_χ = 2 for Majorana fermions, and N_χ = 4 for Dirac fermions. For decaying DM, one would simply have to replace ⟨σ_i v⟩ ρ_χ²/N_χ → m_χ Γ_i ρ_χ in the above expression, where Γ_i is the partial decay width.
The yields (i.e. the number and energy distribution of final state particles) are typically not significantly affected by the ensemble average, allowing one to write ⟨σ_i v dN_i/dE_f⟩ ≃ ⟨σ_i v⟩ dN_i/dE_f (and correspondingly for the decaying case), and therefore to tabulate dN_i/dE_f|_{v=0} for a pre-defined set of centre-of-mass energies (given by the DM mass for highly non-relativistic DM), e.g. for annihilation into quarks. Interpolating between these tables rather than running event generators such as Pythia [100] for every model point constitutes a significant gain in performance.
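The tabulate-and-interpolate strategy can be illustrated with a simple log-log interpolator, which is exact for power-law spectra (a sketch only; the actual yield tables in DarkSUSY and related codes use their own interpolation schemes):

```python
import math

def interp_loglog(x_tab, y_tab, x):
    """Linear interpolation of a tabulated, positive function in
    (log x, log y) space; reproduces power laws exactly."""
    lx = [math.log(v) for v in x_tab]
    ly = [math.log(v) for v in y_tab]
    lxq = math.log(x)
    for i in range(len(lx) - 1):
        if lx[i] <= lxq <= lx[i + 1]:
            t = (lxq - lx[i]) / (lx[i + 1] - lx[i])
            return math.exp(ly[i] + t * (ly[i + 1] - ly[i]))
    raise ValueError("x outside tabulated range")
```

Because gamma-ray yields dN/dE fall steeply (roughly as power laws over wide energy ranges), log-log interpolation on a coarse energy grid stays accurate where naive linear interpolation would not.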
The DM density enters quadratically into Eq. 39. This implies that substructures in the DM distribution (usually in the form of subhalos within larger halos) in general enhance the observable annihilation flux significantly with respect to what one would expect in the absence of substructures. In practice, one hence often replaces the DM density squared as follows:

ρ_χ²(x) → [1 + B_χ(x)] ρ_χ,nosub²(x),    (40)

where ρ_χ,nosub(x) is the DM distribution smoothed over substructures (describing the general distribution of DM in the main halo), whereas B_χ(x) is the boost factor that parameterises the enhancement due to substructure.
Once produced, those particles f then propagate through an often significant part of the Galaxy before they reach the observer. The details of this process depend strongly on the type of messenger, as well as on the type of the source. Gamma rays (Section 6.1.1) play a prominent role in this context as they propagate completely unperturbed through the Galaxy for energies below a few TeV, thus offering distinct spatial and spectral features to look for [101]. While much harder to detect, this property is shared by neutrinos (Section 6.1.2); they are furthermore unique in that they can easily leave even very dense environments, and hence are the only probes of expected high DM concentrations in celestial bodies like the Sun and the Earth [102].
Charged cosmic ray particles, on the other hand, are deflected by randomly distributed inhomogeneities in the Galactic magnetic field such that (almost) all directional information is lost. In particular, antiprotons provide a powerful tool to constrain DM annihilating or decaying to hadronic channels [103][104][105], while cosmicray positron data strongly constrain leptonic channels [106]. While charged cosmic rays are not included in this first release of DarkBit, the implementation of both propagation and relevant experimental likelihoods for these channels is high on the priority list for planned extensions of the code (see Section 8).

Gamma rays
Gamma-ray spectra from dark matter annihilation or decay can be broadly separated into two components (see Ref. [107] for a review). (1) Continuous gamma-ray spectra are generated in annihilations into quarks, massive gauge bosons and τ-leptons. Here the gamma-ray photons come mostly from the decay of neutral pions (π⁰ → γγ), which are produced in fragmentation and hadronisation processes. Most of the gamma-ray energy is deposited into photons with energies about an order of magnitude below the dark matter mass. (2) Prompt photons are directly produced in the hard part of the annihilation process and lead to much sharper features in the gamma-ray spectrum, typically with energies close to the DM mass. The most extreme example is annihilation into photon pairs (χχ → γγ), which gives rise to mono-energetic gamma rays [108]. Virtual internal bremsstrahlung [109] or box-like features from cascade decays [110] can also play a significant role. Such sharp spectral features are usually much simpler to discern from astrophysical backgrounds, and hence play a central role in indirect dark matter searches.
Various target objects are interesting for dark matter searches with gamma rays. The predicted annihilation flux is largest from the centre of the Milky Way. However, the diffuse gamma-ray emission caused by cosmic rays along the line-of-sight towards the Galactic bulge makes detecting a signal from the Galactic centre subject to large systematic uncertainties. Simpler and basically background-free targets are dwarf spheroidal galaxies, which are dark matter-dominated satellite galaxies of the Milky Way. We will discuss the results from various targets and instruments that we use in DarkBit in Sec. 6.2.2.
In all cases, the morphology and intensity of the gamma-ray annihilation signal depends on the spatial distribution and clumping of dark matter in the target object, according to Eq. 40. For the various likelihoods that we discuss below, in most cases the user can choose to either employ the spatial distribution of dark matter in the Milky Way as defined in the halo model used in the corresponding scan (see Sec. 3), or to make the same assumptions on the target halos as in the corresponding publications from which the likelihoods were extracted. In both cases, we neglect the substructure boost, which can be of O(1) for reasonable assumptions [111]. It is however straightforward to change the halo properties, if so desired.

Neutrinos
Like other indirect probes, searches for high-energy astrophysical neutrinos can be used to constrain the DM annihilation cross-section, by looking in directions with high DM densities such as the Galactic centre and dwarf galaxies. Unlike other indirect searches, however, neutrinos can also probe nuclear scattering cross-sections. This is because a cross-section with nuclei implies that DM can scatter on nuclear matter in stars and other compact objects, losing sufficient energy to become gravitationally bound in the object's potential well [112][113][114][115]. Being on a bound orbit, the DM then returns to scatter repeatedly, eventually settling to an equilibrated, concentrated distribution within the stellar body. If it is of a variety that is able to annihilate, DM therefore does so in the stellar core, producing high-energy annihilation products. Many of these products are stopped very quickly by interactions with nuclei, forming unstable intermediaries such as B mesons, which go on to decay to other SM particles, including high-energy neutrinos [116].
Neutrinos produced this way, and any produced directly in the annihilation, are able to escape the stellar body because they interact so weakly with nuclei. They can then propagate to Earth, and be detected by neutrino telescopes. The most promising target for this type of search by far is the Sun, owing to its proximity and deep potential well, allowing it to accumulate large amounts of DM.7 At present, neutrino telescope limits on the total DM annihilation cross-section are weak [124,125], and cannot compete with gamma rays. In DarkBit we therefore currently focus exclusively on solar searches for high-energy neutrinos, and their implications for annihilating DM models with significant nuclear scattering cross-sections. Here 'high energy' means GeV-scale neutrino energies; MeV-scale neutrinos are more difficult to distinguish from regular solar (fusion) neutrinos, and neutrinos with energies in the TeV range and above suffer from significant nuclear opacities in the Sun, making their escape difficult and substantially lowering their fluxes at Earth.

7 We point out that such a population of DM in the Sun and other stars can have a raft of other observable consequences beyond high-energy neutrinos, which can be highly relevant for some models [117][118][119][120][121][122][123]; these are slated for inclusion in future releases of DarkBit.
For scattering cross-sections that do not depend on the relative velocity or momentum transfer in the DM-nucleus system, the capture rate in the Sun is given by [114]

C(t) = ∫₀^{R_⊙} 4πr² dr ∫₀^∞ du (f(u)/u) w Ω_v⁻(w),    (41)

where r is the height from the solar centre, u is the DM velocity before being influenced by the Sun's gravitational potential, and f(u) is the distribution of u in the Sun's rest frame. The quantity Ω_v⁻(w) is the scattering rate of DM particles from velocity w to less than v, where v is the local escape velocity at height r in the Sun, and w = w(u, r, t) ≡ [u² + v(r, t)²]^{1/2} is the velocity an infalling DM particle obtains by the time it collides with a nucleus at height r. Ω_v⁻(w) thus describes the local capture rate at height r in the Sun, from the part of the velocity distribution corresponding to incoming velocity u. The total population N_χ of DM in the Sun can then be determined at any point in its lifetime by solving the differential equation

dN_χ/dt = C − A N_χ² − E N_χ,    (42)

where A and E are the annihilation and evaporation rates, respectively. Except where DM is lighter than a few GeV, E is generally negligible [126][127][128]. Assuming E = 0, and that C and A are constant, the solution to Eq. 42 approaches a steady state on a characteristic timescale t_χ, the equilibration time between capture and annihilation. When this steady state is reached, the rate-limiting step in the whole process is capture rather than annihilation. In this regime, the annihilation rate is identical to the capture rate, and the annihilation cross-section has no further bearing upon the number of neutrinos coming from the Sun. Many previous analyses have assumed that capture and annihilation are in equilibrium in the Sun, so that the annihilation rate can be obtained directly from the capture rate; in DarkBit we instead solve Eq. 42 and determine N_χ explicitly for each model. Knowing the DM population (and therefore annihilation rate) by solving Eq.
42, the annihilation branching fractions can then be used to determine the spectra of high-energy particles injected into the Sun, on a model-by-model basis. The stopping of these annihilation products, ensuing particle production and decay, and subsequent propagation and oscillation of neutrinos through the Sun, space and Earth, have been studied by extensive Monte Carlo simulations [116,129]. The resulting neutrino yield tables at Earth are included in DarkSUSY [15], PPPC4DMID [42] and micrOMEGAs [16]. In DarkBit, the canonical method to obtain these fluxes is to compute the capture and annihilation rates using DarkSUSY, and to then obtain neutrino yields at Earth from the WimpSim [130] tables contained therein (although getting the same from PPPC4DMID or micrOMEGAs would also be straightforward).
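For constant C and A (and E = 0), Eq. 42 has the closed-form solution N_χ(t) = √(C/A) tanh(t/t_χ) with t_χ = 1/√(CA), which is what makes the explicit population solve cheap. A sketch, assuming the convention that each annihilation event removes two particles (so the event rate is A N²/2):

```python
import math

def n_chi(t, C, A):
    """Solution of dN/dt = C - A N^2 with N(0) = 0 (evaporation neglected)."""
    t_eq = 1.0 / math.sqrt(C * A)          # equilibration timescale t_chi
    return math.sqrt(C / A) * math.tanh(t / t_eq)

def annihilation_event_rate(t, C, A):
    """Gamma_ann = (1/2) A N^2; saturates at C/2 once capture and
    annihilation have equilibrated."""
    return 0.5 * A * n_chi(t, C, A) ** 2
```

If the age of the Sun is large compared to t_χ, the annihilation rate is fixed entirely by the capture rate; otherwise the annihilation cross-section still affects the neutrino flux, which is why DarkBit solves for N_χ explicitly rather than assuming equilibrium.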
Although SI scattering of DM by e.g. oxygen, helium and iron can dominate the capture rate for some models, the differing strength of direct limits on SI and SD scattering cross-sections, and the fact that hydrogen possesses nuclear spin, mean that typically, solar neutrinos are most interesting for SD scattering.
Neutrino telescopes are presently responsible for the strongest limits on the SD scattering cross-section with protons, with IceCube providing the tightest limits above masses of ∼100 GeV [17,131], Super Kamiokande (Super-K) dominating at lower masses [132], and ANTARES and Baksan providing weaker constraints. We implement the IceCube search likelihood on a model-by-model basis, using the nulike package [17,133] to compute the likelihood function for each model, given its predicted neutrino spectrum at Earth. Nulike computes a fully unbinned likelihood based on the event-level energy and angular information contained in the three independent event selections of the 79-string IceCube dataset [134]. We do not implement a Super-K likelihood for now, as a) unlike IceCube, Super-K have not publicly released their event data, b) IceCube does have some sensitivity at low mass, and c) Super-K data only become more constraining for relatively light DM particles (at least in the context of SUSY and scalar singlet models, given other constraints).

Overview
Constraints on dark matter annihilation come from gamma-ray observations of various targets with various instruments. The experimental collaborations usually present their results as constraints on particular annihilation channels, assuming particular dark matter profiles. This often makes the limits difficult to compare with one another, and also makes it hard to use the experimental results directly in scans over dark matter models with complex final states. In order to simplify and unify the adoption of gamma-ray indirect detection results in global scans and beyond, we present gamLike. 8 The gamLike code is released under the terms of the MIT license 9 and maintained as a standalone backend by the GAMBIT Dark Matter Workgroup. It can be obtained from http://gamlike.hepforge.org.
The present version of gamLike ships with the likelihood functions discussed in Sec. 6.1.1, which are also listed in Table 7. It is written in C++ and can be linked either as a shared library (this is how it is used in GAMBIT), or as a static library. All experimental likelihoods are called in the same way: with a function that takes as its argument a tabulated version of the quantity

dΦ/dE = ((σv)₀ / (8π m_χ²)) dN_γ/dE,    (43)

which is the differential version of Eq. 45. The integration over energy bins happens within gamLike, according to the energy bins used in the various analyses. Eq. 43 holds for self-conjugate dark matter particles, but can be easily adapted to e.g. Dirac fermion dark matter by using the appropriate prefactors, as discussed in Sec. 6.1. Various options for the so-called J-factors, which describe the effective DM content of the targets, are included in gamLike as well. These make it possible to marginalise or profile over J-factor uncertainties. The implementation of the combined dwarf limits from Ref. [136], for example, performs a profiling over the J-factors of all 15 adopted dwarfs separately when determining the combined likelihood value. The various implemented treatments are listed in Table 7.

gamLike targets
Dwarf spheroidal galaxies with Fermi-LAT Some of the most stringent and robust limits on the dark matter annihilation cross-section come from the non-observation of gamma-ray emission from dwarf spheroidal galaxies (dSphs). The most recent and stringent constraints on gamma-ray emission from dSphs, from six years of data from the Fermi Large Area Telescope (LAT), were derived in Ref. [136], based on the new Pass 8 event-level analysis. They performed a search for gamma-ray emission from the sources and presented upper limits that in general disfavour s-wave dark matter annihilation into hadronic final states at the thermal rate for dark matter masses below around 100 GeV.
The results from Ref. [136] are available as tabulated binned Poisson likelihoods. 10 The composite likelihood from the dSph analysis is given by

L(Φ) = Π_{k=1}^{N_dSph} Π_{i=1}^{N_ebin} L_ki(Φ_i · J_k).    (44)

Here N_dSph is the number of dwarf spheroidal galaxies included in the analysis, and N_ebin is the number of energy bins to be considered. The partial likelihoods L_ki depend on the predicted signal flux Φ_i · J_k. This is a product of the particle physics factor

Φ_i = ((σv)₀ / (8π m_χ²)) ∫_{∆E_i} dE dN_γ/dE,    (45)

which depends only on the gamma-ray yield per annihilation dN_γ/dE and the zero-velocity limit of the annihilation cross-section ((σv)₀ ≡ σv|_{v→0}), and the astrophysics factor

J_k = ∫_{∆Ω_k} dΩ ∫_{l.o.s.} ds ρ_χ².    (46)

Here, ∆E_i and ∆Ω_k denote the energy bin i and solid angle k over which the signal is integrated, m_χ is the mass of dark matter particles and ρ_χ is the dark matter mass density in the target object. Our knowledge of the distribution of dark matter in dSphs relies on Jeans analyses of the kinematics of member stars. Following Refs. [136,142], to a good approximation the corresponding uncertainties for the J-factors can be modelled by a log-normal distribution,

L_k(J_k) = (1 / (ln(10) J_k)) N(log₁₀ J_k | log₁₀ J̄_k, σ_k),    (47)

with parameters J̄_k and σ_k taken from Ref. [136], and N(x|µ, σ) being a normal distribution with mean µ and standard deviation σ. The halo and gamma-ray likelihoods can be combined by profiling over the nuisance parameters J_k. The corresponding profile likelihood is given by

L_prof(Φ) = Π_k max_{J_k} [ L_k(J_k) Π_i L_ki(Φ_i · J_k) ].    (48)

An alternative is to marginalise over the nuisance parameters, which yields the marginalised likelihood function

L_marg(Φ) = Π_k ∫ dJ_k L_k(J_k) Π_i L_ki(Φ_i · J_k).    (49)

The main results from Ref. [136] are derived using the composite profile likelihood for 15 dwarfs, and this is what we implemented in gamLike. Furthermore, for comparison, we also implemented the older results from Ref. [135], which are based on four years of Pass 7 data. In both cases, the energy range spans from 500 MeV to 500 GeV. The implemented likelihoods are listed in Table 7.
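The per-dwarf profiling over the J-factor nuisance parameter can be sketched with a simple grid scan, using toy Poisson bin likelihoods in place of the tabulated partial likelihoods (everything here is illustrative, not the gamLike implementation):

```python
import math

def lnL_poisson(n, mu):
    """Toy per-bin log-likelihood: Poisson counts with expectation mu."""
    return n * math.log(mu) - mu - math.lgamma(n + 1)

def ln_lognormal_prior(log10J, log10J_mean, sigma):
    """ln of a normal distribution in log10(J) (the J-factor constraint,
    up to the Jacobian factor 1/(ln10 * J))."""
    return -0.5 * ((log10J - log10J_mean) / sigma) ** 2 \
           - math.log(sigma * math.sqrt(2.0 * math.pi))

def profile_lnL(phi, n_obs, bkg, log10J_mean, sigma_J):
    """Maximise over J: lnL_J(J) + sum_i lnL_i(phi_i * J + b_i),
    scanning +/- 3 sigma around the central log10(J) on a grid."""
    best = -math.inf
    for k in range(-300, 301):
        log10J = log10J_mean + sigma_J * k / 100.0
        J = 10.0 ** log10J
        ln = ln_lognormal_prior(log10J, log10J_mean, sigma_J)
        for phi_i, n_i, b_i in zip(phi, n_obs, bkg):
            ln += lnL_poisson(n_i, phi_i * J + b_i)
        best = max(best, ln)
    return best
```

The combined dwarf likelihood is then simply the sum of such profiled log-likelihoods over all targets, which is why the J-factor uncertainty of each dwarf can be treated independently.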
Note that the effects of energy dispersion are neglected when evaluating the constraints. Given that current constraints from a dedicated line search are much more constraining than the line limits that one can derive from dSph observations, this limitation is of little practical consequence. Implementing Fermi-LAT line search results is high on the priority list for future versions of gamLike (see Sec. 8).

The 'Fermi Galactic centre excess'
Gamma-ray observations of the Galactic centre with the Fermi-LAT identified an extended excess emission at GeV energies, which can be interpreted as a dark matter annihilation signal (see e.g. [138,143-148]). Although the case for millisecond pulsars as an explanation for the excess emission was strengthened in recent analyses [149,150], it remains interesting to consider the consequences of various DM explanations.
Ref. [138] presented a spectral characterisation of the GeV excess that included estimates of systematic uncertainties, which were derived from residuals along the Galactic disk. These uncertainties are correlated across the different energy bins. The corresponding likelihood function was approximated as Gaussian, and has the form

ln L_GC = −(1/2) Σ_ij (J_GC Φ_i − f_i) (Σ⁻¹)_ij (J_GC Φ_j − f_j),   (50)

where f_i denotes the measured flux in energy bin i, Σ_ij is the covariance matrix, and J_GC denotes the J-factor of the Galactic centre Region-Of-Interest (ROI). The considered energy range spans from 300 MeV to 500 GeV, and the ROI covers Galactic latitudes 2° < |b| < 20° and longitudes |ℓ| < 20°. For the J-factors, the default behaviour is to employ the value calculated from the halo model used in the corresponding scan (see Sec. 3). 11 Alternatively, one can choose a fixed J-factor, J = 2.07 × 10²³ GeV² cm⁻⁵, corresponding to a contracted NFW profile with inner slope γ = 1.2 (see Ref. [138] for details), or derive a marginalised likelihood function assuming a log-normal distribution of J-factors with mean 1.96 × 10²³ GeV² cm⁻⁵ and standard deviation of 0.37 (as done in Ref. [139], motivated by Ref. [151]).
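Evaluating a Gaussian likelihood with correlated energy bins reduces to a quadratic form in the residuals. A minimal sketch with a hand-inverted 2×2 covariance matrix (toy numbers of ours; gamLike uses the full tabulated covariance from Ref. [138]):

```cpp
#include <cstddef>
#include <vector>

// ln L = -1/2 * sum_ij (J*Phi_i - f_i) (Sigma^{-1})_ij (J*Phi_j - f_j)
// invSigma must already be the inverse of the covariance matrix Sigma.
double gaussian_lnL(double J,
                    const std::vector<double>& Phi,   // particle-physics flux per unit J
                    const std::vector<double>& f,     // measured fluxes per energy bin
                    const std::vector<std::vector<double>>& invSigma) {
    const std::size_t n = f.size();
    double lnL = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            lnL += -0.5 * (J * Phi[i] - f[i]) * invSigma[i][j] * (J * Phi[j] - f[j]);
    return lnL;
}
```

A perfect fit (J·Φ_i = f_i in every bin) gives ln L = 0; any mismatch is penalised according to the bin-to-bin correlations encoded in Σ.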
Finally, we also included a likelihood function that (on top of astrophysical uncertainties) includes an estimated 10% uncorrelated systematic accounting for possible uncertainties in the modelling of the DM signal spectrum. (This scenario was considered in the work of Ref. [139], and we include it here for completeness.) All the available likelihoods are listed in Table 7.

Galactic centre observations with H.E.S.S.
For dark matter masses above several hundred GeV, the current best limits on dark matter annihilation come from observations of the Galactic centre with the air Cherenkov telescope H.E.S.S. [137], based on 112 hours of data. The limits are derived from a comparison of the measured gamma-ray fluxes in a search region within 1° of the Galactic centre and a background region just outside the inner 1°, i.e. from the non-observation of an excess of gamma-ray emission in the signal region over the flux measured in the background region.
We model the corresponding likelihood function as a Gaussian,

ln L_HESS = −(1/2) Σ_i [ f_i^sig − R f_i^bg − (J_sig − R J_bg) Φ_i ]² / σ_i²,

where J_sig(bg) denotes the J-factors in the signal (background) regions, f_i^sig(bg) the corresponding fluxes in energy bin i with uncertainties σ_i, and R ≡ Ω_sig/Ω_bg is a geometrical rescaling factor that depends on the angular sizes Ω_sig(bg) of the signal (background) regions. For the dark matter profile, we assume by default the halo model employed in the corresponding scan. Alternatively, one can use the NFW profile adopted in Ref. [137] (local dark matter density of 0.39 GeV cm⁻³ at 8.5 kpc distance from the Sun).

11 Note that here only the overall flux within the region |ℓ| < 20° and 2° < |b| < 20° is rescaled. Variations in the signal morphology are not taken into account in the current treatment.
In Ref. [137], limits are derived by combining all energy bins into a single wide energy bin from 265 GeV to 30 TeV. Using this same energy bin, we can reproduce the results from that work. However, Ref. [137] also provides enough information for a spectral analysis with 35 energy bins in the range 230 GeV to 30 TeV, which we implemented as well (see Table 7). This provides more accurate results in cases where the DM signal has a pronounced spectral structure (like the annihilation spectrum of τ-leptons). However, because the effects of energy dispersion are not included in the present version of the likelihood, results obtained with this likelihood should be interpreted with care. Spectral features like gamma-ray lines should instead be constrained by results from a dedicated line search.

Projected Galactic centre searches with CTA
As discussed in Ref. [140], the sensitivity of the future Cherenkov Telescope Array (CTA) to diffuse emission from DM annihilation will not be limited by statistics, but mainly by systematic uncertainties in the differential detector acceptance within the Field-of-View (FoV), and by the modelling of the Galactic diffuse emission. Ref. [140] addressed these issues and proposed a combined morphological analysis of the fluxes measured in different segments of the FoV. To this end, it is (optimistically) assumed that the Galactic diffuse emission can be modelled well up to an overall unconstrained normalisation. The results of that work can be represented as tabulated likelihood functions [152].
The corresponding likelihood function takes essentially the form of Eq. 44 above, and covers energies between 25 GeV and 10 TeV. The halo model is fixed to the Einasto profile with local density ρ_χ = 0.4 GeV cm⁻³ used in the original analysis (since that analysis took morphological information into account, it is currently not possible for the projected CTA limits to adopt the general halo model used in the scan). Further details about the adopted detector response, Galactic diffuse emission model and DM profile can be found in [140].

Library Interface (API)
The gamLike library interface is the same for all implemented experiments. There are four relevant functions that can currently be called, one of which initialises the Galactic halo model. The experiment_tag is a C++ enum that corresponds to one of the likelihoods listed in Table 7. The relevant functions and the enum are declared in gamLike.hpp, and a C-compliant API, e.g. for linkage with Fortran code, is available in gamLike_C.hpp.
The range over which dΦ/dE is tabulated should cover the energy range of the activated experiments as shown in Table 7. Outside of the tabulated range it is assumed to be zero. The integration over energy bins is a simple trapezoidal integration based on the tabulation grid. This has the advantage that spectral features can be arbitrarily well resolved, provided the user chooses a fine enough grid in the critical energy range (but we note again that energy dispersion effects are not currently included in gamLike).
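A simplified stand-alone sketch of this bin integration (not the actual gamLike routine; function and variable names are ours) might look as follows:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Integrate a tabulated spectrum dPhi/dE over the energy bin [Elo, Ehi]
// by trapezoids on the tabulation grid. Outside the grid, dPhi/dE = 0.
double bin_integral(const std::vector<double>& E,       // ascending tabulation grid
                    const std::vector<double>& dPhidE,  // tabulated flux values
                    double Elo, double Ehi) {
    double result = 0.0;
    for (std::size_t i = 0; i + 1 < E.size(); ++i) {
        const double a = std::max(E[i], Elo), b = std::min(E[i + 1], Ehi);
        if (b <= a) continue;  // grid segment lies outside the bin
        // linearly interpolate the tabulated flux at the clipped endpoints
        const double slope = (dPhidE[i + 1] - dPhidE[i]) / (E[i + 1] - E[i]);
        const double fa = dPhidE[i] + slope * (a - E[i]);
        const double fb = dPhidE[i] + slope * (b - E[i]);
        result += 0.5 * (fa + fb) * (b - a);
    }
    return result;
}
```

A finer tabulation grid in the region of a sharp spectral feature directly translates into a more accurate bin integral, which is the behaviour described above.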

The Process Catalogue
One of the central structures in DarkBit is the 'Process Catalogue'. This is an object of C++ type DarkBit::TH_ProcessCatalog. Functions able to generate the Process Catalogue (Table A1) have capability TH_ProcessCatalog. The Process Catalogue carries all relevant information about the decay and annihilation of particles. This information is mainly used to calculate dark matter annihilation rates, and gamma-ray and neutrino yields for indirect searches. It can also be used for relic density calculations, although in this case coannihilation processes are currently not supported. The information from the Process Catalogue is also used in the cascade annihilation Monte Carlo, which we discuss in Sec. 6.3.4.

The overall structure of the catalogue is:

Process Catalogue:
- Processes
  - Initial state: χχ
    - Channels
      - b b̄: genRate
      - µ+µ−γ: genRate
      - ...

For a detailed example of how to set up the catalogue and access data within, we refer the reader to the source code of the DarkBit_standalone_WIMP example program described in Sec. 7.2.
The Process Catalogue carries a list of annihilation and decay processes. Each process has one (decays) or two (annihilations) initial states, and a list of decay/annihilation channels. Each channel consists of a list of two or three final state particles (more than three final state particles are currently not supported), as well as the specific rate for that channel, given by genRate, which has type daFunk::Funk (see Appendix B for details). It provides information about the decay width or the annihilation cross-section of the described process.
In the case of two-body decay, genRate is simply a constant that equals the decay width Γ in GeV (cf. Table 8). In the case of two-body annihilation, it is the annihilation cross-section (σv), given as a function of the relative velocity "v". For three-body decays, genRate refers to the differential decay width, which is a function of the two kinematic variables "E" and "E1", corresponding to the energy of the first and second final state particles, respectively. In the case of three-body annihilation final states, genRate refers to the differential annihilation rate, and has an additional dependence on the relative velocity "v" (as in the two-body case).
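These conventions can be mimicked with a minimal mock-up. The struct and helper names below (Channel, make_swave_channel, make_threebody_channel) and the toy three-body kinematics are hypothetical illustrations, not DarkBit types; the real genRate is a daFunk::Funk that resolves the named arguments "v", "E" and "E1" at run time.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

using Args = std::map<std::string, double>;

// One annihilation/decay channel: 2 or 3 final states plus its rate function
struct Channel {
    std::vector<std::string> finalStates;
    std::function<double(const Args&)> genRate;
};

// Two-body annihilation: genRate is sigma*v as a function of "v"
// (here a pure s-wave, i.e. constant in v)
Channel make_swave_channel(double sigmav0) {
    return {{"b", "bbar"},
            [sigmav0](const Args&) { return sigmav0; }};
}

// Three-body final state: differential rate in the energies "E" and "E1"
// of the first and second final state particles (toy kinematics only)
Channel make_threebody_channel(double norm, double mchi) {
    return {{"b", "bbar", "gamma"},
            [norm, mchi](const Args& a) {
                const double E = a.at("E"), E1 = a.at("E1");
                return (E < mchi && E1 < mchi) ? norm * E * E1 / (mchi * mchi) : 0.0;
            }};
}
```

The point of the indirection is the same as in Table 8: downstream code can evaluate any channel's rate uniformly, supplying only the named kinematic variables that the channel type requires.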
Lastly, each process also has a genRateMisc, which describes additional invisible contributions to the decay width or annihilation cross-section that are not associated with a specific channel, but can affect relic density calculations. The different roles of genRate are summarised in Table 8. Note that genRateMisc enters the calculation of W_eff from the Process Catalogue (if this is how the user has chosen to obtain W_eff), but does not directly affect annihilation yields.
Besides the list of processes, the catalogue also comes with a list of particle properties relevant for its processes. This section of the catalogue maps particle IDs onto particle masses and spin. Masses are required for calculating decay or annihilation kinematics. The remaining information is currently unused, but has obvious potential future applications.
We stress that channels involving three-body final states are conceptually different from those with two-body final states, because they cannot be implemented independently of the two-body states, unless the contribution from any of the associated two-body processes is absent or strongly suppressed. (An example of the latter situation is virtual internal bremsstrahlung from neutralinos annihilating to light fermions [109].) In general, three-body final states provide a correction to the tree-level result, and genRate hence returns the difference between the full NLO spectrum and the spectrum at tree level. The output can therefore be positive or negative. This implies that setting up many-body final states in the Process Catalogue requires detailed knowledge of how the tree-level annihilation or decay spectrum is defined. 12

12 Final state radiation of photons and gluons, for example, is often argued to contribute to the 'model-independent part' of the three-body spectrum, and to therefore typically be included already in tabulated yields from two-body final states obtained by Monte Carlo simulation. Sometimes even electroweak final state radiation is added following a similar philosophy [42]. In general, however, all these contributions are highly model-dependent and can differ substantially from the 'model-independent' results [153-155]. For q q̄ g final states, for example, the change in the normalisation of q q̄ final states due to loop corrections at the same order in α_s must also be included for consistency [153]. For three-body final states involving Higgs or electroweak gauge bosons, it is challenging to even define the contribution from a single annihilation or decay channel in a consistent way [154,155]. Although this can of course always be done formally for the sake of fitting into the structure of the Process Catalogue, no particular physical significance should be associated with any individual channel in that case. Rather, only the sum over all three-body channels provides a meaningful correction (to the total tree-level yield resulting from the sum over all associated two-body channels).
While the structure of the Process Catalogue in principle allows one to take into account all possible radiative corrections (with the above caveats in mind), currently only three-body final states involving hard photons are included explicitly in the Process Catalogue. Contributions from the decay and/or fragmentation of (on-shell) final state particles can be obtained either from tabulated yield tables (Sec. 6.3.2) or via DarkBit's own Fast Cascade Monte Carlo (FCMC; Sec. 6.3.4).

Gamma rays
As discussed above, the calculation of annihilation or decay spectra often involves tabulated results from event generators like Pythia [156]. In order to allow maximal flexibility with the adopted yield tables, DarkBit provides access to tabulated yields using a general structure called SimYieldTable. Currently, this structure makes it possible to import spectra from the DarkSUSY and micrOMEGAs backends 13 , but can easily be extended to other sources. The information carried by a SimYieldTable is summarised in the following.
Each one- or two-particle spectrum is defined by the ID(s) of the initial particle(s), the ID of the tabulated final-state particle (currently only gamma rays are implemented), and the centre-of-mass energy range for which the tabulated spectrum is available. The spectrum itself is conveniently wrapped into a daFunk::Funk object (like genRate above; see Appendix B for more details). We emphasise that this object does not, in most cases, directly carry the tables from DarkSUSY or micrOMEGAs, but is merely a convenient and flexible wrapper that directly calls the corresponding backend functions.
In extreme cases, the tabulated yields implemented in different dark matter codes can differ substantially -mostly due to being based on different (versions of) event generators, but also due to different methods (or the absence of a method) for including contributions from higher-order processes. With the above structure, DarkBit offers a flexible and convenient way to select the desired yields and switch between them for detailed comparisons.
The gamma-ray yield is calculated by module functions with the capability GA_AnnYield, based on information from both the ProcessCatalog and the SimYieldTable. These are outlined in Table A7. Note that the resulting spectra are in general only partially based on the SimYieldTable, and can also make use of analytic expressions for e.g. three-body final states, or include results from the FCMC (Sec. 6.3.4). The result of GA_AnnYield is a daFunk::Funk object. It refers to the physical expression m_χ⁻² · σv · dN/dE, which is equivalent to Eq. 43 up to a factor of 8π. It is a function of the photon energy "E" and of the relative velocity "v" (currently, only the v = 0 case is used for actual likelihood evaluations; adding velocity-dependent effects is planned for the near future). Note that the function object is in general a composite object that wraps various DarkSUSY and micrOMEGAs functions providing e.g. tabulated yields or differential cross-sections for three-body final states. Calling this generalised function at different values of E or v calls all of these backend functions behind the scenes, sums up and rescales their results, and performs phase space integrations if necessary. Note that the calculation of gamma-ray spectra often involves internal integrations (over three-body phase space, or for boosting spectra into the lab frame). In cases where the integrations fail, a warning is issued and the integration returns zero (see Appendix B). This implies that derived upper limits are in all cases conservative.
If results from the FCMC are required, the Monte Carlo simulation is automatically run before the GA_AnnYield module function (see Sec. 6.3.4). It is the job of module functions with capability GA_missingFinalStates to determine, by comparing Process Catalogue entries and the SimYieldTable, for which final states the cascade annihilation Monte Carlo is necessary.
In Fig. 3, we show a number of annihilation spectra generated with GA_AnnYield, comparing the yields obtained from various backends, including line features. The processes are indicated in the legend of the figure.
One of the spectra shown in Fig. 3 corresponds to annihilation into Z⁰γ final states. The actual spectrum is calculated as a combination of a monochromatic γ and the gamma-ray yield from the decay of a single Z⁰. The latter is approximated by the tabulated spectrum for Z⁰Z⁰ final states at threshold, with the photon yield divided by two.

Gamma-ray likelihood functions in DarkBit make use of the backend gamLike (see Sec. 6.2 and Table 7). As detailed in Table A8, the resulting likelihoods are wrapped in various DarkBit module functions, with one capability for each experiment-target pair. The different options concerning J-factors or the version of a measurement can be selected with the run-time option version in the YAML file. The list of available capabilities and modes is:
- lnL_FermiLATdwarfs (version = "pass7", "pass8"),
- lnL_FermiGC (version = "fixedJ", "margJ", "margJ_HEP", "externalJ"),
- lnL_HESSGC (version = "integral_fixedJ", "spectral_fixedJ", "integral_externalJ", "spectral_externalJ"),
- lnL_CTAGC ().

Neutrinos
The different neutrino indirect detection capabilities of DarkBit are summarised in Table A9. The capabilities that describe the relevant WIMP properties are listed in Table A2.
The neutrino routines in DarkBit use the DM mass and nuclear scattering cross-sections to first calculate the DM capture rate in the Sun (capability capture_rate_Sun). The canonical way to do this is to call the corresponding function from DarkSUSY, which (at least in v5.1) assumes the AGSS09ph solar density profile [157,158], and does not distinguish between scattering on protons and neutrons. DarkBit uses the SI and SD cross-sections on protons for this purpose.
The capture rate is then used together with the late-time annihilation cross-section to solve Eq. 42 for the equilibration time t_χ and annihilation rate (capabilities equilibration_time_Sun and annihilation_rate_Sun). Together with the contents of the ProcessCatalog, the annihilation rate is used to prime the calculation of the neutrino spectrum at Earth (essentially by setting appropriate common blocks in DarkSUSY with annihilation and Higgs decay information), producing a pointer to a function in DarkSUSY that can return the neutrino yield at the IceCube detector (capability nuyield_ptr). 14 This pointer is passed to nulike [17], which uses it to convolve the predicted differential neutrino flux with the various IceCube detector response functions, and evaluate the overall neutrino telescope likelihood for the model. DarkBit provides individual likelihoods from each of the three independent event selections included in the original 79-string IceCube analysis [134]: winter high-energy (WH), winter low-energy (WL) and summer low-energy (SL). The combined likelihood from all three of these searches is provided under the capability IC79_loglike; the individual likelihoods correspond to IC79WH_loglike, IC79WL_loglike and IC79SL_loglike. The earlier 22-string likelihood [133,159] is also available as IC22_loglike, and combined with all 79-string likelihoods as simply IceCube_likelihood. More information about these capabilities is available in Table A10.
When combining different search regions like this, we work with an effective log-likelihood defined as the difference between the actual log-likelihood and the log-likelihood of the background-only model. 15 We also impose a hard upper limit of zero on this effective likelihood, to prevent overfitting of spurious features at low energies, where the IceCube limits degrade steeply and the instrumental response functions (especially in energy) are least well understood. This has the effect of preventing exclusion of the background hypothesis, in much the same way as the CLs method [160,161].
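The combination amounts to only a few lines. A minimal sketch, with the cap applied per event selection; whether the cap acts on each selection or on their sum is our assumption here, not stated in the text:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Effective log-likelihood for one event selection: difference from the
// background-only value, capped at zero so the background hypothesis
// cannot be excluded (CLs-like behaviour; cap placement is an assumption).
double effective_lnL(double lnL_signal_plus_bg, double lnL_bg_only) {
    return std::min(0.0, lnL_signal_plus_bg - lnL_bg_only);
}

// Statistically independent selections (e.g. IC79 WH, WL, SL) combine by
// summing their effective log-likelihoods.
double combined_lnL(const std::vector<std::pair<double, double>>& selections) {
    double total = 0.0;
    for (const auto& s : selections) total += effective_lnL(s.first, s.second);
    return total;
}
```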
14 The neutrino yield can also be calculated using routines in micrOMEGAs, but these functions are currently not backended in GAMBIT. We plan to add the ability to use them as an alternative to the DarkSUSY calculation in a future version of DarkBit.

15 This is the same strategy as employed by ColliderBit in combining different LHC searches; more information on the procedure can be found in that paper [11]. This case is somewhat simpler than the ColliderBit one, as we do not allow different signal region combinations for different model parameter values, given that the IceCube event selections are statistically independent by construction. Also unlike the LHC searches implemented in ColliderBit, here the different signal regions do not come with different numbers of counting bins (each has exactly one such 'bin', the Poisson counting term at the front of its likelihood function), but different event selections do bring different numbers of event terms into the unbinned part of the likelihood function [17].

The nulike speed parameter can be chosen from the master initialisation file, by setting the module function option nulike_speed. The default is 3. For production scans, we run nulike with this default, as this allows OpenMP-enabled neutrino likelihood evaluations for models with appreciable signal fractions to be achieved in walltimes of order one second. Parameter combinations resulting in negligible signal fractions run much faster, so the mean runtime is well below a second. This does come with an accuracy cost; Fig. 4 compares the accuracy of some example limits obtained with the different speed settings.
Following the original nulike paper [17], we assume a flat theoretical error on the predicted neutrino yield of 5% for DM masses below 100 GeV, rising logarithmically to 50% at masses of 10 TeV, and continuing onward at even higher masses. This form of the error term is designed to account for neglected higher-order contributions and round-off errors, present at all masses, as well as the increasing error introduced by DarkSUSY's interpolation in its WimpSim tables with increasing mass above ∼100 GeV.
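Our reading of this error prescription can be written down in a few lines. Whether the error saturates at 50% above 10 TeV or keeps rising is ambiguous in the text; the sketch below (hypothetical function name) lets it continue to rise log-linearly, which is an assumption on our part:

```cpp
#include <cmath>

// Fractional theoretical error on the predicted neutrino yield:
// flat 5% below 100 GeV, then growing log-linearly so that it
// reaches 50% at 10 TeV (and keeps growing above -- an assumption).
double neutrino_yield_error(double m_dm_GeV) {
    const double m_lo = 100.0, m_hi = 1.0e4;   // GeV
    const double e_lo = 0.05, e_hi = 0.50;     // fractional errors
    if (m_dm_GeV <= m_lo) return e_lo;
    return e_lo + (e_hi - e_lo) * std::log(m_dm_GeV / m_lo) / std::log(m_hi / m_lo);
}
```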
As side products of the various likelihood calculations, DarkBit also provides module function access to various related nulike outputs for the different analyses (Table A10). These are all extracted from nudata objects, which are returned by functions with capabilities X_data.

Fast Cascade Monte Carlo (FCMC)
In DarkBit, the calculation of gamma-ray yields from cascade decays is implemented as a Monte Carlo, as the kinematics of long decay chains are too complicated to handle analytically. Codes like DarkSUSY, for example, take simplified angular averages over decay phase spaces. The cascade decay code has two main parts: a decay chain Monte Carlo and an accompanying analysis framework. The Monte Carlo code generates random decay chains based on relative branching fractions for individual decays. Currently, all particles in the decay chain are assumed to be on-shell, and only two-body decays are allowed in each step of the chain. Furthermore, spin correlations are neglected. The analysis framework interpolates the resulting histograms and creates wrapper functions for these interpolated spectra, which can be used in e.g. GA_AnnYield to derive the total annihilation yield. The generation of decay chains and the subsequent analysis steps are fully OpenMP-enabled: each available CPU core independently generates and analyses events.
As mentioned in Sec. 6.3.2, it is the responsibility of each module function with capability GA_missingFinalStates to determine, by comparing Process Catalogue entries with the SimYieldTable, for which initial state particles gamma-ray yields are required. The FCMC then generates decay chains for each of the identified initial states by Monte Carlo.
The decay chains in DarkBit are implemented as a doubly-linked tree: each particle in the decay chain is represented by an instance of a class named ChainParticle, which contains pointers to its parent particle and any child particles (decay products). A decay chain is generated by first creating an initial state ChainParticle, which is initialised using a decay table containing relevant masses and decays for all particles that can occur in the cascade. The ChainParticle class features a member function named generateDecayChainMC, which recursively generates a decay chain. The function uses the FCMC-internal decay table (see below) to select a decay from the list of possible processes, using probabilities given by the relative branching fractions for the allowed decays. 16 The recursive decay continues until particles that are stable or have pre-computed decay spectra are reached, or until one of the pre-defined cutoff conditions is reached. These cutoff conditions are the maximum number of allowed decay steps, as specified by the YAML option cMC_maxChainLength, and a cut on the lab frame 17 energy of the decaying particle, specified by the YAML option cMC_Emin. The cutoff is triggered by whichever of these two conditions is reached first.
Once a full decay chain has been generated, the final state particles of the chain are collected and analysed, and final states of interest are histogrammed. Tabulated final state spectra are boosted to the lab frame. To this end, we sample photon energies from the tabulated spectra, boost these to the lab frame, and add the corresponding box spectra to the result histogram.
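The boosting-and-histogramming step can be sketched as follows. This is a simplified, self-contained illustration with hypothetical names, not the DarkBit implementation; it deposits the flat lab-frame "box" produced by an isotropically emitted, boosted photon into a binned histogram:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// A photon of rest-frame energy Ecm, emitted isotropically by a particle
// moving with Lorentz factor gamma (velocity beta), is distributed flat in
// lab-frame energy between Ecm*gamma*(1-beta) and Ecm*gamma*(1+beta).
// Deposit that box into a histogram with the given bin edges.
void add_boosted_box(std::vector<double>& counts, const std::vector<double>& edges,
                     double Ecm, double gamma, double weight) {
    const double beta = std::sqrt(std::max(0.0, 1.0 - 1.0 / (gamma * gamma)));
    const double lo = Ecm * gamma * (1.0 - beta), hi = Ecm * gamma * (1.0 + beta);
    for (std::size_t i = 0; i + 1 < edges.size(); ++i) {
        if (hi <= lo) {  // no boost (gamma = 1): deposit into the containing bin
            if (edges[i] <= Ecm && Ecm < edges[i + 1]) counts[i] += weight;
            continue;
        }
        const double a = std::max(edges[i], lo), b = std::min(edges[i + 1], hi);
        if (b > a) counts[i] += weight * (b - a) / (hi - lo);  // flat box density
    }
}
```

Sampling many rest-frame energies from a tabulated spectrum and calling a routine like this once per sample reproduces the boosted continuum described above.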
In the remaining part of this section, we provide information about the implementation in DarkBit. The relevant functions and capabilities are summarised in Tables A11 and A12. The module function cascadeMC_FinalStates provides a list of string identifiers ("gamma", "e+", "pbar", etc.) that indicate which final states need to be calculated. This list can be set in the master YAML file using the option cMC_finalStates. The default (and currently only supported) option is a list with the single entry "gamma". The function cascadeMC_DecayTable generates an FCMC-internal list of all relevant decays (based on the content of the Process Catalogue). Next, the loop manager cascadeMC_LoopManagement runs a sequence of module functions with capabilities cascadeMC_InitialState → cascadeMC_ChainEvent → cascadeMC_Histograms → cascadeMC_EventCount that takes care of generating the Monte Carlo samples and histograms. Finally, cascadeMC_gammaSpectra uses the histogram results to generate interpolating daFunk::Funk objects, which are used in GA_AnnYield.

Various options are available to tune the FCMC. We list them here, with default values in square brackets.

Histogramming options:
- int cMC_NhistBins [140]: number of logarithmic bins of the generated histogram.
- … GeV of the generated histogram.
- int cMC_numSpecSamples [25]: the number of samples to draw from tabulated spectra.

Convergence criteria:
- double cMC_gammaRelError [0.20]: the maximum allowed relative error in the bin with the highest expected signal-to-background ratio.
- double cMC_gammaBGPower [-2.5]: power-law slope to assume for the astrophysical background when calculating the position of the bin with the highest expected signal-to-background ratio.
- int cMC_endCheckFrequency [25]: the number of events to wait between successive checks of the convergence criteria.
Note that if cMC_maxEvents is exceeded in cascadeMC_ LoopManagement before convergence is reached in cascadeMC_Histograms, the Monte Carlo will terminate before any convergence criteria are met.
We show example spectra generated with the FCMC in Fig. 5. These were set up using the DarkBit WIMP standalone discussed in Sec. 7.2. Specifically, we set up a pair of DM particles annihilating to two scalars, χχ → φ1 φ2, where φ1 decays to a pair of photons and φ2 to b b̄. In the rest frame of φ1, the resulting photons are monochromatic; in the Galactic rest frame of the annihilating DM particles, this leads to a flat "box" feature with the following spectrum (see e.g. [110]):

dN_γ/dE = (2/ΔE) θ(E − E_min) θ(E_max − E).   (52)

Here, θ is a step function, ΔE ≡ E_max − E_min and

E_max/min = (E_φ1/2) (1 ± √(1 − m²_φ1/E²_φ1)),

where E_φ1 = m_χ + (m²_φ1 − m²_φ2)/(4 m_χ) is the energy of φ1. These boxes are clearly seen in Fig. 5, with endpoints and normalisation of the numerical results agreeing nicely with the above analytical expression.
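Under the kinematics described above (χχ → φ1φ2 at rest, followed by φ1 → γγ), the box endpoints can be computed directly from the model parameters; the following is a small illustrative sketch (function names are ours):

```cpp
#include <cmath>

// Lab-frame energy of phi1 from chi chi -> phi1 phi2 annihilation at rest
double E_phi1(double m_chi, double m_phi1, double m_phi2) {
    return m_chi + (m_phi1 * m_phi1 - m_phi2 * m_phi2) / (4.0 * m_chi);
}

// Box endpoints for photons from phi1 -> gamma gamma:
// sign = +1 gives E_max, sign = -1 gives E_min
double E_box(double m_chi, double m_phi1, double m_phi2, int sign) {
    const double E1 = E_phi1(m_chi, m_phi1, m_phi2);
    const double beta = std::sqrt(1.0 - m_phi1 * m_phi1 / (E1 * E1));
    return 0.5 * E1 * (1.0 + sign * beta);
}
```

Two useful consistency checks follow from the kinematics: E_max + E_min = E_φ1 (energy conservation) and E_max · E_min = m²_φ1/4.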
The decay φ2 → b b̄ produces a continuum spectrum of photons from the tabulated yields of b b̄ final states. Compared to the direct annihilation of DM to b b̄, as in χχ → b b̄, the form of the resulting photon spectrum is roughly retained, but the peak (in E²_γ dN/dE_γ) is shifted down by a factor of about 2 in energy [162] -- essentially because each of the quarks now has on average only half the kinetic energy at its disposal. This part of the spectrum is clearly visible in Fig. 5 as the "background" to the box feature discussed above. At high energies, this part of the FCMC-produced spectrum is also seen to be affected by the mass of φ1 (for constant m_φ2 = m_χ = 100 GeV). The reason is that the largest photon energy kinematically available from φ2 → b b̄ is given by E_max as provided in the expression after Eq. 52, with φ1 and φ2 interchanged. Again, this is in agreement with the location of the cutoff visible in the figure.
We finally note that the cascade code in the current form does not handle off-shell decay, and neglects the finite widths of particles in the kinematics.

Examples
In this Section we present a few selected examples that illustrate the scope and potential applications of DarkBit.  The parameters and their ranges are shown in Table 10. We vary only one parameter at a time, whilst keeping all others fixed at the values shown in Table 9, leading to a scan over two "spokes" in parameter space for each model (Fig. 6).
For each point in the scan, we calculate spin-independent and spin-dependent DM-proton scattering cross-sections, velocity-averaged DM annihilation cross-sections at late times, and the DM relic density. For all of these observables, we also calculate a corresponding experimental likelihood using an appropriate backend code included with DarkBit: the LUX 2013 likelihood from DDCalc (Sec. 5.2), the IceCube 79-string likelihood for WIMP annihilation in the Sun from nulike (Sec. 6.3.3), and the stacked dwarf spheroidal likelihood based on six years of Fermi data from gamLike (Sec. 6.2.2). For the relic density, we calculate the simple Gaussian likelihood based on the best-fit value from the Planck analysis [1] described in Sec. 4.3. The results of these scans are shown in Fig. 6, where the colour-coding of the points represents the likelihood value. For comparison, we plot the limits from the corresponding analyses by the LUX [75], IceCube [17], and Fermi [136] collaborations. For the relic density, we plot the Planck best-fit value and its 2σ uncertainty. In all cases, the likelihoods calculated by DarkBit and its associated backends agree with the results from the experimental collaborations. All of the parameters of the MSSM7 (Fig. 6) are defined at an energy scale of 1 TeV.

Effective WIMPs
In order to further illustrate some of the functionality of DarkBit, and to show how DarkBit can be used without a full scan in GAMBIT, DarkBit ships with three example standalone programs. One of them, DarkBit_standalone_WIMP, shows how to set up and calculate various observables for a simple WIMP model, in which the three parameters are the WIMP mass and the cross-sections for WIMP self-annihilation and WIMP-nucleon scattering. We discuss this example here in some detail. Further examples specific to singlet dark matter and the MSSM are available as well; the MSSM example is discussed in the next subsection. All of the model specifics for the standalone example are specified in only three module functions, defined as QUICK_FUNCTIONS at the beginning of the source file of the example. One function, DarkMatter_ID_WIMP, simply returns the string identifier for the WIMP particle. Another function, DD_couplings_WIMP, sets up the direct detection couplings; in the present example, these are entirely determined by module function options. The most complex function is TH_ProcessCatalog_WIMP, which sets up the Process Catalogue for the given example. Besides the relevant processes, the masses and spins of the participating particles also have to be defined. Furthermore, we define a few functions to dump gamma-ray annihilation spectra into ASCII tables.
In the main part of the code, different options are available that illustrate how to calculate annihilation yields for various final states and DM masses. Furthermore, the standalone example can calculate tables of likelihoods for Fermi-LAT/HESS/CTA indirect detection and for direct detection experiments as functions of the dark matter mass and the annihilation or scattering cross section; we show the corresponding upper limits in Figs. 7-9. These are 95% CL upper limits (obtained at ∆(2 ln L) = 2.71) in Fig. 7, as is customary for indirect searches, and 90% CL limits in Figs. 8 and 9, as is typical in direct detection. For the direct detection limits, we determine the 90% CL value of ∆(2 ln L) from the Poisson upper limit on the expected number of events. For low statistics this value can be substantially larger than ∆(2 ln L) = 1.64, the value obtained in the asymptotic limit. The agreement between the official limits and those calculated with the standalone shows that known results can be easily reproduced. The standalone example can also be used to calculate similar tables for the relic density. With this output, in Fig. 7 we indicate the cross section for which the relic density reaches Ωχh² = 0.1188, the preferred value from Planck [1].
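The relationship between the Poisson upper limit and the corresponding ∆(2 ln L) value can be illustrated with a short self-contained sketch of a simplified, background-free counting experiment (this is a toy illustration, not DarkBit's actual likelihood code). For zero observed events the 90% CL threshold comes out as 2 ln 10 ≈ 4.6, well above the asymptotic 1.64, and it moves towards the asymptotic value as statistics grow.

```cpp
#include <algorithm>
#include <cmath>

// Poisson CDF P(X <= n | mu), computed by direct summation of the pmf.
double poisson_cdf(int n, double mu) {
    double term = std::exp(-mu), total = term;
    for (int k = 1; k <= n; ++k) { term *= mu / k; total += term; }
    return total;
}

// 90% CL upper limit on the signal s for n_obs observed events and a known
// background b: the s solving P(X <= n_obs | s + b) = 0.10, found by bisection.
double poisson_upper_limit_90(int n_obs, double b = 0.0) {
    double lo = 0.0, hi = 10.0 * n_obs + 50.0;
    for (int i = 0; i < 100; ++i) {
        double mid = 0.5 * (lo + hi);
        if (poisson_cdf(n_obs, mid + b) > 0.10) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}

// Delta(2 ln L) between the best-fit signal s_hat = max(n_obs - b, 0) and the
// 90% CL upper limit, with lnL(s) = n ln(s + b) - (s + b) up to a constant.
double delta_2lnL_at_90cl(int n_obs, double b = 0.0) {
    auto lnL = [&](double s) {
        double mu = s + b;
        return (n_obs > 0 ? n_obs * std::log(mu) : 0.0) - mu;
    };
    double s_hat = std::max(n_obs - b, 0.0);
    return 2.0 * (lnL(s_hat) - lnL(poisson_upper_limit_90(n_obs, b)));
}
```

For n_obs = 0 the upper limit is s = ln 10 and ∆(2 ln L) = 2 ln 10 ≈ 4.6; for n_obs = 100 the result is only slightly above 1.64.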

Comparing DarkSUSY and micrOMEGAs
DarkBit offers the unique possibility to easily compare different numerical codes for the computation of DM properties in a well-defined and consistent way. For illustration, here we focus on DarkSUSY [15] and micrOMEGAs [167]. We stress, however, that it is straightforward for users to perform similar comparisons for essentially any other numerical code, simply by adding it as a backend to GAMBIT.
The ability of DarkBit to facilitate these comparisons for the MSSM is demonstrated in the example program DarkBit_standalone_MSSM. This program takes an SLHA file (including a DECAY block, if present) as input and calculates the relic density and DM-nucleon scattering cross sections using both DarkSUSY and micrOMEGAs. Analogously to DarkBit_standalone_WIMP, it also calculates likelihoods for the relic density, direct detection experiments, and indirect searches in neutrinos and gamma rays. The standalone shows how it is possible to change the source of the theoretical inputs for these likelihood calculations (such as the DM-nucleon coupling in the case of direct detection) by changing just a single line of code.
As a demonstration of the sorts of comparisons possible, we have chosen some benchmark MSSM 18 model points that can cause difficulties in the calculation of the relic density due to coannihilations or resonances. Details of the points are given in Table 11. We generated SLHA files for each of these model points (including DECAY blocks) using SpecBit and the standalone example 3-BIT-HIT [13], which we then fed into DarkBit_standalone_MSSM. These SLHA files can be found in the DarkBit/data/benchmarks/ directory of the GAMBIT distribution. In GAMBIT 1.1.0, which was used to find the results presented here, DarkBit_standalone_MSSM by default calculates the relic density with the YAML option fast set to 0 (corresponding to a more accurate calculation) in both RD_oh2_MicrOmegas and RD_oh2_DarkSUSY. The options loop and box are set to true in DD_couplings_DarkSUSY and DD_couplings_MicrOmegas respectively, so that all available loop corrections are used in each backend.
Results of the calculations can be seen in Table 12. While the values of Ωh² from the two backends agree well for some of the benchmarks, in others there are significant differences. In particular for benchmarks 1-4, where there is resonant annihilation via the A0 or h, the relic density is substantially higher when the calculation is done using DarkSUSY. This can be traced to the fact that the micrOMEGAs σv_eff at temperatures around freeze-out for these points is consistently larger than the DarkSUSY result. The fractional differences are largest on the resonance; adjusting m_A0 or m_h away from 2m_χ0 leads to much better agreement between the two codes. The ultimate source of this discrepancy should be tracked down, and to this end we are planning a follow-up study using the GAMBIT framework in which we investigate the reasons behind the differences and look into the effects of adding loop corrections currently not included in either backend.
For some of the benchmarks, there are also significant differences between the nuclear scattering cross-sections computed with DarkSUSY and micrOMEGAs, specifically in cases where the cross-sections are small. The discrepancies between the two codes are almost certainly linked to the fact that at tree level, small nuclear scattering cross-sections in the MSSM arise due to cancellations between different diagrams. The cancellations can easily be spoilt by small changes in the input parameters, or equivalently, different choices of spectrum generator and/or treatments of running parameters in the calculation of the matrix elements, as well as different treatments of loop corrections in the calculations of scattering amplitudes. As in the case of the relic density, we are planning a future study to more precisely understand the source of these differences.

Outlook
As detailed in the preceding sections, DarkBit is equipped with sophisticated tools for calculating observables and likelihoods for the DM relic density, direct detection experiments and indirect searches with neutrinos and gamma rays. Each of these cases demonstrates the modularity of the code, and the ease with which external codes can be interfaced with DarkBit. This modularity also implies that extensions of DarkBit in all possible directions are straightforward to implement, and do not in general require the expertise of GAMBIT Collaboration members or highly experienced external users of the code. The focus of future developments will thus be steered largely by the needs (and indeed, contributions) of the community. Nevertheless, here we list a few obvious code extensions that we expect to include in future releases (aside from obvious additions of new experimental likelihoods to existing components like gamLike, DDCalc and nulike).
The combination of the process catalogue and the DarkSUSY Boltzmann solver currently allows us to calculate the relic density for simple arbitrary DM models. We intend to expand this framework so that coannihilations can be included in the process catalogue and the relic density can be calculated in the case of semi-annihilating [168] and asymmetric DM [169]. This would complement the existing capabilities of micrOMEGAs to handle these scenarios. We also plan to backend MadDM [170,171] in a future version of DarkBit, which, with its interface to MadGraph [172], will be a useful alternative to micrOMEGAs for quickly implementing new DM models. To enhance the accuracy of our relic density calculations, we also intend to add the effects of Sommerfeld enhancement on the relevant (co)annihilation cross sections [173,174]. Moving beyond the standard relic density calculation, we have plans to include the ability to deal with non-standard cosmological expansion histories. This capability is available in SuperIso Relic [175], to which we will provide a frontend. A final natural extension in this area is to calculate kinetic freeze-out of DM from the thermal bath, rather than only chemical freeze-out as is done now. This would lead to an additional observable: a cutoff in the power spectrum of matter density perturbations [176,177].
For direct detection experiments, the implementation of velocity- and momentum-dependent cross sections in DDCalc will be a high-priority extension, allowing one to systematically study the full set of available non-relativistic operators [178], for example. Helio- and astroseismological probes of DM-nucleon couplings (see e.g. [123]) are another expected extension.
The most important extension relevant for indirect DM searches, given the high precision expected from the AMS-02 experiment on board the International Space Station, is charged cosmic rays. Indeed, positron data already place some of the most stringent limits on leptophilic DM models [106], and constraints from antiprotons can likely be improved considerably [103,104]. Another extension that we aim for in the near future is to fully allow for velocity-dependent annihilation cross sections, such as in the case of resonances or the Sommerfeld effect [179,180]. These can be relevant for e.g. DM annihilation in subhalos [181], or close to the black hole at the Galactic centre [182,183].

Conclusions
The particle nature of DM is one of the most perplexing puzzles in present-day particle physics and cosmology. Despite decades of research, no non-gravitational signals of DM have been identified so far. However, rapid experimental developments in recent years have provided a wealth of new data that can be exploited in the search for DM signals, and used to constrain DM models. In order to facilitate a systematic study of a large number of DM scenarios, in this paper we presented DarkBit, a new numerical tool for DM calculations.
DarkBit is designed to allow DM observables to be included in global scanning tools like GAMBIT. It can also be used as a standalone module. The first release of DarkBit ships with a large number of likelihood functions for various experiments. These are implemented more accurately than is usually done in the literature. The overarching design goals are reusability, self-consistency and modularity, which are achieved in a number of ways. First, by allowing seamless integration of existing numerical tools like DarkSUSY and micrOMEGAs, using the GAMBIT method of abstracting backend function handling for cross-language coding environments. Second, by providing internal structures for particle and astrophysical DM properties that are consistently used in all calculations, and passed to external codes if necessary. Third, by splitting up calculations into their most elemental building blocks wherever possible.

[Table 12 caption: The dark matter relic density and proton-scattering cross-sections, both spin-independent and spin-dependent, for a range of MSSM model points. The model points are defined in Table 11. All quantities were calculated with DarkBit_standalone_MSSM using both the DarkSUSY and micrOMEGAs backends.]
The modular implementation of the DM relic density calculations in DarkBit allows it to solve the Boltzmann equation independently of the actual particle model chosen for DM (Fig. 2). Alternatively, the user can directly call relic density routines provided by backend codes for specific models, allowing, for example, a systematic comparison between the results of DarkSUSY and micrOMEGAs.
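The physics that such a Boltzmann solver implements can be illustrated with the textbook analytic freeze-out estimate below. This is a rough sketch following standard cosmology-textbook formulae, not DarkBit's or DarkSUSY's actual routines; the numerical prefactors and the choice g* = 100 are standard approximations, and the unit conversion factor is the usual (ħc)²c value.

```cpp
#include <cmath>

// Rough analytic freeze-out estimate for the relic density of a thermal WIMP,
// following the standard textbook x_f iteration. Illustrative stand-in for a
// full Boltzmann solver, not DarkBit's routine.
// m_GeV: DM mass in GeV; sigmav_cm3s: <sigma v> in cm^3/s;
// g: DM internal degrees of freedom; gstar: relativistic dof at freeze-out.
double relic_oh2_estimate(double m_GeV, double sigmav_cm3s,
                          double g = 2.0, double gstar = 100.0) {
    const double MPL = 1.22e19;                      // Planck mass [GeV]
    const double sigmav = sigmav_cm3s / 1.17e-17;    // cm^3/s -> GeV^-2
    // Freeze-out point x_f = m/T_f from the standard one-step iteration
    double a = 0.038 * (g / std::sqrt(gstar)) * MPL * m_GeV * sigmav;
    double xf = std::log(a) - 0.5 * std::log(std::log(a));
    // Present-day abundance; the GeV^-1 prefactor collects s_0/rho_crit
    return 1.07e9 * xf / (std::sqrt(gstar) * MPL * sigmav);
}
```

For a 100 GeV WIMP with the canonical thermal cross section of about 3×10⁻²⁶ cm³/s, this estimate lands within a factor of a few of the observed Ωh² ≈ 0.12, and it reproduces the familiar inverse scaling of the relic density with the annihilation cross section.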
The new backend code DDCalc provides a general solution to the problem of testing DM models against direct detection data, including detailed likelihoods for many of the most important experiments. This allows both spin-dependent and spin-independent signals of the same model to be calculated and combined self-consistently across the full range of relevant experiments. For liquid noble gas detectors, the sensitivities in DDCalc are based on the output of TPCMC [84], a dedicated detector Monte Carlo simulation.
We have implemented likelihood functions for gamma-ray indirect DM searches in the new backend gamLike. We included Fermi-LAT and HESS observations of dwarf spheroidal galaxies and the Galactic centre, as well as projections for the future with CTA. The likelihood functions in gamLike are pre-tabulated for fast evaluation, but based on event-level (mock) data where possible. DarkBit also includes a new Monte Carlo code for the fast simulation of cascade annihilation spectra, and an interface to the event-level neutrino telescope likelihood tool nulike for calculating neutrino indirect detection likelihoods.
The first release of DarkBit ships with the essentials of DM indirect searches. Extensions planned for the near future include charged cosmic rays, accurate treatment of velocity-dependent annihilation cross-sections and Sommerfeld enhancement, and inclusion of new experimental analyses in gamLike. In direct detection, we plan to implement velocity- and momentum-dependent cross-sections in DDCalc, as well as new experimental results as they become available. Furthermore, new classes of likelihoods, like helio/astroseismological probes for DM and limits from radio and CMB observations, will be included in future releases.
As described in Sec. 2, DarkBit is a standalone and complementary module of the GAMBIT software, which can be downloaded from the official GAMBIT website. In the following, we describe the content of the DarkBit standalone download, the installation of the DarkBit software as a standalone or GAMBIT module, and the running of the example program.

A.1: Content of DarkBit download & installation
In addition to the module-specific folder (here the DarkBit folder), each GAMBIT module contains the following folders:

- Backends (utility functions used for backend interfaces)
- Models (predefined BSM models and utility functions used for model definitions)
- Logs (general GAMBIT logging system)
- Utils (GAMBIT utility functions)
- Elements (general GAMBIT macro and type definitions)

A detailed description of the GAMBIT functionalities can be found in the GAMBIT main paper [10]. Each standalone module requires all of these folders to work. If the DarkBit module is used in conjunction with other GAMBIT modules, only the DarkBit folder is needed; it should be placed in the main folder containing the other GAMBIT modules.
GAMBIT uses the open-source cross-platform build system CMake. CMake supports in-source and out-of-source builds, but we recommend the latter, as it keeps the source directory unchanged and enables multiple builds. To do such a build, the usual CMake configuration and build commands should be run in the directory that contains the GAMBIT module folders.

These examples show how to specify a model, prior ranges over which to sample its parameters, a scanner and an output device ('printer'), and then run the relic density, gamma-ray, neutrino and direct search likelihoods. The user can also select the parameters of the halo model and the nuclear parameters relevant for direct detection. The examples require the micrOMEGAs, gamLike, DDCalc, DarkSUSY, nulike, FeynHiggs and SUSY-HIT backends to be present, which can be accomplished by running the following commands in the GAMBIT build directory:

  make micromegas_MSSM
  make micromegas_SingletDM
  make gamlike
  make ddcalc
  make darksusy
  make nulike
  make feynhiggs
  make susyhit

The YAML file is complete, i.e. all options of all module functions available in DarkBit are mentioned and documented there.

One of the major technical challenges when combining a large number of different codes while trying to maintain maximum portability is wrapping and manipulating Fortran, plain C and C++ object member functions in a systematic and coherent way. In DarkBit, functions of the type Rⁿ → R (which describe e.g. annihilation yields, dark matter profiles, velocity distributions, differential cross sections, or effective annihilation rates) quite commonly need to be wrapped in a generic structure so that they can be shared amongst different backends. Typically, the results of these functions need to be manipulated before they can be used, in order to comply with conventions and requirements of the subsequent codes. Sometimes this means applying basic arithmetic operations, sometimes passing them through trigonometric functions or performing variable substitutions. Further common operations are partial integrations, sometimes with non-constant boundaries and singularity handling, or checks of parameter domains. Often, these manipulated functions need to be wrapped back into plain C functions in order to pass them back into the backend codes (such as the DarkSUSY Boltzmann solver).
In order to facilitate all these operations, we present the new lightweight C++ header-only library daFunk (from the German dynamisch allokierbare Funktionen, 'dynamically allocatable functions'). Despite the complex function handling that it allows, the daFunk API is relatively simple. This is achieved with recursive variadic templates, polymorphism and shared pointers. The atomic object in daFunk is daFunk::Funk, which is a shared pointer to an instance of the daFunk::FunkBase class. Importantly, daFunk::FunkBase is an abstract base class: it leaves the virtual member function responsible for all actual calculations, virtual double value(...), undefined. The main purpose of daFunk::Funk is to provide a unified interface and a flexible C++ type, independent of whatever calculation it actually performs. The actual computations are implemented in classes derived from daFunk::FunkBase.
Each daFunk::Funk object provides a list of names of variables on which it depends. This list is simply a list of std::string tags. The specific content of this list depends on the implementation of value. The most basic implementations are variables and constants, shown in the following simple example:

  daFunk::Funk x = daFunk::var("x");   // variable x
  daFunk::Funk y = daFunk::var("y");   // variable y
  daFunk::Funk c = daFunk::cnst(2.);   // constant 2
  daFunk::Funk f = c*x + 3*cos(y);     // f(x, y) ≡ 2x + 3 cos(y)

The name of a variable is specified as a std::string, and daFunk::Funk objects can be combined into new daFunk::Funk objects using arithmetic operators and standard mathematical functions.

- capability: A name describing the actual quantity that is calculated by a module or backend function. This is one possible place for units to be noted; the other is in the documented description of the capability (see Sec. 10.7 of Ref. [10]).
- dependency: A declaration that a given module function needs to be able to access the result of another module function, identified according to its capability and type. Dependencies are declared in module functions' entries in rollcall headers.
- dependency resolution: The process by which GAMBIT determines the module functions, backend functions and backend variables needed and allowed for a given scan, connects them to each others' dependencies and backend requirements, and determines the order in which they must be called.
- dependency tree: A result of dependency resolution; a directed acyclic graph of module functions connected by resolved dependencies. See Fig. 5 of Ref. [10] for an example.
- frontend: The interface between GAMBIT and a given backend, consisting of a frontend header plus optional source files and type headers.
- frontend header: The C++ header in which the frontend to a given backend is declared.
- module: A subset of GAMBIT functions following a common theme, able to be compiled into a standalone library. Although module often gets used as shorthand for physics module, this term technically also includes the GAMBIT scanning module ScannerBit.
- module function: A function contained in a physics module. It calculates a specific quantity indicated by its capability and type, as declared in the module's rollcall header. It takes only one argument, by reference (the quantity to be calculated), and has a void return type.
- physics module: Any module other than ScannerBit, containing a collection of module functions following a common physics theme.
- rollcall header: The C++ header in which a given physics module and its module functions are declared.
- type: A general fundamental or derived C++ type, often referring to the type of the capability of a module function.

Appendix D: Capability overview
For reference, we provide a complete list of the DarkBit capabilities, dependencies and options. These include the complete process and coupling capabilities (Table A1), some simple informative capabilities (Table A2), capabilities related to DM halo properties (Table A3), relic density capabilities (Tables A4 and A5), direct detection capabilities (Table A6), gamma-ray yield capabilities (Table A7), gamma-ray likelihoods (Table A8), neutrino capabilities (Tables A9 and A10), cascade decay capabilities (Tables A11 and A12), and various miscellaneous capabilities (Table A13).