Abstract
We assess the status of a wide class of WIMP dark matter (DM) models in light of the latest experimental results using the global fitting framework GAMBIT. We perform a global analysis of effective field theory (EFT) operators describing the interactions between a gauge-singlet Dirac fermion and the Standard Model quarks, the gluons and the photon. In this bottom-up approach, we simultaneously vary the coefficients of 14 such operators up to dimension 7, along with the DM mass, the scale of new physics and several nuisance parameters. Our likelihood functions include the latest data from Planck, direct and indirect detection experiments, and the LHC. For DM masses below 100 GeV, we find that it is impossible to satisfy all constraints simultaneously while maintaining EFT validity at LHC energies. For new physics scales around 1 TeV, our results are influenced by several small excesses in the LHC data and depend on the prescription that we adopt to ensure EFT validity. Furthermore, we find large regions of viable parameter space where the EFT is valid and the relic density can be reproduced, implying that WIMPs can still account for the DM of the universe while being consistent with the latest data.
1 Introduction
Despite years of searching, the identity of dark matter (DM) remains a mystery. Nevertheless, the large number of past, present and future probes of its particle interactions makes it essential to regularly revisit the constraints on the most popular theoretical candidates, in order to guide future searches.
A favoured paradigm for the particle nature of dark matter is that of Weakly Interacting Massive Particles (WIMPs), due to the fact that it allows for a simple thermal mechanism to produce DM with the cosmologically observed abundance [1]. Such models have also attracted attention due to the large number of possible signals they predict, none of which have been definitively observed so far. Although this has led some to make claims of the demise of WIMPs [2], others have argued that such predictions are premature [3].
A relatively agnostic approach to WIMP model building is to pursue a bottom-up, Effective Field Theory (EFT) approach, in which one enumerates all of the allowed higher-dimensional operators which lead to interactions between DM and Standard Model (SM) particles. Any result described by an EFT can in general be explained by many high-energy theories. In this way, the EFT description is a model-independent one, as it does not depend on the Ultraviolet (UV) completion that describes an effective operator. This is, however, a double-edged sword: because an effective operator does not encode any information about the UV completion, it has no constraining power in distinguishing between the range of UV theories that can map to it – nor can all UV-complete theories be mapped to an EFT description for the energies we are interested in here.
In spite of these limitations, the bottom-up approach is well-advised given the lack of direct evidence pointing to the properties of DM. The EFT approach in particular is highly suitable for low-velocity environments such as direct detection [4,5,6,7,8,9,10] and indirect detection [11,12,13,14,15,16,17]. At higher energy scales, the EFT approach starts breaking down, such that simplified models have become the theories of choice for the interpretation of LHC searches [18, 19] (see also Refs. [20, 21] for a hybrid approach called “Extended Dark Matter EFT”). Nevertheless, there is an extensive literature on EFTs at colliders [22,23,24,25,26,27,28,29,30,31,32], including studies by ATLAS [33] and CMS [34], which may help to shed light on the nature of DM when interpreted with care.
A common approach to the analysis of EFTs for DM in the literature has been to consider a single operator at a time [35,36,37,38,39,40,41] and compare experimental bounds on the new physics scale \({\Lambda }\) with the values implied by the observed DM relic density. This method, however, severely limits the scope of the analysis and potentially leads to overly aggressive exclusions, not only because it neglects (potentially destructive) interference between different operators [42], but also because the relic density constraint can be considerably relaxed when several operators contribute to the DM annihilation cross-section. The first global study of EFTs for scalar, fermionic and vector DM taking interference effects into account was performed in Ref. [43], but no collider constraints were included in the analysis and no couplings to gluons were considered. More recently, Ref. [44] applied Bayesian methods to perform a global analysis of scalar DM, for which only a small number of effective operators need to be considered and collider constraints can be neglected. Examples of global studies considering subspaces of a general DM EFT include Refs. [45,46,47].
In the present work, we exploit the computational power of the GAMBIT framework [48] to perform the first global analysis of a very general set of effective operators up to dimension 7 that describe the interactions between a Dirac fermion DM particle (or a DM subcomponent) and quarks or gluons. Such a setup arises for example in many extensions of the SM gauge group, such as gauged baryon number [49] or other anomaly-free gauge extensions that require additional stable fermions [50, 51]. Our novel approach of considering many operators simultaneously enables us to study parameter regions where several types of DM interactions need to be combined in order to satisfy all constraints. Our analysis substantially improves upon the previous state-of-the-art in both the statistical rigour with which the DM EFT parameter space is interrogated, and in the new combinations of constraints that are simultaneously applied. We also increase the level of detail with which individual constraints are modelled, summarised as follows.
First, we include a much improved calculation of direct detection constraints using the GAMBIT module DarkBit [52]. We consider the renormalization group (RG) evolution of all effective operators from the electroweak to the hadronic scale and then match the relativistic operators onto the non-relativistic effective theory [53] relevant for DM-nucleon scattering. We then calculate event rates in direct detection experiments to leading order in the chiral expansion, including the contributions from operators that are naively suppressed in the non-relativistic limit, and determine the resulting constraints using detailed likelihood functions for a large number of recent experiments. In the process, we include a number of nuisance parameters to account for uncertainties in nuclear form factors and the astrophysical distribution of DM.
Second, we consider the most recent constraints on DM annihilations using gamma rays and the Cosmic Microwave Background (CMB). To include the latter, we employ the recently released GAMBIT module CosmoBit [54], which uses detailed spectra to calculate effective efficiency functions for the deposition of injected energy, and obtains constraints on the DM annihilation cross-section while varying cosmological parameters. For the calculation of annihilation cross-sections we make use of the new GAMBIT Universal Model Machine (GUM) [55, 56] to automatically generate the relevant code based on the EFT Lagrangian.
Third, we combine the above detailed astrophysical and cosmological constraints with a state-of-the-art implementation of LHC constraints on WIMP dark matter. A central concern for any study of EFTs is the range of validity of the EFT approach [57,58,59,60,61,62,63,64]. This is particularly true when considering constraints from the LHC, which may probe energies above the assumed scale of new physics. A naive application of the EFT in such a case may lead to unphysical predictions, such as unitarity violation. Whenever this is the case it becomes essential to adopt some form of truncation to ensure that only reliable predictions are used to calculate experimental constraints.
In the present work we address these challenges in two key ways. First, we separate the scale of new physics \({\Lambda }\) from the individual Wilson coefficients (rather than scanning over a combination such as \({\mathcal {C}}/{\Lambda }^{d-4}\)), such that the former can be directly interpreted as the scale where the EFT breaks down and the latter can be constrained by perturbativity. Second, we check the impact of a phenomenological nuisance parameter that describes the possible modification of LHC spectra at energies beyond the range of EFT validity. The nuisance parameter smoothly interpolates between an abrupt truncation and no truncation at all.
Our analysis reveals viable parameter regions for general WIMP models across a wide range of new physics scales, including very small values of \({\Lambda }\) (\({\Lambda }< 200 \, \text {GeV}\)), where there are no relevant LHC constraints, and very large values of \({\Lambda }\) (\({\Lambda }> 1.5 \,\text {TeV}\)), where LHC constraints are largely robust. Of particular interest are the intermediate values of \({\Lambda }\) (\({\Lambda }\sim 700{-}900 \, \text {GeV}\)), for which our DM EFT partly accommodates several small LHC data excesses that could be interesting to analyse in more detail in the context of specific UV completions or simplified models. However, our analysis also reveals that there cannot be a large hierarchy between \({\Lambda }\) and the DM mass \(m_\chi \). In particular, even with the most general set of operators we consider, it is impossible to simultaneously have a small DM mass (\(m_\chi \lesssim 100 \, \text {GeV}\)) and a large new physics scale (\({\Lambda }> 200 \, \text {GeV}\)). In other words, for light DM to be consistent with all constraints, it is necessary for the new physics scale to be so low that the EFT approach breaks down for the calculation of LHC constraints. For heavier DM, on the other hand, thermal production of DM in the early universe would exceed the observed abundance whenever \({\Lambda }\) is more than one order of magnitude larger than \(m_\chi \) (up to the unitarity bound at a few hundred TeV [65], where the maximum possible value of \({\Lambda }\) approaches \(m_\chi \)).
This work is organised as follows. We introduce the DM EFT description in Sect. 2. In Sect. 3, we discuss the constraints used in this study, and our methods for computing likelihoods and observables. We present our results in Sect. 4. Finally, we present our conclusions in Sect. 5. The samples from our scans, the corresponding GAMBIT input files, and plotting scripts can be downloaded from Zenodo [66].
2 Dark matter effective field theory
In this study, we consider possible interactions of SM fields with a Dirac fermion DM field, \(\chi \), that is a singlet under the SM gauge group. For phenomenological reasons discussed in detail in Sect. 3, we focus on interactions between \(\chi \) and the quarks or gluons of the SM. We assume that the mediators that generate these interactions are heavier than the scales probed by the experiments under consideration. Following the notation of Refs. [67, 68], the interaction Lagrangian for the theory can be written as
$$\begin{aligned} {\mathcal {L}}_{\text {int}} = \sum _{d,i} \frac{{\mathcal {C}}_i^{(d)}}{{\Lambda }^{d-4}} \, {\mathcal {Q}}_i^{(d)} \,, \end{aligned}$$

where \({\mathcal {Q}}_i^{(d)}\) is the DM-SM operator, \(d\ge 5\) is the mass dimension of the operator, \({\mathcal {C}}_i^{(d)}\) is the dimensionless Wilson coefficient associated to \({\mathcal {Q}}_i^{(d)}\), and \({\Lambda }\) is the scale of new physics (which can be identified with the mediator mass). The full Lagrangian for the theory is then

$$\begin{aligned} {\mathcal {L}} = {\mathcal {L}}_{\text {SM}} + {\bar{\chi }}\left( i\slashed {\partial } - m_\chi \right) \chi + {\mathcal {L}}_{\text {int}} \,, \end{aligned}$$
such that the free parameters of the theory are the DM mass \(m_\chi \), the scale of new physics \({\Lambda }\), and the set of dimensionless Wilson coefficients \(\{{\mathcal {C}}_i^{(d)}\}\).
For sufficiently large \({\Lambda }\), the phenomenology at small energies is dominated by the operators of lowest dimension, and we therefore limit ourselves to \(d \le 7\). However, even this leaves a relatively large set of operators. The DM EFT that is valid below the electroweak (EW) scale (with the Higgs, W, Z and the top quark integrated out) contains 2 dimension-five, 4 dimension-six, and 22 dimension-seven operators (not counting flavour multiplicities), while the DM EFT above the EW scale for a singlet Dirac fermion DM has 4 dimension-five, 12 dimension-six, and 41 dimension-seven operators (again, not counting flavour multiplicities) [68]. The large set of possible operators poses a challenge for a global statistical analysis in which bounds on \({\Lambda }\) and \({\mathcal {C}}_i^{(d)}\) are derived from experimental observations (see Sect. 3 for details). An added complexity is that we consider both processes where the typical energy transfer is above the EW scale (such as collider searches and indirect detection) as well as processes in which the energy release is small (direct detection). The consistent implementation of these bounds requires the combination of both DM EFTs, together with the appropriate matching conditions between the two.
To make the problem tractable we focus in our numerical analysis on a subset of DM EFT operators: the dimension-six operators involving DM, \(\chi \), and SM quark fields, q,

$$\begin{aligned} {\mathcal {Q}}_1^{(6)}&= ({\bar{\chi }}\gamma _\mu \chi )({\bar{q}}\gamma ^\mu q) \,, \\ {\mathcal {Q}}_2^{(6)}&= ({\bar{\chi }}\gamma _\mu \gamma _5 \chi )({\bar{q}}\gamma ^\mu q) \,, \\ {\mathcal {Q}}_3^{(6)}&= ({\bar{\chi }}\gamma _\mu \chi )({\bar{q}}\gamma ^\mu \gamma _5 q) \,, \\ {\mathcal {Q}}_4^{(6)}&= ({\bar{\chi }}\gamma _\mu \gamma _5 \chi )({\bar{q}}\gamma ^\mu \gamma _5 q) \,. \end{aligned}$$
The difference between the DM EFT below the EW scale and the DM EFT above the EW scale is in this case very simple: above the EW scale the quark flavours run over all SM quarks, including the top quark, while below the EW scale the top quark is absent.
While the above set of operators does not span the full dimension-six bases of the two DM EFTs, it does collect the most relevant operators. The full dimension-six operator basis contains operators where quarks are replaced by the SM leptons. These are irrelevant for the collider and direct detection constraints we consider, and are thus omitted for simplicity. The basis of dimension-six operators for the DM EFT above the EW scale contains, in addition, operators that are products of DM and Higgs currents. These are expected to be tightly constrained by direct detection to have very small coefficients, such that they are irrelevant for other observables, and are thus also dropped for simplicity.
To explore to what extent the numerical analyses would change if the set of considered DM EFT operators were enlarged, we also perform global fits including, in addition to the dimension-six operators (3)–(6), a set of dimension-seven operators that comprise interactions with the gluon field either through the QCD field strength tensor \(G^a_{\mu \nu }\) or its dual \({\widetilde{G}}_{\mu \nu }=\frac{1}{2}\epsilon _{\mu \nu \rho \sigma }G^{\rho \sigma }\), as well as operators constructed from scalar, pseudoscalar and tensor bilinears:

$$\begin{aligned} {\mathcal {Q}}_1^{(7)}&= \frac{\alpha _s}{12\pi }({\bar{\chi }}\chi ) \, G^{a\mu \nu }G^a_{\mu \nu } \,,&{\mathcal {Q}}_2^{(7)}&= \frac{\alpha _s}{12\pi }({\bar{\chi }}i\gamma _5\chi ) \, G^{a\mu \nu }G^a_{\mu \nu } \,, \\ {\mathcal {Q}}_3^{(7)}&= \frac{\alpha _s}{8\pi }({\bar{\chi }}\chi ) \, G^{a\mu \nu }{\widetilde{G}}^a_{\mu \nu } \,,&{\mathcal {Q}}_4^{(7)}&= \frac{\alpha _s}{8\pi }({\bar{\chi }}i\gamma _5\chi ) \, G^{a\mu \nu }{\widetilde{G}}^a_{\mu \nu } \,, \\ {\mathcal {Q}}_5^{(7)}&= m_q ({\bar{\chi }}\chi )({\bar{q}}q) \,,&{\mathcal {Q}}_6^{(7)}&= m_q ({\bar{\chi }}i\gamma _5\chi )({\bar{q}}q) \,, \\ {\mathcal {Q}}_7^{(7)}&= m_q ({\bar{\chi }}\chi )({\bar{q}}i\gamma _5 q) \,,&{\mathcal {Q}}_8^{(7)}&= m_q ({\bar{\chi }}i\gamma _5\chi )({\bar{q}}i\gamma _5 q) \,, \\ {\mathcal {Q}}_9^{(7)}&= m_q ({\bar{\chi }}\sigma ^{\mu \nu }\chi )({\bar{q}}\sigma _{\mu \nu }q) \,,&{\mathcal {Q}}_{10}^{(7)}&= m_q ({\bar{\chi }}i\sigma ^{\mu \nu }\gamma _5\chi )({\bar{q}}\sigma _{\mu \nu }q) \,. \end{aligned}$$
The definition of the operators describing interactions with the gluons, \({\mathcal {Q}}_{1{-}4}^{(7)}\), includes a loop factor since in most new physics models these operators are generated at one loop. Similarly, the couplings to scalar and tensor quark bilinears, \({\mathcal {Q}}_{5{-}10}^{(7)}\), include a conventional factor of the quark mass \(m_q\), since they have the same flavour structure as the quark mass terms (coupling left-handed and right-handed quark fields). The \(m_q\) suppression of these operators is thus naturally encountered in new physics models that satisfy low energy flavour constraints, such as minimal flavour violation and its extensions. Note that, unless explicitly stated otherwise, \(m_q\) always refers to the running mass in the modified minimal subtraction (\(\overline{\mathrm{MS}}\)) scheme.
The complete dimension-seven basis below the EW scale contains eight additional operators with derivatives acting on the DM fields [68]. To simplify the discussion we do not include these operators in our analysis, partly because they do not lead to new chiral structures in the SM currents. Moreover, the direct detection constraints on these additional operators are expressible in terms of the operators that we do include in the global fits, due to the non-relativistic nature of the scattering process.
Note that the operators \({\mathcal {Q}}_{5{-}10}^{(7)}\) are not invariant under EW gauge transformations, and are thus replaced in the DM EFT above the EW scale by operators of the form \(({\bar{\chi }}\chi )({\bar{q}}_Lq_R)H\), where H is the Higgs doublet. In all the processes we consider, H can be replaced by its vacuum expectation value – either because the emission of the Higgs boson is phase-space suppressed or suppressed by small Yukawa couplings, or both. This means that, up to renormalization group effects (to be discussed in Sect. 2.1), the operators \({\mathcal {Q}}_{5{-}10}^{(7)}\) can also be used in our fitting procedure above the EW scale.
In principle, operators analogous to those above exist for leptons instead of quarks [69, 70] and weak gauge bosons instead of gluons [71,72,73].^{Footnote 1} In general, these play a much smaller role in the phenomenology and will not be considered here. Similarly, throughout this work the Wilson coefficients of any dimension-five operators are set to zero at the UV scale.
The Wilson coefficients of the operators defined above depend implicitly on the energy scale of the process under consideration. In our fits, all Wilson coefficients are specified at the new physics scale \({\Lambda }\). If this scale is larger than the top mass, \({\Lambda }> m_t\), all six quarks are active degrees of freedom and the Wilson coefficients need to be specified for \(q = u, d, s, c, b, t\). For \({\Lambda }< m_t\), the top quarks are integrated out, and only the Wilson coefficients for \(q = u, d, s, c, b\) need to be specified. This is done automatically in our fitting procedures, such that effectively both EFTs are used in the fit, according to the numerical value of the scale \({\Lambda }\).
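The automatic switch between the five- and six-flavour EFTs described above can be sketched as follows (a minimal illustration, not part of the GAMBIT pipeline; the function name and the numerical top mass are our own choices):

```python
M_TOP = 172.76  # illustrative top-quark mass in GeV (assumed value)

def active_quark_flavours(scale_lambda):
    """Quark flavours whose Wilson coefficients must be specified at the
    new-physics scale: six flavours above m_t, five below (with the
    top quark integrated out)."""
    if scale_lambda > M_TOP:
        return ["u", "d", "s", "c", "b", "t"]
    return ["u", "d", "s", "c", "b"]
```

For example, a scan point with \({\Lambda } = 100\,\text {GeV}\) would use the five-flavour EFT, while \({\Lambda } = 1\,\text {TeV}\) would require coefficients for all six flavours.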
Although, a priori, the Wilson coefficients for each quark flavour are independent, we will restrict ourselves to the assumption of minimal flavour violation (which implies that the Wilson coefficients are equal for all up-type quarks and equal for all down-type quarks) and the assumption of isospin invariance (which implies that the up-type and down-type coefficients coincide).^{Footnote 2} Hence, each operator comes with only one free parameter in addition to the global parameters \({\Lambda }\) and \(m_\chi \). Under these assumptions, the two EFTs above and below the EW scale have the same number of free parameters.
2.1 Running and mixing
For many applications, the RG running of the Wilson coefficients (i.e. their dependence on the energy scale \(\mu \)) can be neglected. In fact, the operators \({\mathcal {Q}}_{1}^{(6)}\), \({\mathcal {Q}}_{3}^{(6)}\) and \({\mathcal {Q}}_{5{-}8}^{(7)}\) have vanishing anomalous dimensions, while \({\mathcal {Q}}_{2}^{(6)}\) and \({\mathcal {Q}}_{4}^{(6)}\), as well as \({\mathcal {Q}}_{1{-}4}^{(7)}\), exhibit no running at one-loop order in QCD [77]. Nevertheless, there are two cases when the effects of running can be important:

1. Mixing: Different operators can mix with each other under RG evolution, such that operators assumed negligible at one scale may give a relevant contribution at a different scale. This is particularly important in the context of direct detection, because for certain operators the DM-nucleon scattering cross-section is strongly suppressed in the non-relativistic limit. In such a case, the dominant contribution to direct detection may arise from operators induced only at the loop level [78, 79]. In our case, the dominant effects arise from the top-quark Yukawa coupling and are discussed below.

2. Threshold corrections: Whenever the scale \(\mu \) drops below the mass of one of the quarks, the number of active degrees of freedom is reduced and a finite correction to various operators arises. In our context, the only effect is the matching of the scalar and pseudoscalar quark operators onto the gluon operators at the heavy quark thresholds, which is given in Eq. (17).
Mixing of the tensor operators \({\mathcal {Q}}_{9,10}^{(7)}\) above the EW scale and subsequent matching gives rise to the dimension-five dipole operators
$$\begin{aligned} {\mathcal {Q}}_{1}^{(5)} = \frac{e}{8\pi ^2}\,({\bar{\chi }}\sigma ^{\mu \nu }\chi )\,F_{\mu \nu } \,, \qquad {\mathcal {Q}}_{2}^{(5)} = \frac{e}{8\pi ^2}\,({\bar{\chi }}i\sigma ^{\mu \nu }\gamma _5\chi )\,F_{\mu \nu } \,, \end{aligned}$$

where \(F_{\mu \nu }\) is the electromagnetic field strength tensor and e is the electromagnetic charge. These operators give an important contribution to direct detection experiments and are thus kept.^{Footnote 3}
In the present work we include these effects as follows. To calculate the Wilson coefficients at the hadronic scale \(\mu = 2\,\text {GeV}\) (relevant for direct detection) we make use of the public code DirectDM v2.2.0 [67, 68], which calculates the RG evolution of the operators defined above, including threshold corrections and mixing effects. The code furthermore performs a matching of the resulting operators at \(\mu = 2\,\text {GeV}\) onto the basis of non-relativistic effective operators relevant for DM direct detection (see Sect. 3.1).
DirectDM currently requires as input the Wilson coefficients in the five-flavour scheme given at the scale \(m_Z = 91.1876\,\text {GeV}\). For \({\Lambda }< m_t\) (five-flavour EFT), we can therefore directly pass the Wilson coefficients defined above to DirectDM. For \({\Lambda }> m_t\) (six-flavour EFT), there are three additional effects that are considered. First, as pointed out in Ref. [80], the tensor operators give a contribution to the dipole operators at the one-loop level, which is given in Eq. (20).^{Footnote 4}
Second, as pointed out first in Ref. [81], the operators with an axial-vector top-quark current mix into the operators with light-quark vector currents. The relevant effects, obtained after integrating out the Z boson at the weak scale, are given in Ref. [78] in terms of the weak mixing angle \(\theta _w\) (with \(s_w\equiv \sin \theta _w\)) and the Higgs field vacuum expectation value \(v=246\,\)GeV. The flavour-universal UV contributions largely compensate the mixing effect in the fit; the remnant effect, due to the isospin-breaking Z couplings, is small.
Third, in order to match the EFT with six active quark flavours onto the five-flavour scheme, we need to integrate out the top quark and apply the top quark threshold corrections given in Eq. (17). We neglect any other effects of RG evolution between the scales \({\Lambda }\) and \(m_Z\), i.e. all Wilson coefficients other than those affected by the mixing and threshold effects described above are directly passed to DirectDM.^{Footnote 5}
For the purpose of calculating the LHC constraints, we neglect the effects of running and do not consider loop-induced mixing between different operators, which is a good approximation for the dimension-six operators and the gluon operators. For the remaining dimension-seven operators mixing effects are known to be important in principle [82], but these operators are currently unconstrained by the LHC in the parameter region where the EFT is valid (see Sect. 2.2). Likewise we also calculate DM annihilation cross-sections at tree level. In particular, in these calculations we neglect the running of the strong coupling \(\alpha _s\) and use the pole quark masses \(m_q^{\text {pole}}\) instead of the running quark masses. Moreover, we neglect a small loop-level contribution from the scalar and pseudoscalar quark operators to the gluon operators.
2.2 EFT validity
A central concern when employing an EFT to capture the effects of new physics is that the scale of new physics must be sufficiently large compared to the energy scales of interest for the EFT description to be valid. Unfortunately, the point at which the EFT breaks down is difficult to determine from the low-energy theory alone. Considerations of unitarity violation make it possible to determine the scale where the EFT becomes unphysical, but in many cases the EFT description already fails at lower energies, in particular if the UV completion is weakly coupled.
To address this issue in the present study, we simultaneously vary the overall scale \({\Lambda }\), which corresponds to the energy where new degrees of freedom become relevant and the EFT description breaks down, and the Wilson coefficients for each operator. Doing so introduces a degeneracy, because cross-sections are invariant under the rescaling \({\Lambda }\rightarrow \alpha {\Lambda }\) and \({\mathcal {C}}\rightarrow \alpha ^{d-4}\,{\mathcal {C}}\). However, the advantage of this approach is that the parameter \({\Lambda }\) can be used to determine which constraints can be trusted in the EFT limit. This is illustrated in Fig. 1, which compares our approach of varying \({\Lambda }\) and \({\mathcal {C}}\) separately to the naive approach where only the combination \({\mathcal {C}}/{\Lambda }^{d-4}\) is constrained.
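The degeneracy can be made explicit numerically. In this hedged sketch, `effective_coupling` (our own name) stands for the combination \({\mathcal {C}}/{\Lambda }^{d-4}\) on which cross-sections depend:

```python
def effective_coupling(wilson_c, lam, dim):
    """Combination C / Lambda^(d-4) entering observables for an
    operator of mass dimension dim."""
    return wilson_c / lam ** (dim - 4)

# Rescaling Lambda -> alpha*Lambda together with C -> alpha^(d-4)*C
# leaves the combination (and hence all cross-sections) unchanged.
c, lam, dim, alpha = 1.2, 1000.0, 7, 2.5
original = effective_coupling(c, lam, dim)
rescaled = effective_coupling(alpha ** (dim - 4) * c, alpha * lam, dim)
```

The invariance holds for any \(\alpha > 0\), which is why \({\Lambda }\) only becomes physically meaningful once the Wilson coefficients are bounded by perturbativity.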
We emphasize that this approach assumes the same new-physics scale for all effective operators, even though they may be generated through different mechanisms, and hence at different scales, in the UV. In practice, one should think of \({\Lambda }\) as the minimum of all of these scales, i.e. the energy at which new degrees of freedom first become relevant. These new degrees of freedom may not contribute to all processes, such that some effective operators may provide an accurate description even at energies above \({\Lambda }\). Whether or not this is the case cannot be determined from the low-energy viewpoint, such that we conservatively limit the EFT validity to energies below \({\Lambda }\).
For the purpose of direct detection constraints, the only requirement on \({\Lambda }\) is that it is larger than the hadronic scale, so that the effective operators can be written in terms of free quarks and gluons. This is the case for \({\Lambda } \gtrsim 2\,\text {GeV}\), which will always be satisfied in the present study. However, in order to evaluate direct detection constraints, it is necessary to determine the relic abundance of DM particles, which depends on the cross-sections for the processes \(\chi {\bar{\chi }} \rightarrow q {\bar{q}}\) or \(\chi {\bar{\chi }} \rightarrow g g\), just as in the case of indirect detection constraints (see Sect. 3.3). For this calculation to be meaningful in the EFT framework, we require \({\Lambda }> 2 m_\chi \). Parameter points with smaller values of \({\Lambda }\) will thus be invalidated. A dedicated study of direct detection constraints for \({\Lambda }< 2 m_\chi \) will be left for future work.
In the context of LHC searches for DM, EFT validity requires that the invariant mass of the DM pair produced in a collision satisfies \(m_{\chi \chi } < {\Lambda }\) [83]. To obtain robust constraints, only events with smaller energy transfer should be included in the calculation of likelihoods. The problem with this prescription is that \(m_{\chi \chi }\) does not directly correspond to any observable quantity (such as the missing energy of the event) and hence the impact of varying \({\Lambda }\) on predicted LHC spectra is difficult to assess. One possible way to address this issue would be to generate new LHC events for each parameter point and include only those events with small enough \(m_{\chi \chi }\) in the likelihood calculation, but this is not computationally feasible in the context of a global scan.
In the present work, we adopt the following simpler approach: rather than comparing \({\Lambda }\) to the invariant mass of the DM pair, we compare it to the typical overall energy scale of the event, which can be estimated by the amount of missing energy produced. In other words, we do not modify the missing energy spectrum for \(E_T^{\text {miss}} < {\Lambda }\) and only apply the EFT validity requirement for larger values of \(E_T^{\text {miss}}\). This approach is less conservative than the one advocated, for instance, in Refs. [63, 64], where the energy scale of the event is taken to be the partonic centre-of-mass energy \(\sqrt{{\hat{s}}}\), but it has the crucial advantage that it can be applied after event generation, since the differential cross-section with respect to missing energy is exactly the quantity that is directly compared to data.^{Footnote 6}
In the following, we will consider two different prescriptions for how to impose the EFT validity. The first one is to introduce a hard cutoff, i.e. to set \(\mathrm {d}\sigma /\mathrm {d}E_T^{\text {miss}} = 0\) for \(E_T^{\text {miss}} > {\Lambda }\). The second, more realistic, prescription is to introduce a smooth cutoff that leads to a non-zero but steeply falling missing energy spectrum above \({\Lambda }\). For this we make the replacement

$$\begin{aligned} \frac{\mathrm {d}\sigma }{\mathrm {d}E_T^{\text {miss}}} \rightarrow \frac{\mathrm {d}\sigma }{\mathrm {d}E_T^{\text {miss}}} \times \left( \frac{E_T^{\text {miss}}}{{\Lambda }} \right) ^{-a} \end{aligned}$$

for \(E_T^{\text {miss}} > {\Lambda }\). Here a is a free parameter that depends on the specific UV completion. The limits \( a \rightarrow 0 \) and \(a \rightarrow \infty \) correspond to no truncation and an abrupt truncation above the cutoff, respectively. For the case that the EFT results from the exchange of an s-channel mediator with mass close to \({\Lambda }\), one finds \(a \approx 2\) [30]. Rather than taking inspiration from a specific UV completion, we will instead keep a as a free parameter in the interval [0, 4] and find the value that gives the best fit to data at each parameter point. This approach typically leads to conservative LHC bounds, in the sense that much stronger exclusions may be obtained in specific UV completions if the heavy particles that generate the effective DM interactions can be directly produced at the LHC. However, this truncation procedure can lead to unrealistic spectral shapes with sharp features that may be tuned to fit fluctuations in the data. As will be discussed in more detail in Sect. 4, any explanation of data excesses through this approach must be interpreted with care.
Without upper bounds on the Wilson coefficients, any requirement on EFT validity could be satisfied by making both \({\Lambda }\) and the Wilson coefficients arbitrarily large. We therefore require \(|{\mathcal {C}}| \le 4\pi \), which is necessary for a perturbative UV completion and ensures that there is no unitarity violation in the validity range of the EFT [62].
One drawback of this prescription is that the EFT validity requirement depends on the normalisation of the effective operators. For example, we have written \({\mathcal {Q}}_{1,2}^{(7)}\) with a prefactor \(\alpha _s / (12\pi )\) and \({\mathcal {Q}}_{3,4}^{(7)}\) with a prefactor \(\alpha _s / (8\pi )\) to reflect the fact that in many UV completions, these operators would be generated at the one-loop level. If these operators are instead generated at tree level (e.g. from a strongly interacting theory), it would be more appropriate to write the prefactor as \(4\pi \alpha _s\). With the latter convention any constraint on the new physics scale \({\Lambda }\) becomes stronger by a factor \((48\pi ^2)^{1/3} \approx 5.3\) for \({\mathcal {Q}}_{1,2}^{(7)}\) and by a factor \((32\pi ^2)^{1/3} \approx 4.6\) for \({\mathcal {Q}}_{3,4}^{(7)}\), meaning that much larger values of \({\Lambda }\) are experimentally testable and the range of EFT validity is substantially increased. We have confirmed explicitly that the results presented in Sect. 4 do not depend on the specific definition of the Wilson coefficients for \({\mathcal {Q}}_{1{-}4}^{(7)}\).^{Footnote 7}
2.3 Parameter ranges
In this study we focus on the following parameter regions. In order to be able to neglect QCD resonances in the process \(\chi {\bar{\chi }} \rightarrow q {\bar{q}}\), we restrict ourselves to \(m_\chi > 5\,\text {GeV}\). In order to have a sufficiently large separation of scales between the new physics scale \({\Lambda }\) and the hadronic scale, we also require \({\Lambda }> 20\,\text {GeV}\). As discussed in Sect. 2.2, we furthermore impose the bound \(|{\mathcal {C}}| \le 4\pi \) on all Wilson coefficients and the bound \({\Lambda }> 2 m_\chi \). The upper bounds on \(m_\chi \) and \({\Lambda }\) depend on the details of the scans that we perform and will be discussed in Sect. 4.
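The parameter-region requirements above can be collected into a single validity check. This is a sketch under our own simplifications (flat list of coefficients, \(4\pi \) perturbativity bound as discussed in Sect. 2.2); it is not the actual scan-point validation code:

```python
import math

def valid_parameter_point(m_chi, lam, wilson_coeffs):
    """True if a point satisfies the cuts quoted in the text:
    m_chi > 5 GeV, Lambda > 20 GeV, Lambda > 2*m_chi,
    and |C| <= 4*pi for every Wilson coefficient."""
    if m_chi <= 5.0 or lam <= 20.0 or lam <= 2.0 * m_chi:
        return False
    return all(abs(c) <= 4.0 * math.pi for c in wilson_coeffs)
```

Points failing any of these cuts are invalidated before likelihood evaluation.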
3 Constraints
In this section we describe the constraints relevant for our model. A summary of all likelihoods included in our scans is provided in Table 1. For each likelihood that directly constrains the interactions of the DM particle we also quote the background-only log-likelihood \(\ln {\mathcal {L}}^{\text {bg}}\) obtained when setting all Wilson coefficients to zero. For the remaining likelihoods we instead quote the maximum achievable value of the log-likelihood \(\ln {\mathcal {L}}^{\text {max}}\). The sum of all these contributions, \(\ln {\mathcal {L}}^{\text {ideal}} = 105.3\), will be used to calculate log-likelihood differences below.
3.1 Direct detection
Direct detection experiments search for the scattering of DM particles from the Galactic halo off nuclei in an ultra-pure target by measuring the energy \(E_{\text {R}}\) of recoiling nuclei. The differential event rate with respect to recoil energy is given by

$$\begin{aligned} \frac{\mathrm {d}R}{\mathrm {d}E_{\text {R}}} = \frac{\rho _0}{m_T \, m_\chi } \int _{v > v_{\text {min}}} \mathrm {d}^3 v \, v \, f(v) \, \frac{\mathrm {d}\sigma }{\mathrm {d}E_{\text {R}}} \,, \end{aligned}$$
where \(\rho _0\) is the local DM density, \(m_T\) is the target nucleus mass, f(v) is the local DM velocity distribution and

$$\begin{aligned} v_{\text {min}} = \sqrt{\frac{m_T E_{\text {R}}}{2 \mu ^2}} \end{aligned}$$
is the minimal DM velocity to cause a recoil carrying away a kinetic energy \(E_{\text {R}}\), where \(\mu = m_T \, m_\chi / (m_T + m_\chi )\) is the reduced mass of the DM-nucleus system.
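In natural units the kinematics above reduce to a couple of lines; a hedged numerical sketch (function names ours; masses and recoil energy in GeV, speed in units of c):

```python
import math

def reduced_mass(m_t, m_chi):
    """Reduced mass mu = m_T * m_chi / (m_T + m_chi) of the
    DM-nucleus system."""
    return m_t * m_chi / (m_t + m_chi)

def v_min(e_r, m_t, m_chi):
    """Minimal DM speed (units of c) producing a nuclear recoil of
    kinetic energy e_r, from v_min = sqrt(m_T * E_R / (2 * mu**2))."""
    mu = reduced_mass(m_t, m_chi)
    return math.sqrt(m_t * e_r / (2.0 * mu ** 2))
```

For an illustrative xenon-like target (\(m_T \approx 122\,\text {GeV}\)), \(m_\chi = 100\,\text {GeV}\) and \(E_{\text {R}} = 10\,\text {keV}\), this gives \(v_{\text {min}}\) of a few times \(10^{-4}\,c\), comparable to typical Galactic DM velocities.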
The local DM density and velocity distribution are not very well known and introduce sizeable uncertainties in the prediction of experimental signals (see the discussion of nuisance parameters in Sect. 3.6). Nevertheless, the greatest challenge in the present context is the calculation of the differential scattering cross-section \(\text {d}\sigma / \text {d}E_{\text {R}}\). For this purpose, one needs to map the effective interactions between relativistic DM particles and quarks or gluons defined above onto effective interactions between non-relativistic DM particles and nucleons \(N = p,n\). The EFT of non-relativistic interactions can be written as

$$\begin{aligned} {\mathcal {L}}_{\text {NR}} = \sum _{i,N} c_i^N(q^2) \, {\mathcal {O}}_i^N \,, \end{aligned}$$
where the operators \({\mathcal {O}}^N_i\) depend only on the DM spin \(\mathbf {S}_\chi \), the nucleon spin \(\mathbf {S}_N\), the momentum transfer \(\mathbf {q}\) and the DM-nucleon relative velocity \(\mathbf {v}\) [4, 53, 99].
The non-relativistic operators can be divided into four categories according to whether or not they depend on the nucleon spin \(\mathbf {S}_N\), such that scattering is suppressed for nuclei with vanishing spin, and whether or not they depend on \(\mathbf {q}\) and/or \(\mathbf {v}\), such that scattering is suppressed in the non-relativistic limit. Specifically, \({\mathcal {O}}^N_1\) leads to spin-independent (SI) unsuppressed scattering, \({\mathcal {O}}^N_4\) leads to spin-dependent (SD) unsuppressed scattering, \({\mathcal {O}}^N_5\), \({\mathcal {O}}^N_8\) and \({\mathcal {O}}^N_{11}\) lead to SI momentum-suppressed scattering, and \({\mathcal {O}}^N_6\), \({\mathcal {O}}^N_7\), \({\mathcal {O}}^N_9\), \({\mathcal {O}}^N_{10}\) and \({\mathcal {O}}^N_{12}\) lead to SD momentum-suppressed scattering, which is typically unobservable. For the relativistic operators included in this study, the dominant type of interaction they induce in the non-relativistic limit is given in Table 2.
The coefficients \(c_i^N(q^2)\) can be directly calculated from the Wilson coefficients of the relativistic operators at \(\mu = 2\,\text {GeV}\). The explicit dependence on the momentum transfer \(q = \sqrt{2 m_T E_{\text {R}}}\) is a result of two effects. First, under RG evolution some of the effective DM-quark operators mix into the DM dipole operators (see Eq. (20)). These operators then induce long-range interactions, i.e. contributions to the \(c_i^N(q^2)\) that scale as \(q^{-2}\). Since the momentum transfer can be very small in direct detection experiments, these contributions can be important in spite of their loop suppression. Second, the coefficients include nuclear form factors, obtained by evaluating expectation values of quark currents like \(\langle N' | {\overline{q}} \gamma ^\mu q | N \rangle \). These form factors can be calculated in chiral perturbation theory and exhibit a pion pole for axial and pseudoscalar currents, i.e. a divergence for \(q^2 \rightarrow -m_\pi ^2\) [100, 101].
All of these effects are fully taken into account in \(\textsf {DirectDM} \), which calculates the coefficients \(c_i^N(q^2)\) for given Wilson coefficients at a higher scale (see App. A). These coefficients are then passed on to DDCalc v2.2.0 [52, 110], which calculates the differential cross-section for each operator \({\mathcal {O}}^N_i\) (including interference) and target element of interest. DDCalc also performs the velocity integrals needed for the calculation of the differential event rate, and the convolution with energy resolution and detector acceptance needed to predict signals in specific experiments:
where M is the detector mass, \(T_{\text {exp}}\) is the exposure time and \(\phi (E_{\text {R}})\) is the acceptance function.
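Schematically, the predicted number of events follows from a single quadrature over tabulated inputs (a sketch only; DDCalc additionally handles energy resolution and per-experiment details):

```python
import numpy as np

def predicted_events(E_R, dR_dE, acceptance, mass_kg, exposure_days):
    """N_p = M * T_exp * integral of phi(E_R) * dR/dE_R over recoil energy,
    evaluated on a tabulated grid with the trapezoidal rule.

    E_R        : recoil-energy grid [keV]
    dR_dE      : differential rate on that grid [events / kg / day / keV]
    acceptance : acceptance function phi(E_R) on the grid (dimensionless)
    """
    return mass_kg * exposure_days * np.trapz(acceptance * dR_dE, E_R)
```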
By combining DirectDM and DDCalc, we can obtain likelihoods for a wide range of direct detection experiments. In the present analysis, we include constraints from the most recent XENON1T analysis [93], the LUX 2016 [88], PandaX 2016 [91] and 2017 [92] analyses, CDMSlite [84], CRESST-II [85] and CRESST-III [86], PICO-60 2017 [89] and 2019 [90], and DarkSide-50 [87].
The hadronic inputs to DirectDM v2.2.0 [67] were updated with the most recent \(N_f=2+1\) lattice QCD results, following the FLAG quality requirements [107], see Table 3. All the inputs are evaluated at \(\mu =2\) GeV. The hadronic matrix elements for protons and neutrons are related using isospin conservation.
For operators with vector quark currents, the least well known hadronic matrix elements are those involving the strange quark, while the matrix elements for operators with u and d quark vector currents have negligible errors at the precision to which we are working. Since the strange quark vector current vanishes at \(q^2=0\), the first non-vanishing contribution is obtained only at next-to-leading order in the chiral expansion, and depends on the strange quark charge radius, \(r_s^2 = -0.0045(14)\,\)fm\(^2\) [104, 105]. For the strange-quark contribution to the nucleon magnetic moment, \(\mu _s= -0.036(21)\) [104, 105], we inflate the errors according to the Particle Data Group prescription.
The scalar form factors at zero recoil are obtained from expressions in Ref. [103], namely
where the upper (lower) sign is for the proton (neutron). We use a rather conservative estimate, \(\sigma _{\pi N}=(50\pm 15)\) MeV [101], that covers the spread between the lattice QCD [111,112,113,114,115,116,117,118] and pionic atom determinations [113, 114, 117, 119,120,121,122,123]. The other two parameters are \(\xi \equiv (m_d-m_u)/(m_d + m_u)= 0.36 \pm 0.04\) and \(B c_5 \, (m_d-m_u)=-(0.51\pm 0.08)\) MeV [103].
The matrix elements of tensor currents are described by three sets of form factors, but only two, \(g_{T}^{q}\) and \(B_{T,10}^{q/N} (0)\), enter the chirally leading expressions. For \(g_{T}^{q}\), the only \(N_f=2+1\) result from Ref. [117] does not satisfy the FLAG quality requirements, so we use the \(N_f=2+1+1\) results from Ref. [106] instead; the difference between the \(N_f=2+1\) and \(N_f=2+1+1\) results is expected to be small. For \(B_{T,10}^{q/N}(0)\), we use the results from the constituent quark model in Ref. [109].
3.2 Relic abundance of DM
The Early Universe time evolution of the number density of the \(\chi \) particles, \(n_\chi \), is governed by the Boltzmann equation [124]
where \(n_{\chi ,\text {eq}}\) is the number density in equilibrium, H(t) is the Hubble rate and \(\langle \sigma v_{\text {{rel}}}\rangle \) is the thermally averaged cross-section times the relative (Møller) velocity, given by
where \(K_{1,2}\) are the modified Bessel functions and \(v_{\mathrm{lab}}\) is the velocity of one of the annihilating (anti)DM particles in the rest frame of the other (for a discussion, see also Ref. [125]). We stress that there is no additional factor of 1/2 in the above equations. However, the fact that DM consists of Dirac particles implies that the total contribution to the observed DM density is given by \(n_\chi +n_{{{\bar{\chi }}}}=2n_\chi \) (disregarding the possibility of an initial asymmetry [126]).
We compute tree-level annihilation cross-sections using CalcHEP v3.6.27 [127, 128], where the implementation of the four-fermion interactions is generated by GUM [55, 56] from UFO files via the tool ufo_to_mdl (described in Appendix B). To ensure the EFT picture is valid, we invalidate points where \({\Lambda }\le 2 m_\chi \). We obtain the relic density of \(\chi \) by numerically solving Eq. (28) at each parameter point, assuming the standard cosmological history^{Footnote 8} and using the routines implemented in DarkSUSY v6.2.2 [130, 131] via DarkBit. We then compare the prediction to the relic density constraint from Planck 2018: \({\varOmega }_{\text {DM}}\,h^2 = 0.120 \pm 0.001\) [98]. We include a \(1\%\) theoretical error on the computed values of the relic density, which we combine in quadrature with the error on the Planck measured value. More details on this prescription can be found in Refs. [48, 52].
We note that our uncertainty estimate does not include uncertainties in the calculation of the annihilation cross-section very close to quark thresholds, which may be considerably larger. Moreover, our approach does not capture the potential effect of additional degrees of freedom on \(\langle \sigma v_\text {rel}\rangle \) during freeze-out. The resulting effects, such as resonances or co-annihilations, could either increase or decrease the resulting value of \({\varOmega }_\chi \) (see e.g. Refs. [132, 133]), so the relic density constraint should be interpreted with care for \({\Lambda }\sim 2m_\chi \), i.e. close to the EFT validity boundary (see Sect. 2.2).
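For orientation only (our scans solve Eq. (28) numerically via DarkSUSY), the textbook semi-analytic freeze-out estimate relates \(\langle \sigma v_{\text {rel}}\rangle \) to \({\varOmega }_\chi h^2\); the sketch below includes an explicit factor of two for the Dirac case:

```python
import math

M_PL = 1.22e19            # Planck mass [GeV]
GEV2_TO_CM3S = 1.167e-17  # 1 GeV^-2 = 1.167e-17 cm^3/s

def relic_density_estimate(m_chi, sigma_v_cm3s, g_dof=2.0, g_star=86.25):
    """Standard freeze-out estimate (see e.g. Kolb & Turner): solve
    x_f = ln[0.038 g M_Pl m <sigma v> / sqrt(g* x_f)] by fixed-point
    iteration, then Omega h^2 ~ 1.07e9 x_f / (sqrt(g*) M_Pl <sigma v>).
    The final factor 2 accounts for chi plus chi-bar (Dirac DM)."""
    sv = sigma_v_cm3s / GEV2_TO_CM3S   # convert to GeV^-2
    x_f = 20.0
    for _ in range(50):
        x_f = math.log(0.038 * g_dof * M_PL * m_chi * sv / math.sqrt(g_star * x_f))
    omega_h2 = 1.07e9 * x_f / (math.sqrt(g_star) * M_PL * sv)
    return 2.0 * omega_h2
```

With the canonical \(\langle \sigma v\rangle \sim 3\times 10^{-26}\,\text {cm}^3\,\text {s}^{-1}\) this reproduces \({\varOmega }h^2 \sim {\mathcal {O}}(0.1)\), but it is accurate only to tens of per cent and ignores resonances and co-annihilations, which is precisely why the full numerical treatment is used in the scans.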
The very nature of the EFT construction implies additional degrees of freedom above the energy scale \({\Lambda }\). Given the potential for a rich dark sector containing \(\chi \), and in particular, the possibility of additional DM candidates not captured by the EFT, we will by default not demand that the particle \(\chi \) constitutes all of the observed DM, i.e. we allow for the possibility of other DM species to contribute to the observed relic density. In practice, this means that we modify the relic density constraint in such a way that the likelihood is flat if the predicted value is smaller than the observed one. In this case, we rescale all predicted direct and indirect detection signals by
and \(f_\chi ^2\), respectively. In doing so, we assume that the fraction \(f_\chi \) is the same in all astrophysical systems and that any additional DM population does not contribute to signals in these experiments. In a second set of scans we then impose a stricter requirement, namely that the DM particle under consideration saturates the DM relic abundance (\(f_\chi \approx 1\)) rather than imposing the relic density as an upper bound (\(f_\chi \le 1\)).^{Footnote 9}
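In code, this rescaling prescription reads:

```python
def dm_fraction(omega_chi_h2, omega_dm_h2=0.120):
    """f_chi = Omega_chi / Omega_DM, capped at 1: the relic density acts as
    an upper bound (the likelihood is flat below the Planck value)."""
    return min(1.0, omega_chi_h2 / omega_dm_h2)

def rescaled_signals(dd_signal, id_signal, f_chi):
    """Direct-detection signals scale with f_chi, indirect-detection
    (annihilation) signals with f_chi^2."""
    return f_chi * dd_signal, f_chi**2 * id_signal
```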
3.3 Indirect detection with gamma rays
If DM was held in thermal equilibrium in the early universe via collisions with SM particles, then it can still annihilate today, especially in regions of high DM density. As with the relic abundance calculation, for the effective picture to hold for DM annihilation we must impose \({\Lambda }> 2 m_\chi \).
Gamma rays from dwarf spheroidal galaxies (dSphs) are a particularly robust way of constraining annihilation signals from DM [134]. In general, for a given energy bin i, the DM-induced \(\gamma \)-ray flux from target k can be written in the factorised form \({\varPhi }_i \cdot J_k\), where the details of the particle physics processes are encoded in \({\varPhi }_i\), and the details of the astrophysics in \(J_k\). See the DarkBit manual [52] for more details.
In general, only operators that lead to s-wave annihilation give rise to observable gamma-ray signals; see Table 2. For some of the operators, the leading contribution to the annihilation cross-section is p-wave suppressed, i.e. proportional to \(v_{\text {rel}}^2\). As DM in dSphs is extremely cold, with \(\langle v^2\rangle ^{1/2}\sim 10^{-4}\), this factor is very small, and the resulting limits are exceedingly weak. We therefore neglect p-wave contributions to all annihilation processes here.
For swave annihilation, one obtains
where \(f_\chi \) is the DM fraction defined in Eq. (30), \((\sigma v)_{0,j}\) denotes the zero-velocity limit of the cross-section for \(\chi {{\bar{\chi }}}\rightarrow j\) and \(N_{\gamma ,j}\) is the number of photons per annihilation resulting from the final state channel j. The prefactor 1/4 accounts for the Dirac nature of the DM particles (under the assumption that \(n_\chi =n_{{{\bar{\chi }}}}\)). Again, we use CalcHEP to compute annihilation cross-sections, with the CalcHEP model files generated by ufo_to_mdl via GUM (see Appendix B). The photon yields \({dN_{\gamma ,j}}/{dE}\) used in DarkBit are based on tabulated Pythia runs, as provided by DarkSUSY.
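For illustration, the particle-physics factor for a single energy bin can be sketched as below. The 1/4 is the Dirac symmetry factor discussed above; placing the \(1/(4\pi m_\chi ^2)\) normalisation in \({\varPhi }_i\) rather than in \(J_k\) is a convention choice, and the actual DarkBit normalisation may differ:

```python
import numpy as np

def phi_bin(sv0, Ngamma, m_chi, f_chi=1.0):
    """Particle-physics factor Phi_i in the factorised flux Phi_i * J_k.

    sv0    : (sigma v)_{0,j} for each final state j (zero-velocity limit)
    Ngamma : photons per annihilation N_{gamma,j} in this energy bin
    The 1/4 is the Dirac symmetry factor (n_chi = n_chibar); f_chi^2 is
    the indirect-detection rescaling for a DM sub-component."""
    sv0, Ngamma = np.asarray(sv0), np.asarray(Ngamma)
    return f_chi**2 * np.sum(sv0 * Ngamma) / (4 * 4*np.pi * m_chi**2)
```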
The J-factor for each dSph k is the line-of-sight integral of the squared DM density (here assumed to follow an NFW profile), integrated over the solid angle \({\varOmega }\),
where \(D_k\) is the distance to the dSph. In our analysis we use the Pass-8 combined analysis of 15 dSphs based on 6 years of Fermi-LAT data [96]. We use the gamLike v1.0.1 interface within DarkBit [52] to compute the likelihood for the gamma-ray observations, \(\ln {\mathcal {L}}_{\mathrm{exp}}\), constructed from the product \({\varPhi }_i \cdot J_k\) and summed over all targets and energy bins,
We also include a contribution from profiling over the J-factors of each dSph, \(\ln {\mathcal {L}}_J = \sum _k \ln {\mathcal {L}}(J_k)\) [52, 96], such that the full likelihood reads
Gamma rays from the Galactic centre region provide a promising complementary way of constraining a signal from annihilating DM. The J-factor is expected to be significantly higher than for dSphs; however, this conclusion is largely based on numerical simulations of gravitational clustering rather than on a direct analysis of kinematical data, because the gravitational potential within the solar circle is dominated by baryons, not by DM. Together with the dominant component of astrophysical gamma rays from this target region, this adds substantial uncertainty. As a result, Galactic centre observations with Fermi-LAT are somewhat less competitive than the dSph limits discussed above [135]. The upcoming Cherenkov Telescope Array (CTA), on the other hand, has a good chance of probing thermally produced DM up to particle masses of several TeV [136]. We will not include the projected CTA likelihoods in our scans, but indicate the reach of CTA when discussing our results.
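Returning to the dSph likelihood, the profiling over the J-factor of a single dwarf can be sketched as follows (an illustrative stand-in for the gamLike implementation, assuming a Gaussian likelihood in \(\log _{10} J\)):

```python
from scipy.optimize import minimize_scalar

def profiled_dwarf_lnlike(signal_lnlike, logJ_mean, logJ_err):
    """ln L for one dwarf, profiled over its J-factor:
       max over logJ of [ signal_lnlike(logJ) + ln N(logJ | mean, err) ].
    signal_lnlike: the experimental ln-likelihood as a function of
    log10(J), with all particle-physics factors held fixed."""
    def neg(logJ):
        lnL_J = -0.5 * ((logJ - logJ_mean) / logJ_err) ** 2
        return -(signal_lnlike(logJ) + lnL_J)
    res = minimize_scalar(neg, bounds=(logJ_mean - 5*logJ_err,
                                       logJ_mean + 5*logJ_err),
                          method="bounded")
    return -res.fun
```

For two Gaussian terms the profiled maximum is analytic, \(-\tfrac{1}{2}\,\Delta ^2/(\sigma _1^2+\sigma _2^2)\), which provides a simple check of the numerical profiling.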
3.4 Other indirect detection constraints
3.4.1 Solar capture
The presence of non-zero elastic scattering cross-sections with nuclei, combined with self-annihilation to heavy SM states, leads to an additional, unique signature of DM in the form of high-energy neutrinos from the Sun. If Milky Way DM scatters with solar nuclei and loses enough kinetic energy to fall below the local escape velocity, it becomes gravitationally bound. As long as it is above the evaporation mass threshold of \(\simeq 4\) GeV, captured DM will thermalize in a small region near the solar centre and annihilate to SM products, which then produce neutrinos via regular decay processes. These are distinct from the neutrinos from solar fusion, as they have much higher energies than the \(\sim \) MeV scales of fusion processes. Leading constraints have been obtained by Super-Kamiokande down to a few GeV [137], and by the IceCube South Pole Neutrino Observatory between 20 and 10\(^4\) GeV [138]. For typical annihilation cross-sections, the captured DM population reaches an equilibrium that is determined by the capture rate. For each likelihood evaluation, we obtain the non-relativistic effective operators (Eq. (25)) as described in Sect. 3.1, using DirectDM to obtain the non-relativistic Wilson coefficients. These are passed to the public code Capt’n General [139], which computes the DM capture rate via an integral over the solar radius r and DM halo velocity u:
where \(w(r) = \sqrt{u^2 + v^2_{{\text {esc}},\odot }(r)}\) is the DM velocity at position r, and
is the probability of scattering from velocity w to a velocity less than the local solar escape velocity \(v_{\text {esc},\odot }(r)\), \({d\sigma _{i}}/{dE_{\text {R}}}\) is the DM-nucleus scattering cross-section, \(n_i(r)\) is the number density of species i with atomic mass \(m_{N,i}\), and \(\mu _i = m_\chi /m_{N,i}\). Version 2.1 of Capt’n General uses the method described in detail in Ref. [140], separating the DM-nucleus cross-section into factors proportional to non-relativistic Wilson coefficients, powers of w and the exchanged momentum q, and operator-dependent nuclear response functions computed in Ref. [140] for the 16 most abundant elements in the Sun. Solar parameters are based on the Barcelona Group’s AGSS09ph Standard Solar Model [141, 142].
Annihilation cross-sections are computed as described in Sect. 3.2, via CalcHEP. Once the equilibrium population of DM in the Sun has been obtained, cross-sections and annihilation rates are passed to DarkSUSY, which computes the neutrino yields as a function of energy. These are finally passed to nulike v1.0.9 [143, 144], which computes event-level likelihoods based on a reanalysis of the 79-string IceCube search for DM annihilation in the Sun [97].
3.4.2 Cosmic microwave background
Additional constraints on the DM annihilation crosssection arise from the early universe, more specifically from observations of the Cosmic Microwave Background (CMB). Annihilating DM particles inject energy into the primordial plasma, which affects the reionisation history and alters the optical depth \(\tau \). The magnitude of this effect depends on the specific annihilation channel and how efficiently the injected energy is deposited. These details can be encoded in an effective efficiency coefficient \(f_{\text {eff}}\), which depends on the injected yields of photons, electrons and positrons, and thus on the DM mass and its branching ratios into different final states [145]. The CMB is then sensitive to the following parameter combination:
where \(\langle \sigma v_{\text {rel}} \rangle \approx (\sigma v)_0\) to a very good approximation during recombination; we thus also neglect p-wave contributions to all annihilation processes here.
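In the standard convention, the quantity constrained by the CMB is \(p_{\text {ann}} = f_{\text {eff}} \, (\sigma v)_0 / m_\chi \), rescaled by \(f_\chi ^2\) as in Eq. (30). A minimal sketch (additional symmetry factors for Dirac DM depend on the convention adopted):

```python
def p_ann(f_eff, sigma_v_cm3_per_s, m_chi_GeV, f_chi=1.0):
    """CMB energy-injection parameter in the standard convention:
       p_ann = f_chi^2 * f_eff * (sigma v)_0 / m_chi  [cm^3 s^-1 GeV^-1].
    Note: possible extra Dirac symmetry factors are convention-dependent
    and omitted here."""
    return f_chi**2 * f_eff * sigma_v_cm3_per_s / m_chi_GeV
```

For example, \(f_{\text {eff}} = 0.15\), \((\sigma v)_0 = 3\times 10^{-26}\,\text {cm}^3\,\text {s}^{-1}\) and \(m_\chi = 100\) GeV give \(p_{\text {ann}} \approx 0.45 \times 10^{-28}\,\text {cm}^3~\text {s}^{-1}~\text {GeV}^{-1}\), well below the Planck sensitivity.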
In order to calculate \(p_{\text {ann}}\) for a given parameter point, one first needs to calculate the injected spectrum of photons, electrons and positrons and then convolve the result with suitable transfer functions that link the energy injection rate to the energy deposition rate [146]. The first part of this calculation has been automated within DarkSUSY and is accessible via DarkBit. The second part relies on DarkAges [147] (which is part of the ExoCLASS branch of CLASS) and is accessible via CosmoBit [54], see Appendix C for further details.^{Footnote 10}
As the Planck collaboration only quotes the 95% credible interval for \(p_{\text {ann}}\) [98], the remaining challenge is to obtain a likelihood for \(p_{\text {ann}}\) from cosmological data. Although this likelihood can, in principle, be calculated for each parameter point individually using the CosmoBit interface to CLASS and the Planck likelihoods, carrying out such a large number of calculations would be prohibitively slow, in particular if the cosmological parameters of the \({\Lambda }\)CDM model are to be varied simultaneously. In the present work, we therefore adopt a simpler approach, where we first calculate the likelihood when varying \(p_{\text {ann}}\) (while profiling over the \({\Lambda }\)CDM and cosmological nuisance parameters). This approach yields
where \(p_{\text {ann}}^{-28} \equiv p_{\text {ann}} / \left( 10^{-28} \, \text {cm}^3~\text {s}^{-1}~\text {GeV}^{-1} \right) \). In arriving at this result, we have included the Planck TT,TE,EE+lowE+lensing likelihoods (using the ‘lite’ likelihood for multipoles \( \ell \ge 30 \), which only requires one additional nuisance parameter [148]), as well as the BAO data of 6dF [149], SDSS DR7 MGS [150], and the SDSS BOSS DR12 galaxy sample [151]. This profile likelihood, which reproduces the 95% credible interval obtained by the Planck collaboration [98], can then be used in all subsequent scans, so that only \(p_{\text {ann}}\) needs to be calculated for each parameter point and it is no longer necessary to call CLASS or plc.
3.4.3 Charged cosmic rays
Finally, DM particles annihilating in the Galactic halo also produce positrons, antiprotons and, to a lesser degree, heavier antinuclei that could in principle be observed in the spectrum of charged cosmic rays. Positrons quickly lose their energy through synchrotron radiation, and are thus a robust probe of exotic contributions from the local Galactic environment; the resulting bounds on DM annihilating to quarks or gluons are, however, much weaker than the other indirect detection constraints discussed here [152].^{Footnote 11} Antinuclei, on the other hand, probe a significant fraction of the entire Galactic halo, because energy losses are much less efficient in this case. For antiprotons, this generally leads to competitive constraints on DM annihilation signals [155,156,157], but it also means that such bounds necessarily depend strongly on uncertainties in the modelling of the production and propagation of cosmic rays in the Galactic halo. In addition to the dozen (or more) free parameters in the diffusion-reacceleration equations, there are significant uncertainties on the energy dependence of the nuclear cross-sections responsible for the conventional antiproton flux [158], as well as possible correlated systematics [159]. A full statistical analysis, which would require treating the large number of (effective) propagation parameters as nuisance parameters in our scans, is prohibitive in terms of computational cost [160] and hence beyond the scope of this work.
3.5 Collider physics
The effective operators defined in Sect. 2 allow for the pair production of WIMPs in proton–proton collisions at the LHC. If one of the incoming partons radiates a jet through initial state radiation (ISR), one can observe the process \(pp \rightarrow \chi \chi j\) as a single jet associated with missing transverse energy (\(E_T^{\text {miss}}\)). In this study, we include the CMS [95] and ATLAS [94] monojet analyses based on \(36\,\mathrm {fb}^{-1}\) and \(139\,\mathrm {fb}^{-1}\) of Run II data, respectively. ATLAS and CMS have performed a number of further searches for other types of ISR, leading for example to mono-photon signatures, but these are known to give weaker bounds on DM EFTs than monojet searches [24, 161, 162].
The expected number of events in a given bin of the \(E_T^{\text {miss}}\) distribution is
where \(L =36\,{\text {fb}}^{-1}\) or \(139\,{\text {fb}}^{-1}\) is the total integrated luminosity, \(\sigma \) is the total production cross-section, and the factor \((\epsilon A)\) is the efficiency times acceptance for passing the kinematic selection requirements of the analysis. Both \(\sigma \) and \((\epsilon A)\) can be obtained via Monte Carlo simulation, but given the dimensionality of the DM EFT parameter space it is computationally too expensive to perform these simulations on the fly during the parameter scan, as would be the standard approach to collider simulations within ColliderBit in GAMBIT.
Starting from UFO files generated using FeynRules v2.0 [163], we have therefore produced separate interpolations of \(\sigma \) and \(\epsilon A\) based on the output of Monte Carlo simulations with MadGraph_aMC@NLO v2.6.6 [164] (v2.9.2) for the CMS (ATLAS) analysis, interfaced to Pythia v8.1 [165] for parton showering and hadronization. The matching between MadGraph and Pythia is performed according to the CKKW prescription, and the detector response is simulated using Delphes v3.4.2 [166]. The ColliderBit code extension that enables \(\sigma \) and \((\epsilon A)\) interpolations to be used as an alternative to direct Monte Carlo simulation will be generalised and documented in the next major version of ColliderBit.
We only include those dimension-6 and dimension-7 EFT operators that are relevant for collider searches. The other operators give a negligible contribution, being suppressed either by the parton distribution functions (in the case of heavy quarks) or by a factor of the fermion mass (small in the case of light quarks).
To reduce the computation time for our study, we generate events in discrete grids of the Wilson coefficients and the DM mass. Separate grids are defined for each set of operators that do not interfere, such that the total number of events is simply the sum of the contributions calculated from each grid. At dimension 6, there is interference between two pairs of operators. For the corresponding Wilson coefficients, we parametrize the tabulated grids in terms of a mixing angle \(\theta \).
The CMS and ATLAS analyses have 22 and 13 exclusive signal regions, respectively, corresponding to the individual bins of the missing transverse energy distributions. As discussed below, the publicly available information makes it possible to combine all signal regions for the CMS analysis, while for the ATLAS analysis only a single signal region can be used at a time. To maximize the sensitivity of the ATLAS analysis, we combine the three highest missing-energy bins, for which systematic uncertainties in the background estimation (and hence their correlations) are negligible, such that the highest bin in our analysis contains all events above the corresponding missing-energy threshold.^{Footnote 12} Once the predicted yields for all bins have been evaluated, taking into account the EFT validity constraint as described in Sect. 2.2, we compute a likelihood for each analysis as follows.
For the CMS analysis, we follow the “simplified likelihood” method [167], since the required covariance matrix was published by CMS. In this approach, the full experimental likelihood function is approximated by a standard convolved Poisson–Gaussian form, with the systematic uncertainties on the background predictions treated as correlated Gaussian distributions:
For each signal region i, the observed yield, expected signal yield and expected background yield are given by \(n_i\), \(s_i\) and \(b_i\), respectively. The deviation from the nominal expected yield due to systematic uncertainties is given by \(\gamma _i\). The correlations between the different \(\gamma _i\) are encoded in the covariance matrix \({\varvec{{\varSigma }}}\) provided by CMS, where we also add the signal yield uncertainties in quadrature along the diagonal. We follow the procedure in Ref. [167] in treating the \(\gamma _i\) nuisance parameters as linear corrections to the expected yields. For every point in our scans of the DM EFT parameter space, we profile Eq. (39) over the 22 nuisance parameters in \({\varvec{\gamma }}\) to obtain a likelihood solely in terms of the set of DM EFT signal estimates \({\varvec{s}}\):
In the case of the ATLAS analysis, for which such a covariance matrix is not available, the conservative course of action is to calculate a likelihood using only the signal region with the best expected sensitivity. The ATLAS likelihood is therefore given by
where \({\mathcal {L}}_{{\text {ATLAS}}}(s_i, \hat{\hat{\gamma _i}})\) is the single-bin equivalent of Eq. (39), and i refers to the signal region with the best expected sensitivity, i.e. the signal region that would give the lowest likelihood in the case \(n_i = b_i\).
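The profiling of the CMS simplified likelihood over the correlated background nuisances can be sketched numerically as follows (Poisson constants dropped; the linear-correction treatment of Ref. [167] is replaced here, for illustration, by a direct shift of the expected yields):

```python
import numpy as np
from scipy.optimize import minimize

def profiled_simplified_lnlike(s, n, b, cov):
    """Simplified likelihood: Poisson counts with correlated Gaussian
    nuisances gamma shifting the background, profiled out numerically.
    Returns max over gamma of ln L(s, gamma), up to constant terms."""
    cov_inv = np.linalg.inv(cov)
    def neg_lnL(gamma):
        lam = np.clip(b + gamma + s, 1e-10, None)  # yields must stay positive
        lnP = np.sum(n * np.log(lam) - lam)        # Poisson terms (no n! term)
        return -(lnP - 0.5 * gamma @ cov_inv @ gamma)
    res = minimize(neg_lnL, np.zeros_like(b), method="BFGS")
    return -res.fun
```

When the data match the background exactly, adding signal can only lower the profiled likelihood, since any compensating background shift is penalised by the Gaussian constraint.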
The total LHC log-likelihood is then given by \(\ln {\mathcal {L}}_{{\text {LHC}}} = \ln {\mathcal {L}}_{{\text {CMS}}} + \ln {\mathcal {L}}_{{\text {ATLAS}}}\). However, due to the per-point signal region selection required in the evaluation of \(\ln {\mathcal {L}}_{{\text {ATLAS}}}\), the variation in typical yields between the different signal regions would manifest as a large variation in the effective likelihood normalization between different parameter points. To avoid this, we follow the standard approach in ColliderBit of using the log-likelihood difference
as the LHC log-likelihood contribution in the parameter scan [168].
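The per-point ATLAS signal-region selection described above can be sketched as below; comparing expected log-likelihood ratios rather than absolute log-likelihoods makes the comparison independent of bin-dependent constants:

```python
import numpy as np

def best_expected_sr(s, b):
    """Index of the signal region with the best expected sensitivity: the
    one with the lowest expected log-likelihood ratio ln[L(s)/L(0)] under
    the background-only Asimov assumption n_i = b_i."""
    s, b = np.asarray(s, float), np.asarray(b, float)
    delta_lnL = b * np.log1p(s / b) - s   # b*ln((b+s)/b) - s, always <= 0
    return int(np.argmin(delta_lnL))
```

For equal backgrounds the bin with the largest expected signal is chosen; for equal signals, a bin with a small background is preferred over one with a large background, as expected.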
When presenting the results of a global fit, we identify the maximum-likelihood point \({\varvec{{\Theta }}}_{\text {bestfit}}\) in the DM EFT parameter space and map out the \(1\sigma \) and \(2\sigma \) confidence regions defined via the likelihood ratio \({\mathcal {L}}({\varvec{{\Theta }}}) / {\mathcal {L}}({\varvec{{\Theta }}}_{\text {bestfit}})\). Thus, in cases where some region of the DM EFT parameter space can accommodate a modest excess in the collider data, other DM EFT parameter regions that might still perform better than the SM, or that are experimentally indistinguishable from the SM, can appear as excluded. While this is perfectly reasonable, given that the comparison is to the best-fit DM EFT point and not to the SM expectation, it is also interesting to study the global fit results under the assumption that mild excesses in the collider data do not in fact originate from a true new physics signal. A simple and pragmatic approach is then to replace \({\varDelta } \ln {\mathcal {L}}_{{\text {LHC}}}\) with a capped version,
This assigns the same log-likelihood value, \({\varDelta } \ln {\mathcal {L}}_{{\text {LHC}}}^{\text {cap}} = 0\), to all DM EFT parameter points whose predictions fit the collider data as well as, or better than, the SM prediction (\(\mathbf{s} = \mathbf{0}\)) does. Thus, analogous to how exclusion limits from LHC searches are constructed to only exclude new physics scenarios that predict too many signal events, the capped likelihood only penalizes parameter points for performing worse than the background-only scenario. The result obtained from using \({\varDelta } \ln {\mathcal {L}}_{{\text {LHC}}}^{\text {cap}}\) in a fit is therefore close to the result one would obtain by constructing a joint exclusion limit for the LHC searches and applying this limit as a hard cut on the parameter space favoured by the other observables. The main difference is that the capped LHC likelihood incorporates a continuous likelihood penalty.^{Footnote 13} A more detailed introduction to the capped likelihood construction can be found in Ref. [169].
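In code, the capping prescription is a one-liner:

```python
def capped_lhc_lnL_diff(lnL_signal, lnL_background):
    """Capped LHC log-likelihood difference: parameter points that fit the
    data as well as or better than the background-only hypothesis (s = 0)
    all receive 0; only points that fit worse are penalised."""
    return min(lnL_signal - lnL_background, 0.0)
```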
Below we will present some results using the capped LHC likelihood, and some using the full LHC likelihood of Eq. (42). In light of the discussion above, the two sets of results should be interpreted as answering slightly different questions: the fit results with the full LHC likelihood show which DM EFT scenario is in best agreement with the complete set of current data, and how much worse other DM EFT scenarios perform in comparison; the results with the capped LHC likelihood map out the DM EFT parameter space that is preferred by the non-collider observables and not excluded by a combination of the LHC searches.
3.6 Nuisance parameter likelihoods
In our scans we also vary a set of relevant nuisance parameters related to the DM observables and SM measurements. Most of these nuisance parameters are directly constrained by dedicated measurements, which we include through appropriate likelihood functions. In some cases, however, several conflicting measurements exist, indicating additional systematic uncertainties in the methodology. In these cases we constrain the nuisance parameters through effective likelihoods intended to give a conservative constraint on the allowed ranges. The nuisance parameters and \(3\sigma \) ranges used in this study are summarised in Table 4. We briefly cover each nuisance likelihood in turn below.
We follow the default prescription in DarkBit for the local DM density \(\rho _0\), where the likelihood is given by a log-normal distribution with central value \(\rho _0 = 0.40\) GeV cm\(^{-3}\) and error \(\sigma _{\rho _0}=0.15\) GeV cm\(^{-3}\). We scan over an asymmetric range in \(\rho _0\) to reflect the log-normal distribution; see Ref. [52] for more details.
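A sketch of such a log-normal likelihood (the exact width convention used internally by DarkBit may differ; here the relative error sets the width in log-space, and the normalisation is dropped so that the maximum is 0):

```python
import math

def rho0_lnL(rho0, central=0.40, sigma=0.15):
    """Log-normal likelihood for the local DM density, up to a constant:
    width in log-space s = ln(1 + sigma/central), maximum at rho0 = central."""
    s = math.log1p(sigma / central)
    return -0.5 * (math.log(rho0 / central) / s) ** 2
```

The asymmetry of the distribution is visible directly: values a factor of 1.5 above the central value are penalised less than values a factor of 2 below it.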
We follow the same treatment of the Milky Way halo as in the GAMBIT Higgs portal study [110]. We use Gaussian likelihoods for the parameters describing the Maxwell–Boltzmann velocity distribution, specifically the peak of the distribution, \(v_{\mathrm{peak}} = 240 \pm 8\) km s\(^{-1}\) [170], and the Galactic escape velocity, \(v_{\mathrm{esc}} = 528 \pm 25\) km s\(^{-1}\), based on Gaia data [171].
We employ a Gaussian likelihood for the running top quark mass in the \(\overline{\text {MS}}\) scheme, with central value \(m_t (m_t) = 162.9\) GeV and error 2.0 GeV [172].^{Footnote 14} The top pole mass \((m_t^\text {pole})\) is then computed using the following formula:
We use only the one-loop QCD corrections in this shift, in order to be consistent with the procedure carried out in Ref. [172]. We have checked that the above expression gives the expected result for the top pole mass and agrees well with Ref. [172].
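Keeping only the one-loop QCD term, the conversion reduces to a one-line formula (the value \(\alpha _s(m_t) \approx 0.108\) below is an illustrative input, not necessarily the value used in our scans):

```python
import math

def mt_pole_one_loop(mt_msbar=162.9, alpha_s=0.108):
    """One-loop QCD MSbar-to-pole conversion for the top mass:
       m_t^pole = m_t(m_t) * [1 + (4/3) * alpha_s(m_t) / pi]."""
    return mt_msbar * (1.0 + (4.0 / 3.0) * alpha_s / math.pi)
```

This gives \(m_t^{\text {pole}} \approx 170\) GeV for the central \(\overline{\text {MS}}\) value, consistent with the expected size of the \(\sim 4.6\%\) one-loop shift.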
For direct detection, we employ nuisance parameter likelihoods for a number of hadronic input parameters that are used to evaluate form factors at the nuclear scale. Specifically, we use a product of four Gaussian likelihoods to include the constraints on \(\sigma _{\pi N}\), \({\varDelta } s\), \(g_T^s\) and \(r_s^2\) quoted in Table 3. The remaining hadronic input parameters are fixed to the central values given in Table 3.
4 Results
We now present the results obtained from comprehensive scans of the parameter space introduced above. These scans were carried out with the differential evolution sampler Diver v1.4.0 [173], using a population of \(5 \times 10^4\) and a convergence threshold of either \(10^{-5}\) or \(3 \times 10^{-5}\). As we will analyse our scan results using profile likelihood maps, the sole aim of the scans is to map out the likelihood function in sufficient detail across the high-likelihood regions of parameter space. In particular, no statistical interpretation is associated with the density of parameter samples, and we can therefore combine samples from scans that use different metrics on the parameter space. To ensure that all parameter regions are properly explored, we perform two different types of scans:

Full: We explore DM masses up to the unitarity bound (\(5 \, \text {GeV}< m_\chi < 150\,\text {TeV}\) and \(20 \, \text {GeV}< {\Lambda }< 300 \, \text {TeV}\)).^{Footnote 15} In these scans, \(m_\chi \) and \({\Lambda }\) are scanned on a logarithmic scale, while the Wilson coefficients are scanned on both a linear and a logarithmic scale (i.e. we combine the samples from both scanning strategies to achieve a thorough exploration of the whole parameter space).

Restricted: We consider the parameter region where experimental constraints are most relevant (\(m_\chi < 500\,\text {GeV}\) and \({\Lambda }< 2 \, \text {TeV}\)). In these scans the DM mass is scanned on a linear scale, the scale \({\Lambda }\) on a logarithmic scale and the Wilson coefficients on a scale that is logarithmic on \([-4\pi ,-10^{-6}]\), linear on \([-10^{-6},10^{-6}]\) and logarithmic on \([10^{-6},4\pi ]\). This approach was found to achieve the optimum resolution of the LHC constraints while simultaneously ensuring that enough viable samples are also found for small \({\Lambda }\) when some or all of the Wilson coefficients are tightly constrained.
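Both scan types rely on Diver's differential evolution. A generic rand/1/bin update step, the core of the algorithm, can be sketched as follows (this is a schematic illustration, not Diver's actual implementation; the control parameters \(F\) and \(CR\) are illustrative):

```python
import random

def de_step(pop, loglike, f=0.7, cr=0.9):
    """One generation of rand/1/bin differential evolution: for each target
    vector, build a mutant from three distinct other members, cross over,
    and keep whichever of target/trial has the higher log-likelihood."""
    dim = len(pop[0])
    new_pop = []
    for i, target in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        j_rand = random.randrange(dim)  # ensure at least one mutated component
        trial = [a[k] + f * (b[k] - c[k])
                 if (random.random() < cr or k == j_rand) else target[k]
                 for k in range(dim)]
        new_pop.append(trial if loglike(trial) >= loglike(target) else target)
    return new_pop
```

Because the selection step never discards a better point, the best log-likelihood in the population is non-decreasing, which is what makes the resulting samples suitable for profile likelihood maps.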
All nuisance parameters are scanned on a linear scale. In the first set of scans, we fix the Wilson coefficients for all dimension-7 operators to zero, so that there are 6 model parameters and 8 nuisance parameters. The second set of scans then includes all 14 Wilson coefficients, bringing the total number of parameters up to 24.
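The hybrid scale used for the Wilson coefficients in the restricted scans can be written explicitly as a map from a unit-interval sample to a coefficient value; the equal weighting of the three segments below is an assumption for illustration:

```python
import math

def wilson_coefficient(u):
    """Map u in [0, 1] to a coefficient on a scale that is logarithmic on
    [-4*pi, -1e-6], linear on [-1e-6, 1e-6] and logarithmic on [1e-6, 4*pi]."""
    lo, hi = 1e-6, 4.0 * math.pi
    decades = math.log10(hi / lo)
    if u < 1.0 / 3.0:            # negative logarithmic branch
        t = 1.0 - 3.0 * u
        return -lo * 10.0 ** (t * decades)
    elif u <= 2.0 / 3.0:         # linear branch through zero
        t = 3.0 * u - 1.0
        return -lo + 2.0 * lo * t
    else:                        # positive logarithmic branch
        t = 3.0 * u - 2.0
        return lo * 10.0 ** (t * decades)
```

This covers the full perturbativity range \([-4\pi, 4\pi]\) while still resolving very small coefficients, which matters when some operators are tightly constrained.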
We furthermore consider a number of variations in the constraints that we include in our scans:

We perform scans where the DM particle is allowed to be a subcomponent (\(f_\chi \le 1\)) and scans where we require that the DM relic density be saturated (\(f_\chi \approx 1\)), see Sect. 3.2;

We perform scans with both the capped LHC likelihood and the full LHC likelihood (see Sect. 3.5);

When considering the full LHC likelihood, we furthermore apply two different prescriptions for imposing the EFT validity: a hard cutoff and a smooth cutoff (see Sect. 2.2).
Unless explicitly stated otherwise, our default choices for the discussion below are to allow a DM subcomponent and consider the capped LHC likelihood with a hard cutoff.
4.1 Capped LHC likelihood
Let us begin with the case that the LHC likelihood is capped, i.e. it cannot exceed the likelihood of the background-only hypothesis. We first consider only dimension-6 operators with different requirements for the DM relic density, and then also include dimension-7 operators.
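In code, the capping amounts to never letting a signal hypothesis score better than the background-only fit; a one-line sketch of the idea:

```python
def capped_loglike(signal_loglike, background_loglike):
    """Capped LHC likelihood: a DM signal may worsen the fit relative to
    the background-only hypothesis, but never improve it."""
    return min(signal_loglike, background_loglike)
```

This yields conservative exclusions that do not reward the model for fitting upward fluctuations in the data; the cap is removed in Sect. 4.2.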
4.1.1 Dimension-6 operators only (relic density upper bound)
Our main results for this case are shown in Fig. 2 in terms of the DM mass and the new physics scale \({\Lambda }\). The left panel corresponds to the full parameter range, whereas the right panel provides a closer look at the most interesting parameter region. We find a large viable parameter space but also a number of notable features. For large values of \(m_\chi \) and \({\Lambda }\), the allowed parameter space is determined by the EFT validity requirement \({\Lambda }> 2 m_\chi \) and the relic density requirement, which, combined with the perturbativity bound on the Wilson coefficients, implies an upper bound on \({\Lambda }\) for a given \(m_\chi \). These different constraints are compatible only for \(m_\chi < 150 \, \text {TeV}\), implying an upper bound on the scale of new physics of \({\Lambda }< 300 \, \text {TeV}\). This limit corresponds to the well-known unitarity bound for thermal freeze-out [65].
The zoomed-in version in the right panel reveals a number of additional features. In the top-left corner (small \(m_\chi \), large \({\Lambda }\)), there are strong constraints from the LHC, which make it impossible to satisfy the relic density requirement. These constraints become weaker as \({\Lambda }\) decreases and the EFT can only be trusted for smaller values of the missing energy. The various sharp features correspond to the points where \({\Lambda }\) crosses the boundary of a specific bin, leading to a jump in the likelihood. In our conservative approach, LHC constraints are completely absent for \({\Lambda }< 200 \, \text {GeV}\). Finally, we find that there is a slight upward fluctuation in Fermi-LAT data, which can be fitted for \(m_\chi = 5.0 \,\text {GeV}\) and \(f_\chi ^2 \langle \sigma v \rangle _0 = 1.1 \times 10^{-27} \, \text {cm}^3~ \text {s}^{-1}\).^{Footnote 16}
We emphasize that a great advantage of our approach is that we treat the new-physics scale \({\Lambda }\) as an independent parameter, which is kept explicit in Fig. 2 (rather than being profiled out like the individual Wilson coefficients). This makes it straightforward to distinguish the parameter regions where the EFT predictions can be considered robust from those where additional constraints may apply. As discussed in Sect. 2.2, the EFT is expected to be valid if \({\Lambda }\) is sufficiently greater than the largest \(p_T\) bin considered in the LHC analyses, i.e. \({\Lambda }> 1.3 \, \text {TeV}\). Conversely, for \({\Lambda }< 200 \, \text {GeV}\) we conservatively suppress constraints from the LHC, such that the viable parameter regions found in this range must be interpreted with great care. For intermediate values of \({\Lambda }\), LHC constraints are applied but may depend on the specific UV completion. Which of these parameter regions is considered most interesting depends on the specific context and is left to the reader.
A complementary perspective is provided in Fig. 3, which shows the allowed parameter regions in terms of the DM mass, the relic density and the rescaled annihilation cross-section. A number of additional features become apparent in these plots. First, for \(m_\chi \lesssim 100\,\text {GeV}\) it is impossible to saturate the observed DM relic density, \({\varOmega }_{\text {DM}} h^2 = 0.12\), due to the combined constraints from direct and indirect detection experiments. However, these constraints are suppressed for DM subcomponents, such that it is possible to have very small relic densities in this mass region. For \(m_\chi > 100 \, \text {GeV}\) (corresponding to \({\Lambda }> 200 \, \text {GeV}\)), on the other hand, constraints from the LHC become relevant, which are not suppressed for DM subcomponents. These constraints are then again relaxed for \(m_\chi \gtrsim 1 \, \text {TeV}\) as the LHC energy becomes insufficient to produce a pair of DM particles.
For \(m_\chi \lesssim 1 \, \text {TeV}\), we find that there is a direct correspondence between \({\varOmega }_\chi h^2\) and the rescaled annihilation cross-section \(f_\chi ^2 \langle \sigma v \rangle _0\). This is because the operators that induce p-wave annihilations (in particular \({\mathcal {C}}^{(6)}_2\)) are strongly constrained by the LHC and direct detection experiments, and the annihilation cross-section is therefore always dominated by the s-wave contribution. For larger DM masses, it becomes possible for the p-wave contribution to dominate the relic density calculation, such that the total annihilation cross-section is velocity-dependent and becomes tiny in the present universe. While indirect detection experiments presently cannot probe the relevant parameter space for TeV-scale DM, it is worth stressing that CTA will be able to do so for operators that induce s-wave annihilation. We illustrate this in Fig. 3 by indicating the sensitivity of CTA to a DM signal from the galactic center [136] (for simplicity based on the assumption of \(b{\bar{b}}\) final states, noting that any hadronic DM annihilation channel results in very similar gamma-ray spectra at these energies). We note that the CTA sensitivity indicated in Fig. 3 is based on assuming a standard Einasto profile as expected for WIMP DM; if the DM density in the galactic center is instead roughly constant, the sensitivity can worsen by up to about one order of magnitude [136].
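The correspondence between \({\varOmega }_\chi h^2\) and \(f_\chi ^2 \langle \sigma v \rangle _0\) can be made concrete with the familiar freeze-out rule of thumb \({\varOmega }_\chi h^2 \approx 3 \times 10^{-27}\,\text {cm}^3\,\text {s}^{-1} / \langle \sigma v \rangle \), valid for unsuppressed s-wave annihilation (a rough estimate only; the actual scans solve the full Boltzmann equation):

```python
RULE_OF_THUMB = 3e-27  # cm^3/s; normalisation of the rough freeze-out estimate

def relic_abundance(sigma_v):
    """Approximate Omega h^2 for an s-wave annihilation cross-section."""
    return RULE_OF_THUMB / sigma_v

def rescaled_sigma_v(sigma_v, omega_dm_h2=0.12):
    """Present-day rescaled cross-section f^2 <sigma v> for a subcomponent."""
    f = min(relic_abundance(sigma_v) / omega_dm_h2, 1.0)
    return f ** 2 * sigma_v
```

Note that for \(f_\chi < 1\) the rescaled cross-section decreases as \(\langle \sigma v \rangle \) increases, which is why indirect detection constraints are suppressed for subcomponents.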
Let us finally consider the allowed parameter space in terms of the Wilson coefficients. The coefficient \({\mathcal {C}}^{(6)}_1\) gives rise to spin-independent scattering, which is very strongly constrained by direct detection experiments. Thus, this coefficient is required to be so small that it cannot give a sizeable contribution to any other process. The coefficient \({\mathcal {C}}^{(6)}_4\), on the other hand, gives rise to spin-dependent interactions, for which constraints are significantly weaker. We show the allowed parameter regions for this coefficient in the left panel of Fig. 4. The observed mirror symmetry results from the fact that all experimental predictions (and hence the likelihoods) are invariant under a global sign change of all Wilson coefficients. For \({\Lambda }< 200\,\text {GeV}\), all constraints are furthermore invariant under the rescaling \({\mathcal {C}} \rightarrow \alpha ^2 {\mathcal {C}}\), \({\Lambda }\rightarrow \alpha {\Lambda }\), which explains why the allowed parameter region grows with increasing \({\Lambda }\). For \({\Lambda }> 200 \, \text {GeV}\), LHC constraints become relevant and strongly constrain the magnitude of the coefficient. Very similar results are obtained for the coefficient \({\mathcal {C}}^{(6)}_2\), which gives rise to momentum-suppressed spin-independent scattering (see Table 2).
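The rescaling invariance holds because, below the LHC threshold, every prediction depends on a dimension-6 coefficient only through the combination \({\mathcal {C}}/{\Lambda }^2\); a two-line numerical check:

```python
def effective_strength(c, lam):
    """Dimension-6 operators enter observables only via C / Lambda^2."""
    return c / lam ** 2

# C -> alpha^2 C, Lambda -> alpha Lambda leaves the prediction unchanged
c, lam, alpha = 2.0, 150.0, 3.0
assert abs(effective_strength(alpha ** 2 * c, alpha * lam)
           - effective_strength(c, lam)) < 1e-15
```

Once LHC constraints switch on around \({\Lambda }\simeq 200\,\text {GeV}\), \({\Lambda }\) itself enters the likelihood and this degeneracy is broken.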
In the right panel of Fig. 4, we show the allowed parameter region in terms of \({\mathcal {C}}^{(6)}_3\), which induces scattering that is simultaneously momentum-suppressed and spin-dependent, such that direct detection constraints are very weak. Correspondingly, we find that this coefficient is largely unconstrained for \({\Lambda }< 200\,\text {GeV}\). We also identify this coefficient as giving the main contribution for fitting the Fermi-LAT excess. For larger values of \({\Lambda }\), on the other hand, the constraints are very similar to the ones for \({\mathcal {C}}^{(6)}_{2,4}\), as the LHC has only limited sensitivity to distinguish the spin structure of the operators.
4.1.2 Dimension-6 operators only (relic density saturated)
Next we consider the case where the relic density constraint is imposed not only as an upper limit but as an actual measurement, i.e., the DM particle under consideration is required to account for all of the DM in the universe via the effective interactions that we consider. We show in Fig. 5 the allowed parameter space in the restricted \(m_\chi \)–\({\Lambda }\) plane when considering a capped LHC likelihood, i.e. the same likelihoods as in Fig. 2 apart from the modified relic density requirement. As expected from the top row of Fig. 3, it is not possible to saturate the observed relic density for \(m_\chi \lesssim 100 \, \text {GeV}\). The reason is that for such small DM masses the relic density requirement is incompatible with Fermi-LAT and CMB bounds on the annihilation cross-section for the operators that predict dominantly s-wave annihilation, and incompatible with direct detection and LHC constraints for the remaining operators.
Constraints from direct and indirect detection experiments are also responsible for the preference for larger DM masses visible in Fig. 5. In particular, the Fermi-LAT likelihood pushes the best-fit point towards the boundary \(m_\chi = 500 \, \text {GeV}\). We find the likelihood of the best-fit point to be slightly worse than for the background-only hypothesis: \(2 {\varDelta } \ln {\mathcal {L}} \equiv 2(\ln {\mathcal {L}}^{\text {best-fit}} - \ln {\mathcal {L}}^{\text {ideal}}) = -0.5\). Extending the range of the scan to larger DM masses would allow the model to fully evade the Fermi-LAT constraint. This would shift the best-fit point and the allowed parameter regions to slightly larger DM masses without changing the remaining conclusions (see also Fig. 3).
For a complementary view of the parameter space, we show in Fig. 6 the predicted number of signal events in the next-generation direct detection experiment LZ [174] as a function of the DM mass. Due to the various different operators contributing to the DM–nucleus scattering, the predicted number of signal events is a more useful quantity to consider than the DM–nucleon scattering cross-section at zero momentum transfer. The predicted number of events corresponds to nuclear recoil energies in the search window \([6 \, \text {keV}, 30 \, \text {keV}]\) and assumes an exposure of \(5.6 \times 10^6 \, \text {kg\,days}\) and 50% acceptance for nuclear recoils (see Ref. [110] for details on our implementation of LZ).
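The event number follows from folding an integrated signal rate with the quoted exposure and acceptance. In the toy arithmetic below the rate is a hypothetical number, chosen only to illustrate how a prediction of order 10 events arises:

```python
exposure = 5.6e6     # kg days (from the text)
acceptance = 0.5     # nuclear-recoil acceptance (from the text)
rate = 3.6e-6        # signal events per kg day in [6, 30] keV; hypothetical

# In the full analysis this rate is the integral of the differential recoil
# spectrum over the search window, including the nuclear form factors.
expected_events = rate * exposure * acceptance
print(expected_events)  # about 10 events
```

The same arithmetic with a rate two orders of magnitude smaller gives the \(\sim 0.1\) events quoted below for parts of the allowed region.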
We find that of the order of 10 events are predicted around the best-fit point, which requires a non-zero contribution from the operator leading to spin-independent (but momentum-suppressed) scattering. However, the predicted number of events varies significantly within the allowed region of parameter space and can be as small as 0.1 at 68% confidence level. In this case the main contribution arises from operator mixing as given in Eq. (21).^{Footnote 17} While such an event number is too low to be detected with next-generation experiments, it is still well above the neutrino background and should be observable with more ambitious future detectors such as DARWIN [175] or DarkSide-20k [176].
Another interesting approach would be to not perform a relic density calculation at all and simply assume that \(f_\chi = 1\) is achieved through some modification of early universe cosmology. In this case it would also be possible to consider \({\Lambda }< 2 m_\chi \), since the calculation of the annihilation cross-section is unnecessary. However, since none of the other likelihoods that we consider give a strong preference for a DM signal, there would then be no lower bound on the interaction strength of the DM particle, i.e. it would be possible for all Wilson coefficients to vanish simultaneously. Hence we expect all combinations of \(m_\chi \) and \({\Lambda }\) to be viable in this approach, and we do not explore this direction further in the present work.
4.1.3 Operators up to dimension 7 (relic density upper bound)
We now turn to the case where we simultaneously consider all dimension-6 operators as well as the dimension-7 operators involving DM particles and quarks or gluons introduced in Sect. 2. We remind the reader that we neglect additional dimension-7 operators involving Higgs bosons that would arise in theories respecting unbroken electroweak symmetry (which are phenomenologically irrelevant) as well as operators with derivative interactions (which largely give redundant information). Even with these restrictions our analysis requires 24-dimensional (16 model + 8 nuisance) parameter scans.
In Fig. 7, we show the allowed regions in the \(m_\chi \)–\({\Lambda }\) plane (left) and in the \(m_\chi \)–\({\varOmega }_\chi h^2\) plane (right) when using the capped LHC likelihood. As before, we find that the parameter region at small \(m_\chi \) and \({\Lambda }\) can fit the slight Fermi-LAT excess, with best-fit values \(m_\chi = 5.5\) GeV and \(f_\chi ^2 \langle \sigma v \rangle _0 = 1.9 \times 10^{-27}\) cm\(^{3}\) s\(^{-1}\).
As the inclusion of additional parameters can only increase the profile likelihood, we expect the allowed regions of parameter space to be larger than the ones found above. Interestingly, the differences between the left panel of Fig. 7 and the right panel of Fig. 2 are rather minimal. In other words, the inclusion of the 10 additional dimension-7 operators does not open up new parameter space in terms of \(m_\chi \) and \({\Lambda }\). This is of course expected for the parameter region with large \(m_\chi \) and small \({\Lambda }\) (bottom-right), which is excluded by the EFT validity constraint, but it is surprising for the region with small \(m_\chi \) and large \({\Lambda }\) (top-left), which is excluded by the combination of the LHC constraints and the relic density requirement.
The reason why this parameter space remains inaccessible is that the gluon operators are strongly constrained by the LHC for \({\Lambda }> 200 \, \text {GeV}\) and therefore cannot contribute significantly to the annihilation cross-section. The dimension-7 quark operators, on the other hand, are unconstrained by the LHC, but for \(m_\chi < m_t\) the resulting annihilation cross-section is suppressed by a factor \(m_b^2 m_\chi ^2 / {\Lambda }^6\), and is therefore too small, given the perturbativity bound on the Wilson coefficients, to keep the relic abundance below the observed value.
Comparing the right panel of Fig. 7 to the allowed parameter regions from Fig. 3 (indicated by the grey dashed lines) does however reveal a number of differences. First of all, it is now possible to saturate the relic density bound for small \(m_\chi \) (and small \({\Lambda }\)), thanks to the contributions of two operators that give suppressed signals in direct and indirect detection experiments and are therefore largely unconstrained. Moreover, for \(m_\chi > m_t\), we find that the predicted relic abundance can be substantially smaller than for the case with only dimension-6 operators, thanks to the contribution from the dimension-7 DM–quark operators. The additional freedom in the annihilation cross-section also implies that the impact of imposing a strict relic density requirement is reduced compared to the case of dimension-6 operators only, and we therefore do not discuss it in further detail here.
We emphasize that global fits with 24 free parameters are computationally quite challenging, in particular when the best-fit region is not strongly constrained by data. As a result, the contours in Fig. 7 are less smooth than for the case of dimension-6 operators only. This is particularly obvious in the right panel for DM masses around \(150\,\text {GeV}\). In this region many operators are strongly constrained by LHC data while annihilation into top quarks is kinematically forbidden. This makes it challenging to find parameter points that satisfy the relic density constraint, leading to comparatively poor sampling. We have confirmed explicitly that this is not a physical effect, i.e. the allowed parameter region should be smooth and extend to \({\varOmega }_\chi h^2 = 0.12\) everywhere.
4.2 Full LHC likelihood
4.2.1 Dimension-6 operators only (relic density upper bound)
We now move on to the case where the full (rather than capped) LHC likelihood is included in the scans. Figure 8 shows the allowed parameter regions in terms of \(m_\chi \) and \({\Lambda }\) for the case where we introduce a hard cutoff in the missing energy spectrum (left panel), and the case where we introduce a smooth cutoff (right panel), as discussed in Sect. 2.2. We see that in both cases the results differ from Fig. 2, i.e. there is a preference for higher \({\Lambda }\) values. This preference arises due to data excesses in a few high missing-energy bins in the ATLAS and CMS monojet searches.
The difference between the above two results can be understood as follows. Below the cutoff scale \({\Lambda }\), the missing energy spectrum arising from DM is harder than the background, while above it the signal is either set to zero (hard cutoff) or assumed to drop rapidly (smooth cutoff). Thus, the signal-to-background ratio is largest just below the cutoff, enabling our model to (partially) fit local excesses in the data. This is illustrated in Fig. 9, which shows the missing energy spectra for background and signal in CMS when applying different EFT validity prescriptions. As seen in the distribution of pulls in the bottom panel, the CMS search observes a couple of \(1\sigma \)–\(2\sigma \) data excesses in a few bins (purple bars). By including a DM signal prediction on top of the SM background, these excesses can be reduced, thus reducing the pulls and improving the overall fit to the data (green bars). However, unless the signal spectrum dies off sufficiently fast above the cutoff, the model will be penalized for causing larger pulls in the highest bins, as seen for instance for the unmodified signal spectrum (lightest green bars, corresponding to \(a=0\)).
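A simple power-law damping of this kind, consistent with \(a=0\) leaving the spectrum unmodified, can be coded as follows (an illustrative parametrisation; the precise form used in the scans is defined in Sect. 2.2):

```python
def smooth_cutoff_weight(met, lam, a):
    """Weight applied to a signal event with missing energy met:
    unity below the EFT scale lam, damped as (lam/met)**a above it.
    a = 0 leaves the spectrum unchanged; large a approaches the hard cutoff."""
    if met <= lam:
        return 1.0
    return (lam / met) ** a
```

Profiling over \(a\) then interpolates continuously between the unmodified spectrum and the hard-cutoff prescription.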
For the case where we impose a hard cutoff (left panel in Fig. 8), we find (at the \(1\sigma \) level) separate parameter regions preferred by the CMS analysis (\({\Lambda }\approx 700 \, \text {GeV}\)) and the ATLAS analysis (\({\Lambda }\gtrsim 1 \, \text {TeV}\)), with the overall best-fit point corresponding to the latter and being preferred relative to the background-only hypothesis by \(2 {\varDelta } \ln {\mathcal {L}} = 2.2\). When allowing for a smooth cutoff, on the other hand, the best-fit solution produces a partially improved fit to both excesses simultaneously, by suitably suppressing the signal distribution above the cutoff scale. In this case, the best-fit point has \(2 {\varDelta } \ln {\mathcal {L}} = 2.6\).^{Footnote 18} We refrain from translating these numbers into p-values, which would require extensive Monte Carlo simulations. For both choices of cutoff, the best-fit point predicts an annihilation cross-section that is slightly larger than the thermal cross-section, such that the DM particles in this case would only constitute a DM subcomponent.
We emphasise that the preference for a nonzero signal contribution is to some degree an artefact of the way in which we have implemented the EFT validity requirement. Realistic UV completions typically do not introduce sharp features in the missing energy spectrum, making it harder to fit excesses observed in individual bins. Nevertheless, our findings emphasise the need to analyse missing energy searches at the LHC in terms of specific models in order to assess whether the signal preference found in the EFT approach can be recovered (at least partially) in a more complete setting.
4.2.2 Dimension-6 operators only (relic density saturated)
We have also run scans with the full LHC likelihood while requiring the DM relic density to be saturated (see Fig. 10). We find the expected changes with respect to Fig. 8, namely that small DM masses are disfavoured. For the case of a hard cutoff, the position of the best-fit point is unaffected, while for a smooth cutoff it is pushed to slightly larger values of \(m_\chi \) and \({\Lambda }\). The respective preferences are reduced slightly to \(2 {\varDelta } \ln {\mathcal {L}} = 1.9\) and 2.0. We also find that the best-fit point requires several Wilson coefficients to be non-zero: while the LHC signal can be fitted by several of the operators, the relic density can only be reproduced with a contribution from an operator with unsuppressed s-wave annihilation, since the remaining operators either lead to suppressed annihilation rates in the early universe or are strongly constrained by direct detection (see also Table 2).
A summary of the various best-fit points from our scans with dimension-6 operators only is given in Table 5. We note that essentially all of our scans require a non-zero contribution from \({\mathcal {C}}^{(6)}_3\) at the best-fit point in order to satisfy the relic density requirement. This is an interesting finding given that this operator is present only for Dirac fermion DM but not for Majorana fermion DM. In other words, we expect our results to change considerably for the case of Majorana fermion DM. Satisfying the relic density constraint with dimension-6 operators only while evading experimental constraints will be very challenging in this case.
4.2.3 Operators up to dimension 7
In Fig. 11, we finally show the case where the full LHC likelihood is included when simultaneously considering all dimension-6 and dimension-7 operators, using either a hard cutoff (left) or profiling over possible smooth cutoffs (right). In the former case, we find that the result looks very similar to the case of dimension-6 operators only (left panel of Fig. 8), and the likelihood at the best-fit point is also very similar. In the latter case, we find that it is now possible to simultaneously accommodate the upward fluctuations in the Fermi-LAT data (as in Fig. 2) and in the LHC data (as in Fig. 8). Doing so requires a small new-physics scale \({\Lambda }\sim 80\,\text {GeV}\) together with a rather soft cutoff \(a \approx 1.7\) of the spectrum above \({\Lambda }\). The resulting best-fit point has \(2 {\varDelta } \ln {\mathcal {L}} = 2.9\), which is the highest likelihood found in any of our scans.
A closer analysis reveals that the contribution of the dimension-6 operators is in fact not necessary to accommodate the small LHC excesses, because sufficiently large contributions can also be obtained from the gluon operators. For example, one of the gluon operators is essentially unconstrained by direct detection and can induce sizeable LHC signals if its Wilson coefficient takes values close to the perturbativity bound. While it is challenging to satisfy the relic density requirement using only gluon operators, the allowed parameter space expands substantially when including a contribution from the dimension-7 DM–quark operators. As a result, the allowed regions in \(m_\chi \)–\({\Lambda }\) parameter space look very similar to the ones shown in Fig. 11 even when the Wilson coefficients of all dimension-6 operators are set to zero. For the same reason, we expect no significant difference between Dirac and Majorana DM particles in this case. This complex interplay between different operators only becomes apparent in a global analysis and would be missed when studying individual operators separately.
5 Conclusions and outlook
In this work we have presented the first global analysis of the full set of effective operators up to dimension 7 involving a Dirac fermion DM particle and quarks or gluons. Key to enabling such an analysis were a number of technical developments:

We have fully automated the calculation of direct detection constraints, including mixing under RG evolution and matching onto non-relativistic effective operators at the hadronic scale, and indirect detection constraints, including cosmological constraints on energy injection;

We have adopted a novel approach to address the issue of EFT validity at the LHC. Rather than performing a simple truncation procedure, we introduce a smooth cutoff of the predicted missing energy spectrum above \({\Lambda }\), governed by a parameter \(a\), and treat this parameter as a nuisance parameter to ensure that no artificially strong exclusions arise from the tails of the predicted distributions;

We employ highly efficient likelihood calculations and sampling algorithms that make it possible to scan over up to 24 parameters (the DM mass \(m_\chi \), the new physics scale \({\Lambda }\), 14 Wilson coefficients and 8 nuisance parameters).
In combination, these developments enable us, for the first time, to include interference effects between different operators in all parts of the analysis.
Our main result is that it is typically possible to suppress the scattering and annihilation cross-sections in the non-relativistic limit, and thereby evade direct and indirect detection constraints while satisfying the relic density requirement. Doing so does not require finely tuned cancellations or interference effects but is a direct consequence of the spin structure of the operators that we consider. The LHC, however, plays a special role, because the production of relativistic DM particles is less sensitive to the specific spin structure of the operator. As a result, we find generally strong constraints on small DM masses and large \({\Lambda }\), both for the case of dimension-6 operators only and also when including dimension-7 operators. Moreover, when allowing excesses in individual LHC bins to be fitted (rather than artificially capping the LHC likelihood), we find a slight preference for a DM signal with a relatively low new physics scale. Given that the magnitude of this excess is sensitive to the precise EFT validity prescription that we adopt, we have not attempted to quantify its significance within the EFT.
We find that it is typically not necessary to have simultaneous contributions from many different operators in order to find viable regions of parameter space. Indeed, large viable regions of parameter space are found both when we consider only dimension-6 operators and when we consider only dimension-7 operators. These sets of operators can easily be generated by integrating out a heavy mediator with spin 1 or spin 0, respectively. However, we typically do require sizeable contributions from operators that violate parity and/or CP, reflecting the pressure on the simplest WIMP models from the non-observation of a DM signal in direct and indirect detection experiments (see Ref. [110] for a similar discussion in the context of Higgs portal models).
A particularly interesting observation is that it is generally not possible to have a large hierarchy between the DM mass and the new physics scale without violating the relic density requirement. In particular, for \(m_\chi \lesssim 100 \, \text {GeV}\), constraints from the LHC require \({\Lambda }\lesssim 200 \, \text {GeV}\), meaning that the EFT is no longer valid at LHC energies and additional new degrees of freedom should be kinematically accessible. Moreover, the wellknown unitarity bound on the DM mass implies a robust upper bound on the scale of new physics of the order of \(300 \, \text {TeV}\). We also note that for masses in the TeV range CTA will have a unique chance of probing part of the currently inaccessible parameter space that is spanned between the EFT validity and the relic density constraints.
We emphasise that it is generally possible for the DM particle under consideration to constitute only a DM subcomponent (in which case, constraints from direct and indirect detection experiments are correspondingly suppressed), but large regions of viable parameter space also remain when requiring the relic density to be saturated. In future studies, it will be interesting to modify the way in which the relic density calculation is included. For example, one could consider an initial particleantiparticle asymmetry in the dark sector, which would make it possible to saturate the relic density in parameter regions that would normally predict an underabundance, while at the same time suppressing constraints from indirect detection experiments. A more radical approach would be to not perform a relic density calculation at all and simply assume that the observed relic abundance (with \(f_\chi =1\)) is achieved through some unspecified modification of standard cosmology. A detailed analysis of direct detection constraints on such a scenario is in preparation.
An exciting direction for future investigation is to embed the EFTs considered here into a more complete approach based on UV-complete (or simplified) models. Almost all of the machinery developed for the present work will also be directly applicable in this case. The main difference arises in the interpretation of the LHC signals. If the mediator of the DM interactions is kinematically accessible at LHC energies, it will be essential to not only consider the resulting changes in the missing energy spectra, but also additional signatures arising from visible decays of the mediator [177, 178] (see Ref. [179] for a recent discussion of how to connect DM EFTs and UV-complete models). Furthermore, close to the EFT validity boundary the presence of the mediator will also modify the results of the relic density calculation, thus affecting the target couplings for these signals. It will also be interesting to see to what extent the slight LHC excesses can be accommodated in such a setup.
Another important extension of the present work will be to also consider operators coupling DM to leptons as well as electroweak gauge and Higgs bosons in order to embed our approach into a framework that respects the unbroken electroweak symmetry. Given that the relevant RG evolution is known (and already implemented in DirectDM) and that the relevant annihilation crosssections and injection spectra can be calculated automatically, such an extension does not pose any conceptual difficulties regarding direct or indirect detection constraints and relic density calculations. Again, the most challenging part will be to include all relevant collider constraints (which in this case stem also from LEP). Given that these constraints are typically weaker than the corresponding ones for quarks, it will be interesting to see whether some of the conclusions found in the present work can be relaxed and additional viable parameter space opens up.
Finally, it will be very interesting to consider DM EFTs with non-trivial flavour structure, for example with couplings predominantly to the third generation. In such a setup, one generally expects sizeable flavour-changing neutral currents and hence it will be essential to connect the EFTs used to study DM to the ones employed in flavour physics. Such a study would be particularly exciting given the recently observed anomalies in various flavour observables (see e.g. Refs. [180,181,182]). Moreover, the effects of electroweak operator mixing on the direct detection bounds are expected to be much more pronounced in such scenarios.
Of course, the most important outstanding task is to collect more data that may shed light on the nature of DM. Upcoming LHC analyses will improve the sensitivity to missing energy signatures of DM, the next generation of direct detection experiments [174, 183, 184] will be able to probe substantially smaller scattering cross-sections, and ongoing [185,186,187] and planned [136] indirect detection experiments will probe the freeze-out paradigm with unprecedented precision. Our present work has shown that this effort is highly worthwhile given the wide regions of parameter space that cannot currently be excluded in a model-independent way. Reducing the vast number of viable possibilities to explain DM therefore remains a key challenge for years to come.
Data Availability Statement
This manuscript has associated data in a data repository. [Authors’ comment: see Ref. [66], the DOI being https://doi.org/10.5281/zenodo.4836397].
Notes
These constraints also ensure that the dimension-six operators do not explicitly break electroweak symmetry [76].
Note that, as per our assumptions, the Wilson coefficients are taken to be zero at the scale \({\Lambda }\) and are only generated by RG effects.
For historical reasons, the numerical code uses \(\log (m_t^2/{\Lambda }^2)\) instead of \(\log (m_Z^2/{\Lambda }^2)\). The effect on the numerical results is negligible.
Small remnant effects of the bottom and charm Yukawa coupling are taken into account below the EW scale via double weak insertions [79] that are included in the DirectDM code.
We emphasize that \(m_{\chi \chi }\) and \(E_T^{\mathrm{miss}}\) are not strongly correlated, in the sense that there are events with both \(m_{\chi \chi } > E_T^{\mathrm{miss}}\) (if the DM pair is emitted approximately in the longitudinal direction) and \(m_{\chi \chi } < E_T^{\mathrm{miss}}\) (if the two DM particles are light and approximately collinear). Since our approach does not modify the spectrum for \(E_T^{\mathrm{miss}} < {\Lambda }\), we risk overestimating the differential cross-section in this regime. However, the sensitivity of the LHC to DM EFTs typically stems from events with large \(E_T^{\mathrm{miss}}\), where our prescription is more appropriate.
We note that the explicit factor of \(m_q\) in the definition of these operators not only affects the EFT validity but also directly affects the resulting phenomenology. Hence our results cannot be easily translated to operators with non-trivial flavour structure.
For a recent review on the effects of nonstandard cosmological scenarios, see Ref. [129].
Note that, since we include uncertainties in both the relic density calculation and the Planck measurement, \({\varOmega }_\chi h^2\) can deviate slightly from 0.120 even when we require that the DM relic abundance is saturated. In this case we set \(f_\chi = {\text {min}}({\varOmega }_\chi h^2 / 0.120, 1)\), which can therefore slightly deviate from (but never exceed) unity.
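The clipping of \(f_\chi\) described above can be sketched as follows (a toy illustration of the stated formula, not code from the GAMBIT implementation):

```python
# Illustrative sketch of the rescaling described in the text: the fraction of
# DM accounted for by chi is the ratio of its predicted relic abundance to the
# Planck central value, clipped so that it never exceeds unity.
OMEGA_DM_H2_PLANCK = 0.120  # Planck central value quoted in the text

def dm_fraction(omega_chi_h2: float) -> float:
    """Return f_chi = min(Omega_chi h^2 / 0.120, 1)."""
    return min(omega_chi_h2 / OMEGA_DM_H2_PLANCK, 1.0)

print(dm_fraction(0.060))  # underabundant: f_chi = 0.5
print(dm_fraction(0.125))  # slight overshoot within uncertainties: clipped to 1.0
```

In this way \(f_\chi\) can fall slightly below unity when the predicted abundance scatters below the Planck value, but is never allowed to exceed it.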
It is noteworthy that DarkAges calculates \( f_{\text {eff}} (z) \) as a redshift-dependent function rather than the single redshift-independent coefficient \( f_{\text {eff}} \) that is implicitly assumed in Eq. (37). In order to compress the function \( f_{\text {eff}} (z) \) into this coefficient, it is convolved with a weighting function \(W(z)\) that encodes the CMB sensitivity to energy injection through s-wave annihilation as a function of redshift [145].
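A minimal numerical sketch of this compression step is given below. Both functions are invented placeholders: the real \(f_{\text{eff}}(z)\) comes from DarkAges and the real weighting function from Ref. [145]; we interpret the convolution as a normalised weighted average over redshift.

```python
import numpy as np

# Toy illustration of compressing a redshift-dependent efficiency f_eff(z)
# into a single coefficient via a normalised weighted average over redshift,
#   f_eff = sum_z f_eff(z) W(z) / sum_z W(z)  (on a uniform grid).
# Both curves below are placeholders, not actual DarkAges output.
z = np.linspace(50.0, 1500.0, 2000)
f_eff_of_z = 0.3 + 0.1 * np.exp(-z / 600.0)   # placeholder efficiency curve
W = np.exp(-((z - 600.0) / 200.0) ** 2)       # placeholder CMB sensitivity kernel

f_eff = np.sum(f_eff_of_z * W) / np.sum(W)
print(f"effective coefficient f_eff ~ {f_eff:.3f}")
```

The single number `f_eff` is then what enters the CMB energy-injection constraint in place of the full redshift-dependent function.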
We note that this combination also reduces the impact of a local \(\sim 2.5\sigma \) excess in the third-highest bin, which would otherwise strongly bias our analysis.
A practical benefit of having a continuous likelihood penalty rather than a hard cut is that it helps guide the parameter sampler towards the viable regions in the highdimensional DM EFT parameter space.
This is based on taking an average of the asymmetric uncertainty \(m_t (m_t) = 162.9^{+2.3}_{-1.6}\) GeV; see table 2 in Ref. [172].
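The symmetrisation referred to above amounts to averaging the upper and lower uncertainties (our own arithmetic sketch, not code from the paper):

```python
# Symmetrise the asymmetric uncertainty on m_t(m_t) = 162.9 +2.3/-1.6 GeV
# by taking the plain average of the upper and lower errors.
m_t_central = 162.9       # GeV
sigma_up, sigma_down = 2.3, 1.6

sigma_sym = 0.5 * (sigma_up + sigma_down)
print(f"m_t(m_t) = {m_t_central} +/- {sigma_sym:.2f} GeV")  # +/- 1.95 GeV
```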
We note that for the largest values of \(m_\chi \) and \({\Lambda }\) that we consider in these scans our approach of specifying all operators in the broken phase of electroweak symmetry and ignoring the effects of running between \(\mu = {\Lambda }\) and \(\mu = m_Z\) becomes questionable. The constraints that we obtain above the TeV scale are therefore only approximate and should be interpreted with care.
We emphasize that, although the best-fit point lies close to the boundary of the parameter space, there is no preference for even smaller values of the DM mass and hence our findings would not change when extending the scan range.
We note that this mixing effect could in principle be cancelled by contributions from additional effective operators not included in our analysis, such that even smaller event rates may be achievable.
We note that in both cases, the likelihood is very flat around the maximum and hence the precise location of the best-fit point is somewhat arbitrary.
If MadGraph and CalcHEP output is generated from a fully functional FeynRules model implementation with trivial colour structures, the only missing vertices should be fourfermion vertices.
Since GAMBIT v2.0, decaying DM is supported, such that the capability was generalised and renamed.
References
B.W. Lee, S. Weinberg, Cosmological lower bound on heavy neutrino masses. Phys. Rev. Lett. 39, 165–168 (1977)
G. Arcadi, M. Dutra et al., The waning of the WIMP? A review of models, searches, and constraints. Eur. Phys. J. C 78, 203 (2018). [arXiv:1703.07364]
R.K. Leane, T.R. Slatyer, J.F. Beacom, K.C.Y. Ng, GeV-scale thermal WIMPs: not even slightly ruled out. Phys. Rev. D 98, 023016 (2018). arXiv:1805.10305
J. Fan, M. Reece, L.T. Wang, Non-relativistic effective theory of dark matter direct detection. JCAP 1011, 042 (2010). arXiv:1008.1591
P. Agrawal, Z. Chacko, C. Kilic, R.K. Mishra, A classification of dark matter candidates with primarily spin-dependent interactions with matter. arXiv:1003.1912
A. Fitzpatrick, K.M. Zurek, Dark moments and the DAMA-CoGeNT puzzle. Phys. Rev. D 82, 075004 (2010). arXiv:1007.5325
A. Crivellin, U. Haisch, Dark matter direct detection constraints from gauge bosons loops. Phys. Rev. D 90, 115011 (2014). arXiv:1408.5046
F. D’Eramo, B.J. Kavanagh, P. Panci, You can hide but you have to run: direct detection with vector mediators. JHEP 08, 111 (2016). arXiv:1605.04917
M. Hoferichter, P. Klos, J. Menéndez, A. Schwenk, Analysis strategies for general spin-independent WIMP-nucleus scattering. Phys. Rev. D 94, 063505 (2016). arXiv:1605.08043
F. Kahlhoefer, S. Wild, Studying generalised dark matter interactions with extended halo-independent methods. JCAP 10, 032 (2016). arXiv:1607.04418
J. Goodman, M. Ibe et al., Gamma ray line constraints on effective theories of dark matter. Nucl. Phys. B 844, 55–68 (2011). arXiv:1009.0008
M. Beltran, D. Hooper, E.W. Kolb, Z.C. Krusberg, Deducing the nature of dark matter from direct and indirect detection experiments in the absence of collider signatures of new physics. Phys. Rev. D 80, 043509 (2009). arXiv:0808.3384
K. Cheung, P.Y. Tseng, T.C. Yuan, Gammaray constraints on effective interactions of the dark matter. JCAP 06, 023 (2011). arXiv:1104.5329
R. Harnik, G.D. Kribs, An effective theory of Dirac dark matter. Phys. Rev. D 79, 095007 (2009). arXiv:0810.5557
A. De Simone, A. Monin, A. Thamm, A. Urbano, On the effective operators for Dark Matter annihilations. JCAP 02, 039 (2013). arXiv:1301.1486
C. Karwin, S. Murgia, T.M.P. Tait, T.A. Porter, P. Tanedo, Dark matter interpretation of the Fermi-LAT observation toward the Galactic Center. Phys. Rev. D 95, 103005 (2017). arXiv:1612.05687
L.M. Carpenter, R. Colburn, J. Goodman, T. Linden, Indirect detection constraints on s- and t-channel simplified models of dark matter. Phys. Rev. D 94, 055027 (2016). arXiv:1606.04138
J. Abdallah et al., Simplified models for dark matter searches at the LHC. Phys. Dark Universe 9–10, 8–23 (2015). arXiv:1506.03116
F. Kahlhoefer, Review of LHC dark matter searches. Int. J. Mod. Phys. A 32, 1730006 (2017). arXiv:1702.02430
T. Alanne, F. Goertz, Extended dark matter EFT. Eur. Phys. J. C 80, 446 (2020). arXiv:1712.07626
T. Alanne, G. Arcadi, F. Goertz, V. Tenorth, S. Vogl, Model-independent constraints with extended dark matter EFT. JHEP 10, 172 (2020). arXiv:2006.07174
Y. Bai, P.J. Fox, R. Harnik, The Tevatron at the frontier of dark matter direct detection. JHEP 12, 048 (2010). arXiv:1005.3797
H. Dreiner, D. Schmeier, J. Tattersall, Contact interactions probe effective dark matter models at the LHC. EPL 102, 51001 (2013). arXiv:1303.3348
N. Zhou, D. Berge, D. Whiteson, Mono-everything: combined limits on dark matter production at colliders from multiple final states. Phys. Rev. D 87, 095013 (2013). arXiv:1302.3619
P.J. Fox, R. Harnik, R. Primulando, C.T. Yu, Taking a razor to dark matter parameter space at the LHC. Phys. Rev. D 86, 015010 (2012). arXiv:1203.1662
A. Rajaraman, W. Shepherd, T.M. Tait, A.M. Wijangco, LHC bounds on interactions of dark matter. Phys. Rev. D 84, 095013 (2011). arXiv:1108.1196
J. Goodman, M. Ibe et al., Constraints on dark matter from colliders. Phys. Rev. D 82, 116010 (2010). arXiv:1008.1783
P.J. Fox, R. Harnik, J. Kopp, Y. Tsai, Missing energy signatures of dark matter at the LHC. Phys. Rev. D 85, 056011 (2012). arXiv:1109.4398
M. Beltran, D. Hooper, E.W. Kolb, Z.A. Krusberg, T.M. Tait, Maverick dark matter at colliders. JHEP 09, 037 (2010). arXiv:1002.4137
O. Buchmueller, M.J. Dolan, C. McCabe, Beyond effective field theory for dark matter searches at the LHC. JHEP 01, 025 (2014). arXiv:1308.6799
A. Belyaev, L. Panizzi, A. Pukhov, M. Thomas, Dark matter characterization at the LHC in the effective field theory approach. JHEP 04, 110 (2017). arXiv:1610.07545
F. Pobbe, A. Wulzer, M. Zanetti, Setting limits on effective field theories: the case of dark matter. JHEP 08, 074 (2017). arXiv:1704.00736
ATLAS: G. Aad et al., Search for dark matter candidates and large extra dimensions in events with a jet and missing transverse momentum with the ATLAS detector. JHEP 04, 075 (2013). arXiv:1210.4491
CMS: S. Chatrchyan et al., Search for dark matter and large extra dimensions in monojet events in \(pp\) collisions at \(\sqrt{s}=7\) TeV. JHEP 09, 094 (2012). arXiv:1206.5663
M.R. Buckley, Asymmetric dark matter and effective operators. Phys. Rev. D 84, 043510 (2011). arXiv:1104.1429
K. Cheung, P.Y. Tseng, Y.L.S. Tsai, T.C. Yuan, Global constraints on effective dark matter interactions: relic density, direct detection, indirect detection, and collider. JCAP 1205, 001 (2012). arXiv:1201.3402
J. March-Russell, J. Unwin, S.M. West, Closing in on asymmetric dark matter I: model independent limits for interactions with quarks. JHEP 08, 029 (2012). arXiv:1203.4854
J.M. Zheng, Z.H. Yu et al., Constraining the interaction strength between dark matter and visible matter: I. Fermionic dark matter. Nucl. Phys. B 854, 350–374 (2012). arXiv:1012.2022
A. Belyaev, E. Bertuzzo et al., Interplay of the LHC and non-LHC dark matter searches in the effective field theory approach. Phys. Rev. D 99, 015006 (2019). arXiv:1807.03817
E. Bertuzzo, C.J. Caniu Barros, G. Grilli di Cortona, MeV dark matter: model independent bounds. JHEP 09, 116 (2017). arXiv:1707.00725
M. Cirelli, E. Del Nobile, P. Panci, Tools for model-independent bounds in direct dark matter searches. JCAP 10, 019 (2013). arXiv:1307.5955
J. Kumar, D. Marfatia, Matrix element analyses of dark matter scattering and annihilation. Phys. Rev. D 88, 014035 (2013). arXiv:1305.1611
C. Balázs, T. Li, J.L. Newstead, Thermal dark matter implies new physics not far above the weak scale. JHEP 08, 061 (2014). arXiv:1403.5829
S. Liem, G. Bertone et al., Effective field theory of dark matter: a global analysis. JHEP 9, 77 (2016). arXiv:1603.05994
S. Matsumoto, S. Mukhopadhyay, Y.L.S. Tsai, Singlet Majorana fermion dark matter: a comprehensive analysis in effective field theory. JHEP 10, 155 (2014). arXiv:1407.1859
M. Blennow, P. Coloma, E. Fernandez-Martinez, P.A.N. Machado, B. Zaldivar, Global constraints on vector-like WIMP effective interactions. JCAP 04, 015 (2016). arXiv:1509.01587
S. Matsumoto, S. Mukhopadhyay, Y.L.S. Tsai, Effective theory of WIMP dark matter supplemented by simplified models: singlet-like Majorana fermion case. Phys. Rev. D 94, 065034 (2016). arXiv:1604.02230
GAMBIT Collaboration: P. Athron, C. Balázs et al., GAMBIT: the global and modular beyond-the-standard-model inference tool. Eur. Phys. J. C 77, 784 (2017). arXiv:1705.07908. Addendum in [190]
M. Duerr, P. Fileviez Perez, Theory for baryon number and dark matter at the LHC. Phys. Rev. D 91, 095001 (2015). arXiv:1409.8165
E. Dudas, L. Heurtier, Y. Mambrini, B. Zaldivar, Extra U(1), effective operators, anomalies and dark matter. JHEP 11, 083 (2013). arXiv:1307.0005
M. Bauer, S. Diefenbacher, T. Plehn, M. Russell, D.A. Camargo, Dark matter in anomaly-free gauge extensions. SciPost Phys. 5, 036 (2018). arXiv:1805.01904
GAMBIT Dark Matter Workgroup: T. Bringmann, J. Conrad et al., DarkBit: a GAMBIT module for computing dark matter observables and likelihoods. Eur. Phys. J. C 77, 831 (2017). arXiv:1705.07920
A.L. Fitzpatrick, W. Haxton, E. Katz, N. Lubbers, Y. Xu, The effective field theory of dark matter direct detection. JCAP 1302, 004 (2013). arXiv:1203.3542
GAMBIT Cosmology Workgroup: J.J. Renk, P. Stöcker et al., CosmoBit: a GAMBIT module for computing cosmological observables and likelihoods. JCAP 02, 022 (2021). arXiv:2009.03286
T.E. Gonzalo, GAMBIT: the global and modular BSM inference tool, in Tools for High Energy Physics and Cosmology (2021). arXiv:2105.03165
S. Bloor, T.E. Gonzalo et al., The GAMBIT universal model machine: from Lagrangians to likelihoods. arXiv:2107.00030
I.M. Shoemaker, L. Vecchi, Unitarity and monojet bounds on models for DAMA, CoGeNT, and CRESST-II. Phys. Rev. D 86, 015023 (2012). arXiv:1112.5457
G. Busoni, A. De Simone, E. Morgante, A. Riotto, On the validity of the effective field theory for dark matter searches at the LHC. Phys. Lett. B 728, 412–421 (2014). arXiv:1307.2253
G. Busoni, A. De Simone, J. Gramling, E. Morgante, A. Riotto, On the validity of the effective field theory for dark matter searches at the LHC, part II: complete analysis for the \(s\)channel. JCAP 06, 060 (2014). arXiv:1402.1275
G. Busoni, A. De Simone, T. Jacques, E. Morgante, A. Riotto, On the validity of the effective field theory for dark matter searches at the LHC part III: analysis for the \(t\)channel. JCAP 09, 022 (2014). arXiv:1405.3101
M. Endo, Y. Yamamoto, Unitarity bounds on dark matter effective interactions at LHC. JHEP 06, 126 (2014). arXiv:1403.6610
N. Bell, G. Busoni, A. Kobakhidze, D.M. Long, M.A. Schmidt, Unitarisation of EFT amplitudes for dark matter searches at the LHC. JHEP 08, 125 (2016). arXiv:1606.02722
D. Racco, A. Wulzer, F. Zwirner, Robust collider limits on heavy-mediator Dark Matter. JHEP 05, 009 (2015). arXiv:1502.04701
S. Bruggisser, F. Riva, A. Urbano, The last gasp of dark matter effective theory. JHEP 11, 069 (2016). arXiv:1607.02475
K. Griest, M. Kamionkowski, Unitarity limits on the mass and radius of dark matter particles. Phys. Rev. Lett. 64, 615 (1990)
GAMBIT Collaboration, Supplementary data: thermal WIMPs and the scale of new physics: global fits of Dirac dark matter effective field theories (2021). https://zenodo.org/record/4836397
F. Bishara, J. Brod, B. Grinstein, J. Zupan, DirectDM: a tool for dark matter direct detection. arXiv:1708.02678
J. Brod, A. GootjesDreesbach, M. Tammaro, J. Zupan, Effective field theory for dark matter direct detection up to dimension seven. JHEP 10, 065 (2018). arXiv:1710.10218
J. Kopp, V. Niro, T. Schwetz, J. Zupan, DAMA/LIBRA and leptonically interacting Dark Matter. Phys. Rev. D 80, 083502 (2009). arXiv:0907.3159
P.J. Fox, R. Harnik, J. Kopp, Y. Tsai, LEP shines light on dark matter. Phys. Rev. D 84, 014028 (2011). arXiv:1103.0240
N. Weiner, I. Yavin, UV completions of magnetic inelastic and Rayleigh dark matter for the Fermi line(s). Phys. Rev. D 87, 023523 (2013). arXiv:1209.1093
M.T. Frandsen, U. Haisch, F. Kahlhoefer, P. Mertsch, K. SchmidtHoberg, Loopinduced dark matter direct detection signals from gammaray lines. JCAP 10, 033 (2012). arXiv:1207.3971
G. Paz, A.A. Petrov, M. Tammaro, J. Zupan, Shining dark matter in Xenon1T. Phys. Rev. D 103, L051703 (2021). https://doi.org/10.1103/PhysRevD.103.L051703. arXiv:2006.12462
B.J. Kavanagh, P. Panci, R. Ziegler, Faint light from dark matter: classifying and constraining dark matter-photon effective operators. JHEP 04, 089 (2019). arXiv:1810.00033
C. Arina, A. Cheek, K. Mimasu, L. Pagani, Light and darkness: consistently coupling dark matter to photons via effective operators. Eur. Phys. J. C 81, 223 (2021). arXiv:2005.12789
U. Haisch, F. Kahlhoefer, T.M.P. Tait, On mono-W signatures in spin-1 simplified models. Phys. Lett. B 760, 207–213 (2016). arXiv:1603.01267
R.J. Hill, M.P. Solon, Standard model anatomy of WIMP dark matter direct detection II: QCD analysis and hadronic matrix elements. Phys. Rev. D 91, 043505 (2015). arXiv:1409.8290
F. Bishara, J. Brod, B. Grinstein, J. Zupan, Renormalization group effects in dark matter interactions. JHEP 03, 089 (2020). arXiv:1809.03506
J. Brod, B. Grinstein, E. Stamou, J. Zupan, Weak mixing below the weak scale in dark-matter direct detection. JHEP 02, 174 (2018). arXiv:1801.04240
U. Haisch, F. Kahlhoefer, On the importance of loop-induced spin-independent interactions for dark matter direct detection. JCAP 1304, 050 (2013). arXiv:1302.4454
A. Crivellin, F. D’Eramo, M. Procura, New constraints on dark matter effective theories from standard model loops. Phys. Rev. Lett. 112, 191304 (2014). arXiv:1402.1173
U. Haisch, F. Kahlhoefer, J. Unwin, The impact of heavy-quark loops on LHC dark matter searches. JHEP 07, 125 (2013). arXiv:1208.4605
A. Berlin, T. Lin, L.T. Wang, Mono-Higgs detection of dark matter at the LHC. JHEP 06, 078 (2014). arXiv:1402.7074
SuperCDMS: R. Agnese et al., New results from the search for low-mass weakly interacting massive particles with the CDMS low ionization threshold experiment. Phys. Rev. Lett. 116, 071301 (2016). arXiv:1509.02448
CRESST: G. Angloher et al., Results on light dark matter particles with a low-threshold CRESST-II detector. Eur. Phys. J. C 76, 25 (2016). arXiv:1509.01515
CRESST: A.H. Abdelhameed et al., First results from the CRESST-III low-mass dark matter program. Phys. Rev. D 100, 102002 (2019). arXiv:1904.00498
P. Agnes et al., DarkSide-50 532-day dark matter search with low-radioactivity argon. Phys. Rev. D 98, 102006 (2018). https://doi.org/10.1103/PhysRevD.98.102006. arXiv:1802.07198
LUX: D.S. Akerib et al., Results from a search for dark matter in the complete LUX exposure. Phys. Rev. Lett. 118, 021303 (2017). arXiv:1608.07648
PICO: C. Amole et al., Dark matter search results from the PICO-60 C\(_3\)F\(_8\) bubble chamber. Phys. Rev. Lett. 118, 251301 (2017). arXiv:1702.07666
PICO: C. Amole et al., Dark matter search results from the complete exposure of the PICO-60 C\(_3\)F\(_8\) bubble chamber. Phys. Rev. D 100, 022001 (2019). arXiv:1902.04031
PandaX-II: A. Tan et al., Dark matter results from first 98.7 days of data from the PandaX-II experiment. Phys. Rev. Lett. 117, 121303 (2016). arXiv:1607.07400
PandaX-II: X. Cui et al., Dark matter results from 54-ton-day exposure of PandaX-II experiment. Phys. Rev. Lett. 119, 181302 (2017). arXiv:1708.06917
XENON: E. Aprile et al., Dark matter search results from a one ton-year exposure of XENON1T. Phys. Rev. Lett. 121, 111302 (2018). arXiv:1805.12562
ATLAS: G. Aad et al., Search for new phenomena in events with an energetic jet and missing transverse momentum in \(pp\) collisions at \(\sqrt{s} = 13\) TeV with the ATLAS detector. arXiv:2102.10874
CMS: A.M. Sirunyan et al., Search for new physics in final states with an energetic jet or a hadronically decaying \(W\) or \(Z\) boson and transverse momentum imbalance at \(\sqrt{s}=13\,\text{TeV}\). Phys. Rev. D 97, 092005 (2018). arXiv:1712.02345
Fermi-LAT: M. Ackermann et al., Searching for dark matter annihilation from Milky Way dwarf spheroidal galaxies with six years of Fermi large area telescope data. Phys. Rev. Lett. 115, 231301 (2015). arXiv:1503.02641
IceCube Collaboration: M.G. Aartsen et al., Improved limits on dark matter annihilation in the Sun with the 79-string IceCube detector and implications for supersymmetry. JCAP 04, 022 (2016). arXiv:1601.00653
Planck: N. Aghanim et al., Planck 2018 results. VI. Cosmological parameters. Astron. Astrophys. 641, A6 (2020). arXiv:1807.06209
N. Anand, A.L. Fitzpatrick, W.C. Haxton, Weakly interacting massive particlenucleus elastic scattering response. Phys. Rev. C 89, 065501 (2014). arXiv:1308.6288
F. Bishara, J. Brod, B. Grinstein, J. Zupan, Chiral effective theory of dark matter direct detection. JCAP 1702, 009 (2017). arXiv:1611.00368
F. Bishara, J. Brod, B. Grinstein, J. Zupan, From quarks to nucleons in dark matter direct detection. JHEP 11, 059 (2017). arXiv:1707.06998
Particle Data Group: P.A. Zyla et al., Review of particle physics. Prog. Theor. Exp. Phys. 2020, 083C01 (2020)
A. Crivellin, M. Hoferichter, M. Procura, Accurate evaluation of hadronic uncertainties in spin-independent WIMP-nucleon scattering: disentangling two- and three-flavor effects. Phys. Rev. D 89, 054021 (2014). arXiv:1312.4951
D. Djukanovic, K. Ottnad, J. Wilhelm, H. Wittig, Strange electromagnetic form factors of the nucleon with \(N_f = 2 + 1\) \({\cal{O}}(a)\)-improved Wilson fermions. Phys. Rev. Lett. 123, 212001 (2019). arXiv:1903.12566
R.S. Sufian, Y.B. Yang et al., Strange quark magnetic moment of the nucleon at the physical point. Phys. Rev. Lett. 118, 042001 (2017). arXiv:1606.07075
R. Gupta, B. Yoon et al., Flavor diagonal tensor charges of the nucleon from (2 + 1 + 1)flavor lattice QCD. Phys. Rev. D 98, 091501 (2018). arXiv:1808.07597
Flavour Lattice Averaging Group: S. Aoki et al., FLAG review 2019: Flavour Lattice Averaging Group (FLAG). Eur. Phys. J. C 80, 113 (2020). arXiv:1902.08191
J. Liang, Y.B. Yang, T. Draper, M. Gong, K.F. Liu, Quark spins and anomalous Ward identity. Phys. Rev. D 98, 074505 (2018). arXiv:1806.08366
B. Pasquini, M. Pincetti, S. Boffi, Chiral-odd generalized parton distributions in constituent quark models. Phys. Rev. D 72, 094029 (2005). arXiv:hep-ph/0510376
GAMBIT Collaboration: P. Athron et al., Global analyses of Higgs portal singlet dark matter models using GAMBIT. Eur. Phys. J. C 79, 38 (2019). arXiv:1808.10465
QCDSFUKQCD: R. Horsley, Y. Nakamura et al., Hyperon sigma terms for 2 + 1 quark flavours. Phys. Rev. D 85, 034506 (2012). arXiv:1110.4971
S. Durr et al., Lattice computation of the nucleon scalar quark contents at the physical point. Phys. Rev. Lett. 116, 172001 (2016). arXiv:1510.08013
xQCD: Y.B. Yang, A. Alexandru, T. Draper, J. Liang, K.F. Liu, \(\pi \)N and strangeness sigma terms at the physical point with chiral fermions. Phys. Rev. D 94, 054503 (2016). arXiv:1511.09089
ETM: A. AbdelRehim, C. Alexandrou et al., Direct evaluation of the quark content of nucleons from lattice QCD at the physical point. Phys. Rev. Lett. 116, 252001 (2016). arXiv:1601.01624
RQCD: G.S. Bali, S. Collins et al., Direct determinations of the nucleon and pion terms at nearly physical quark masses. Phys. Rev. D 93, 094504 (2016). arXiv:1603.00827
C. Alexandrou, S. Bacchio et al., Nucleon axial, tensor, and scalar charges and \(\sigma \)-terms in lattice QCD. Phys. Rev. D 102, 054517 (2020). arXiv:1909.00485
JLQCD: N. Yamanaka, S. Hashimoto, T. Kaneko, H. Ohki, Nucleon charges with dynamical overlap fermions. Phys. Rev. D 98, 054516 (2018). arXiv:1805.10507
S. Borsanyi, Z. Fodor et al., Ab initio calculation of the proton and the neutron’s scalar couplings for new physics searches. arXiv:2007.03319
J.M. Alarcon, J. Martin Camalich, J.A. Oller, The chiral representation of the \(\pi N\) scattering amplitude and the pion-nucleon sigma term. Phys. Rev. D 85, 051503 (2012). arXiv:1110.3797
M. Hoferichter, J. Ruiz de Elvira, B. Kubis, U.G. Meissner, High-precision determination of the pion-nucleon \(\sigma \) term from Roy–Steiner equations. Phys. Rev. Lett. 115, 092301 (2015). arXiv:1506.04142
V. Dmitrašinović, H.X. Chen, A. Hosaka, Baryon fields with \(U_L(3) \times U_R(3)\) chiral symmetry. V. Pion-nucleon and kaon-nucleon \({\varSigma }\) terms. Phys. Rev. C 93, 065208 (2016). arXiv:1812.03414
J. Ruiz de Elvira, M. Hoferichter, B. Kubis, U.G. Meissner, Extracting the \(\sigma \) term from low-energy pion-nucleon scattering. J. Phys. G 45, 024001 (2018). arXiv:1706.01465
E. Friedman, A. Gal, The pion-nucleon \({\sigma }\) term from pionic atoms. Phys. Lett. B 792, 340–344 (2019). arXiv:1901.03130
P. Gondolo, G. Gelmini, Cosmic abundances of stable particles: improved analysis. Nucl. Phys. A 360, 145–179 (1991)
T. Binder, T. Bringmann, M. Gustafsson, A. Hryczuk, Early kinetic decoupling of dark matter: when the standard way of calculating the thermal relic density fails. Phys. Rev. D 96, 115010 (2017). arXiv:1706.07433
D.E. Kaplan, M.A. Luty, K.M. Zurek, Asymmetric dark matter. Phys. Rev. D 79, 115016 (2009). arXiv:0901.4117
A. Pukhov, CalcHEP 2.3: MSSM, structure functions, event generation, batchs, and generation of matrix elements for other packages. arXiv:hepph/0412191
A. Belyaev, N.D. Christensen, A. Pukhov, CalcHEP 3.4 for collider physics within and beyond the Standard Model. Comput. Phys. Commun. 184, 1729–1769 (2013). arXiv:1207.6082
A. Arbey, F. Mahmoudi, Dark matter and the early Universe: a review. Prog. Part. Nucl. Phys. 119, 103865 (2021). arXiv:2104.11488
T. Bringmann, J. Edsjö, P. Gondolo, P. Ullio, L. Bergström, DarkSUSY 6: an advanced tool to compute dark matter properties numerically. JCAP 1807, 033 (2018). arXiv:1802.03399
P. Gondolo, J. Edsjo et al., DarkSUSY: computing supersymmetric dark matter properties numerically. JCAP 0407, 008 (2004). arXiv:astroph/0406204
N.F. Bell, Y. Cai, A.D. Medina, Coannihilating dark matter: effective operator analysis and collider phenomenology. Phys. Rev. D 89, 115001 (2014). arXiv:1311.6169
M.J. Baker et al., The coannihilation codex. JHEP 12, 120 (2015). arXiv:1510.03434
T. Bringmann, C. Weniger, Gamma ray signals from dark matter: concepts, status and prospects. Phys. Dark Universe 1, 194–217 (2012). arXiv:1208.5481
Fermi-LAT: M. Ackermann et al., The Fermi Galactic Center GeV excess and implications for dark matter. Astrophys. J. 840, 43 (2017). arXiv:1704.03910
CTA: A. Acharyya et al., Sensitivity of the Cherenkov Telescope Array to a dark matter signal from the Galactic Centre. JCAP 01, 057 (2021). arXiv:2007.16129
SuperKamiokande: K. Choi et al., Search for neutrinos from annihilation of captured lowmass dark matter particles in the Sun by SuperKamiokande. Phys. Rev. Lett. 114, 141301 (2015). arXiv:1503.04858
IceCube: M.G. Aartsen et al., Search for annihilating dark matter in the Sun with 3 years of IceCube data. Eur. Phys. J. C 77, 146 (2017). arXiv:1612.05949 [Erratum: Eur. Phys. J. C 79, 214 (2019)]
N. Avis Kozar, A. Caddell, L. FraserLeach, P. Scott, A.C. Vincent, Capt’n General: a generalized stellar dark matter capture and heat transport code (2021). arXiv:2105.06810
R. Catena, B. Schwabe, Form factors for dark matter capture by the Sun in effective theories. JCAP 04, 042 (2015). arXiv:1501.03729
N. Vinyoles, A.M. Serenelli et al., A new generation of standard solar models. Astrophys. J. 835, 202 (2017). arXiv:1611.09867
M. Asplund, N. Grevesse, A.J. Sauval, P. Scott, The chemical composition of the Sun. ARA&A 47, 481–522 (2009). arXiv:0909.0948
IceCube Collaboration: M.G. Aartsen, R. Abbasi et al., Search for dark matter annihilations in the Sun with the 79-string IceCube detector. Phys. Rev. Lett. 110, 131302 (2013). arXiv:1212.4097
P. Scott, C. Savage, J. Edsjö, The IceCube Collaboration: R. Abbasi et al., Use of event-level neutrino telescope data in global fits for theories of new physics. JCAP 11, 57 (2012). arXiv:1207.0810
T.R. Slatyer, Indirect dark matter signatures in the cosmic dark ages. I. Generalizing the bound on s-wave dark matter annihilation from Planck results. Phys. Rev. D 93, 023527 (2016). arXiv:1506.03811
T.R. Slatyer, Indirect dark matter signatures in the cosmic dark ages II. Ionization, heating and photon production from arbitrary energy injections. Phys. Rev. D 93, 023521 (2016). arXiv:1506.03812
P. Stöcker, M. Krämer, J. Lesgourgues, V. Poulin, Exotic energy injection with ExoCLASS: application to the Higgs portal model and evaporating black holes. JCAP 1803, 018 (2018). arXiv:1801.01871
Planck: N. Aghanim et al., Planck 2018 results. V. CMB power spectra and likelihoods. Astron. Astrophys. 641, A5 (2020). arXiv:1907.12875
F. Beutler, C. Blake et al., The 6dF Galaxy Survey: baryon acoustic oscillations and the local Hubble constant. MNRAS 416, 3017–3032 (2011). arXiv:1106.3366
A.J. Ross, L. Samushia et al., The clustering of the SDSS DR7 main Galaxy sample—I. A 4 per cent distance measure at z = 0.15. MNRAS 449, 835–847 (2015). arXiv:1409.3242
BOSS: S. Alam et al., The clustering of galaxies in the completed SDSSIII Baryon Oscillation Spectroscopic Survey: cosmological analysis of the DR12 galaxy sample. MNRAS 470, 2617–2652 (2017). arXiv:1607.03155
J. Kopp, Constraints on dark matter annihilation from AMS-02 results. Phys. Rev. D 88, 076013 (2013). arXiv:1304.1184
L. Bergström, T. Bringmann, I. Cholis, D. Hooper, C. Weniger, New limits on dark matter annihilation from AMS cosmic ray positron data. Phys. Rev. Lett. 111, 171101 (2013). arXiv:1306.3983
A. Ibarra, A.S. Lamperstorfer, J. Silk, Dark matter annihilations and decays after the AMS-02 positron measurements. Phys. Rev. D 89, 063539 (2014). arXiv:1309.2570
L. Bergstrom, J. Edsjo, P. Ullio, Cosmic antiprotons as a probe for supersymmetric dark matter? Astrophys. J. 526, 215–235 (1999). arXiv:astroph/9902012
T. Bringmann, P. Salati, The galactic antiproton spectrum at high energies: background expectation vs. exotic contributions. Phys. Rev. D 75, 083006 (2007). arXiv:astroph/0612514
A. Cuoco, M. Krämer, M. Korsmeier, Novel dark matter constraints from antiprotons in light of AMS-02. Phys. Rev. Lett. 118, 191102 (2017). arXiv:1610.03071
J. Heisig, M. Korsmeier, M.W. Winkler, Dark matter or correlated errors: systematics of the AMS-02 antiproton excess. Phys. Rev. Res. 2, 043017 (2020). arXiv:2005.04237
M. Boudaud, Y. Génolini et al., AMS-02 antiprotons’ consistency with a secondary astrophysical origin. Phys. Rev. Res. 2, 023022 (2020). arXiv:1906.07119
G. Jóhannesson et al., Bayesian analysis of cosmic-ray propagation: evidence against homogeneous diffusion. Astrophys. J. 824, 16 (2016). arXiv:1602.02243
M. Bauer, M. Klassen, V. Tenorth, Universal properties of pseudoscalar mediators in dark matter extensions of 2HDMs. JHEP 07, 107 (2018). arXiv:1712.06597
A.J. Brennan, M.F. McDonald, J. Gramling, T.D. Jacques, Collide and conquer: constraints on simplified dark matter models using mono-X collider searches. JHEP 05, 112 (2016). arXiv:1603.01366
A. Alloul, N.D. Christensen, C. Degrande, C. Duhr, B. Fuks, FeynRules 2.0—a complete toolbox for tree-level phenomenology. Comput. Phys. Commun. 185, 2250–2300 (2014). arXiv:1310.1921
J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer, T. Stelzer, MadGraph 5: going beyond. JHEP 06, 128 (2011). arXiv:1106.0522
T. Sjostrand, S. Mrenna, P.Z. Skands, A brief introduction to PYTHIA 8.1. Comput. Phys. Commun. 178, 852–867 (2008). arXiv:0710.3820
DELPHES 3: J. de Favereau, C. Delaere et al., DELPHES 3, a modular framework for fast simulation of a generic collider experiment. JHEP 02, 057 (2014). arXiv:1307.6346
CMS Collaboration, Simplified likelihood for the reinterpretation of public CMS results. CMS-NOTE-2017-001 (2017)
GAMBIT Collider Workgroup: C. Balázs, A. Buckley et al., ColliderBit: a GAMBIT module for the calculation of high-energy collider observables and likelihoods. Eur. Phys. J. C 77, 795 (2017). arXiv:1705.07919
GAMBIT Collaboration: P. Athron et al., Combined collider constraints on neutralinos and charginos. Eur. Phys. J. C 79, 395 (2019). arXiv:1809.02097
M.J. Reid et al., Trigonometric parallaxes of high mass star forming regions: the structure and kinematics of the Milky Way. Astrophys. J. 783, 130 (2014). arXiv:1401.5377
A.J. Deason, A. Fattahi et al., The local high-velocity tail and the galactic escape speed. MNRAS 485, 3514–3526 (2019). arXiv:1901.02016
ATLAS: G. Aad et al., Measurement of the top-quark mass in \(t{\bar{t}}+1\)-jet events collected with the ATLAS detector in \(pp\) collisions at \(\sqrt{s}=8\) TeV. JHEP 11, 150 (2019). arXiv:1905.02302
GAMBIT Scanner Workgroup: G.D. Martinez, J. McKay et al., Comparison of statistical sampling methods with ScannerBit, the GAMBIT scanning module. Eur. Phys. J. C 77, 761 (2017). arXiv:1705.07959
LUX-ZEPLIN: D.S. Akerib et al., Projected WIMP sensitivity of the LUX-ZEPLIN dark matter experiment. Phys. Rev. D 101, 052002 (2020). arXiv:1802.06039
DARWIN: J. Aalbers et al., DARWIN: towards the ultimate dark matter detector. JCAP 11, 017 (2016). arXiv:1606.07001
C.E. Aalseth et al., DarkSide-20k: a 20 tonne two-phase LAr TPC for direct dark matter detection at LNGS. Eur. Phys. J. Plus 133, 131 (2018). arXiv:1707.08145
M. Chala, F. Kahlhoefer, M. McCullough, G. Nardini, K. SchmidtHoberg, Constraining dark sectors with monojets and dijets. JHEP 07, 089 (2015). arXiv:1503.05916
M. Fairbairn, J. Heal, F. Kahlhoefer, P. Tunney, Constraints on Z’ models from LHC dijet searches and implications for dark matter. JHEP 09, 018 (2016). arXiv:1605.07940
I. Bischer, T. Plehn, W. Rodejohann, Dark matter EFT, the third – neutrino WIMPs. SciPost Phys. 10, 039 (2021). arXiv:2008.04718
R. Barbieri, A view of flavour physics in 2021. Acta Phys. Polon. B 52, 789 (2021). https://doi.org/10.5506/APhysPolB.52.789. arXiv:2103.15635
ATLAS, CMS, LHCb: E. Graverini, Flavour anomalies: a review. J. Phys. Conf. Ser. 1137, 012025 (2019). arXiv:1807.11373
LHCb: R. Aaij et al., Test of lepton universality in beauty-quark decays. arXiv:2103.11769
PandaX: H. Zhang et al., Dark matter direct search sensitivity of the PandaX-4T experiment. Sci. China Phys. Mech. Astron. 62, 31011 (2019). arXiv:1806.02229
XENON: E. Aprile et al., Projected WIMP sensitivity of the XENONnT dark matter experiment. JCAP 11, 031 (2020). arXiv:2007.08796
MAGIC, Fermi-LAT: M.L. Ahnen et al., Limits to dark matter annihilation cross-section from a combined analysis of MAGIC and Fermi-LAT observations of dwarf satellite galaxies. JCAP 02, 039 (2016). arXiv:1601.06590
H.E.S.S.: H. Abdallah et al., Search for dark matter annihilations towards the inner Galactic halo from 10 years of observations with H.E.S.S. Phys. Rev. Lett. 117, 111301 (2016). arXiv:1607.08142
AMS: M. Aguilar et al., The Alpha Magnetic Spectrometer (AMS) on the international space station: part II—results from the first seven years. Phys. Rep. 894, 1–116 (2021)
P. Scott, Pippi—painless parsing, post-processing and plotting of posterior and likelihood samples. Eur. Phys. J. Plus 127, 138 (2012). arXiv:1206.2245
A. Semenov, LanHEP: a package for the automatic generation of Feynman rules in field theory. Version 3.0. Comput. Phys. Commun. 180, 431–454 (2009). arXiv:0805.0555
GAMBIT Collaboration: P. Athron, C. Balázs et al., GAMBIT: the global and modular beyond-the-standard-model inference tool. Addendum for GAMBIT 1.1: Mathematica backends, SUSYHD interface and updated likelihoods. Eur. Phys. J. C 78, 98 (2018). arXiv:1705.07908. Addendum to [48]
Acknowledgements
We thank all members of the GAMBIT community as well as Fady Bishara for discussions and checks. For computing, we thank PRACE for awarding us access to Marconi at CINECA and Joliot-Curie at CEA. This project was also undertaken with the assistance of resources and services from the National Computational Infrastructure, which is supported by the Australian Government. We thank Astronomy Australia Limited for financial support of computing resources, and the Astronomy Supercomputer Time Allocation Committee for its generous grant of computing time. We thank Juan Fuster, Adrián Irles, Davide Melini and Marcel Vos for clarifications regarding Ref. [172]. PA is supported by the Australian Research Council Future Fellowship grant FT160100274, and PA, CB, TEG and MW also acknowledge support from ARC Discovery Project DP180102209. NAK and ACV are supported by the Arthur B. McDonald Canadian Astroparticle Physics Research Institute and NSERC, with equipment funded by the Canada Foundation for Innovation and the Province of Ontario, and supported by the Queen’s Centre for Advanced Computing. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science, and Economic Development, and by the Province of Ontario. AB acknowledges support by F.N.R.S. through the F.6001.19 convention, JB and JZ by DOE grant DE-SC0011784, BF by the Horizon 2020 Marie Skłodowska-Curie actions (EU; H2020-MSCA-IF-2016-752162), WH by a Royal Society University Research Fellowship, FK, TEG and PSt from the DFG Emmy Noether Grant No. KA 4662/1-1 and Grant 396021762 – TRR 257, JJR by Katherine Freese through a grant from the Swedish Research Council (Contract No. 638-2013-8993). MTP is supported by the Argelander Starter-Kit Grant of the University of Bonn and BMBF Grant No. 05H19PDKB1. AF is supported by an NSFC Research Fund for International Young Scientists grant 11950410509.
PS acknowledges funding support from the Australian Research Council under Future Fellowship FT190100814. MW and AS are further supported by the Australian Research Council under Centre of Excellence CE200100008. This article made use of pippi v2.1 [188].
Appendices
Appendix A: DirectDM interface
We briefly describe the GAMBIT interface to the new backend DirectDM, its interface to DDCalc, and how to interface a new model to DirectDM. For more background on the technical aspects of the GAMBIT framework, please refer to the original GAMBIT manual [48] and the GUM papers [55, 56].
DirectDM matches Wilson coefficients of a relativistic EFT onto a nonrelativistic EFT valid at the nuclear scale. The GAMBIT implementation interfaces with the Python version of this package.
Relativistic Wilson coefficients can be defined at the 3-, 4- or 5-flavour quark scale, via the capability . For a given model, a new module function providing this capability should be written, returning the type (). Once this capability has been fulfilled, GAMBIT uses the module function to call the DirectDM backend via the convenience function . This provides the capability , which can be connected to the DDCalc backend.
This module function providing the capability depends on the capability , of native GAMBIT type . supplies the particle information about the WIMP candidate, such as its spin, mass, and whether or not it is self-conjugate, extracted from the particle database and either the spectrum or the model parameters.
As an example, consider a simplified model in which a vector mediator \(V\) governs the interaction between \(d\)-type quarks and a fermionic DM candidate \(\chi \), with an interaction Lagrangian of the form
$$\begin{aligned} {\mathcal {L}}_{\mathrm{int}} = V_\mu \left( g_\chi {\bar{\chi }} \gamma ^\mu \chi + g_b {\bar{b}} \gamma ^\mu b \right) . \end{aligned}$$
The model implementation within GAMBIT will contain four free parameters: the couplings \(g_\chi \) and \(g_b\), the DM mass \(m_\chi \), and the mediator mass \(m_V\). The model definition for the above simplified model looks like:
The information about the WIMP properties should be added to the particle database, if it does not exist already, in the following format
and the module function should be modified accordingly, adding the current model as allowed
and providing a source for the mass of the DM candidate, in this case from the model parameters, as
If we integrate out the mediator in Eq. (45), the interaction term becomes
$$\begin{aligned} {\mathcal {L}}_{\mathrm{eff}} = \frac{g_\chi g_b}{m_V^2} \left( {\bar{\chi }} \gamma ^\mu \chi \right) \left( {\bar{b}} \gamma _\mu b \right) . \end{aligned}$$
The operator in DirectDM corresponding to this interaction is . We identify the relevant coefficient to pass to DirectDM as \(g_\chi g_b / m_V^2\). This is simply implemented in DarkBit by the following source code:
plus a new matching entry in ,
For a full definition of the operator basis used in DirectDM, we refer the reader to Refs. [67, 68].
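To make the numerical content of the matching step above concrete, the coefficient \(g_\chi g_b / m_V^2\) can be checked with a few lines of Python. The function name and example values are ours, for illustration only; they are not part of DarkBit or DirectDM:

```python
def c_vv(g_chi, g_b, m_v):
    """Tree-level matching coefficient of the four-fermion operator
    (chibar gamma^mu chi)(bbar gamma_mu b), obtained by integrating
    out a heavy vector mediator of mass m_v (in GeV)."""
    return g_chi * g_b / m_v**2

# For order-one couplings and a TeV-scale mediator, the Wilson
# coefficient is of order 1e-6 GeV^-2:
print(c_vv(1.0, 1.0, 1000.0))
```

The inverse-square dependence on the mediator mass is what ties the size of the Wilson coefficients to the scale of new physics in the EFT.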
When DirectDM is used, the user must also scan over the model , which contains (nuisance) parameters used in the matching and running routines in DirectDM. These are defined in Table 1 of Ref. [67]. We provide a YAML file containing the default values used in DirectDM (see the file in the $ directory).
Appendix B: UFO to CalcHEP
ufo_to_mdl is a simple Python tool distributed with GAMBIT v2.1 and above, and is integrated into the GUM framework. It is located at $. It can also be run as a standalone tool, using either Python 2 or Python 3. Below we briefly describe the motivation for ufo_to_mdl and how to use it.
The purpose of ufo_to_mdl is to generate CalcHEP input (.mdl files) from UFO files. The motivation for this tool’s creation is that FeynRules does not generate four-fermion CalcHEP output, although it can create such output for MadGraph. In fact, at the time of writing, LanHEP [189] is the only package that supports automatic generation of four-fermion contact interactions for CalcHEP files. ufo_to_mdl allows the user to study four-fermion interactions using CalcHEP (and correspondingly, micrOMEGAs), effectively creating a pathway from FeynRules to CalcHEP for effective theories of this kind. In the context of GAMBIT and the GUM pipeline, ufo_to_mdl allows the user to study EFTs of DM using the routines provided by micrOMEGAs and CalcHEP inside the GAMBIT framework, such as relic density calculations, direct detection rates, and indirect detection via the Process Catalogue (see the DarkBit manual [52] for details).
Usage of ufo_to_mdl is straightforward. There are two modes ufo_to_mdl can be operated in: comparison mode and conversion mode. The mode integrated into the GUM pipeline is the comparison mode, which compares two directories containing .ufo and .mdl files generated by FeynRules:
This ensures that all vertices in the MadGraph files are present in the CalcHEP files. ufo_to_mdl does not explicitly check that the vertex functions and Lorentz indices are in agreement; it solely checks the particle content of the vertices. If there are vertices missing from the CalcHEP files,^{Footnote 19} ufo_to_mdl generates these vertices and writes a set of corrected CalcHEP files to a new directory .
In the case of four-fermion operators, ufo_to_mdl adds an additional auxiliary field to the particle content, and creates two 3-field interactions by way of this new auxiliary mediator particle, following the prescription described in Chapter 8 of the CalcHEP manual [128]. An auxiliary field has no momentum dependence and serves only to split the vertex into a form that CalcHEP can use. The order of fields generated by ufo_to_mdl will be identical to that in the MadGraph files, i.e. a vertex
would be broken up into two vertices,
where \({\varGamma }_\chi \) is a generic Dirac structure contracted with the field \(\chi \), and \(\phi \) is the auxiliary field, with Lorentz indices corresponding to \({\varGamma }\) (either scalar, vector or tensor). As a result, operators in FeynRules files should be written pairwise.
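The splitting prescription above can be sketched in a few lines of Python. The function and the particle-name convention here are hypothetical illustrations, not ufo_to_mdl's actual internals:

```python
def split_four_fermion(vertex, aux="~AUX"):
    """Split a four-field vertex (f1, f2, f3, f4) into two three-field
    vertices connected by an auxiliary field, preserving the original
    field order. The conjugate auxiliary field closes the second vertex."""
    if len(vertex) != 4:
        raise ValueError("expected a four-field vertex")
    f1, f2, f3, f4 = vertex
    return [(f1, f2, aux), (f3, f4, aux + "~")]

# A chi-chi-b-b contact vertex becomes two three-point vertices:
print(split_four_fermion(("chi~", "chi", "b~", "b")))
```

Because the auxiliary propagator carries no momentum dependence, the product of the two three-point vertices reproduces the original contact interaction.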
As noted above, ufo_to_mdl can also be used as a standalone tool independent of the GUM pipeline. Running ufo_to_mdl in conversion mode, with only a directory containing MadGraph files as input,
will generate .mdl files from scratch and save them in a new directory with name . The version of ufo_to_mdl released with GAMBIT v2.1 does not support nontrivial colour structures and will throw an error if it is asked to generate a vertex with implicit colour structure.
Appendix C: CMB energy injection
In order to provide CMB constraints from energy injection through decays and annihilation of DM, the yields dN/dE of photons, positrons and electrons produced in these processes need to be known. With GAMBIT v2.1, the existing capabilities for the calculation of photon yields (^{Footnote 20}) were generalised and capabilities that calculate the yields of positrons () and electrons () were introduced. To support future analyses of charged cosmic rays, we also introduced the capabilities and that calculate the yields of antiprotons and antideuterons, respectively. These capabilities are, however, not used for the CMB energy injection calculations.
Once the yields are known, they need to be passed to DarkAges via the capability to derive the effective efficiency function \( f_{\mathrm{eff}} (z) \). For maximal flexibility, we have implemented the function that automatically provides the inputs for DarkAges based on the model-dependent , and the yields for photons, electrons and positrons. Once these capabilities have been provided, no further input from the user is needed.
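Schematically, the effective efficiency is an energy-weighted average of per-species deposition efficiencies over the injection spectra, normalised to the total injected energy (\(2 m_\chi \) per annihilation). The following Python sketch only illustrates this structure with toy transfer functions; the actual redshift-dependent computation is performed internally by DarkAges:

```python
import numpy as np

def integrate(y, x):
    """Trapezoidal integration on a grid (kept explicit for clarity)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def f_eff_toy(energies, yields, transfer, m_chi):
    """Toy effective efficiency: sum over species s of
    int dE E (dN/dE)_s T_s(E), divided by the injected energy 2*m_chi.
    'yields' and 'transfer' map species names to arrays on 'energies'."""
    deposited = sum(
        integrate(energies * dnde * transfer[s], energies)
        for s, dnde in yields.items()
    )
    return deposited / (2.0 * m_chi)
```

With a flat spectrum and perfect deposition (\(T_s = 1\)), all injected energy is deposited and the toy efficiency is unity.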
To enable CMB energy injection constraints, the user also needs to declare that the model in question can be mapped to one of the energy injection “flag” models ( or ) and their parameters. This can be done via a friend relationship to the appropriate “flag” model.
Assuming that the model under consideration contains annihilating DM particles, the user has to define a relation to , and its two parameters and . It is important to note that the model implicitly assumes that the DM particle constitutes all of DM (\( f_\chi =1 \)) and that it is self-conjugate. If the particle is not self-conjugate, the parameter needs to be rescaled by \( \kappa =1/2 \). Likewise, if the DM candidate does not constitute all of DM, needs to be rescaled by \( f_\chi ^2 \).
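The two rescalings above can be summarised in a short Python sketch (a hypothetical helper for illustration, not GAMBIT code):

```python
def effective_sigmav(sigmav, f_chi=1.0, self_conjugate=True):
    """Rescale the annihilation cross-section passed to the CMB
    energy-injection model: kappa = 1/2 for a non-self-conjugate DM
    candidate, and an overall factor f_chi^2 when chi makes up only
    a fraction f_chi of the observed DM abundance."""
    kappa = 1.0 if self_conjugate else 0.5
    return kappa * f_chi**2 * sigmav
```

For example, a Dirac (non-self-conjugate) candidate with the canonical thermal cross-section injects half the energy that the flag model would otherwise assume.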
To define the translation function, the user has to make sure that the definition of the model is known, i.e. the following header is included:
Furthermore, the translation function and its dependencies need to be defined by including the following lines to the definition of the model in question:
Note that this definition makes use of the capability, described in App. A, in order to obtain the mass of the DM candidate and the information on whether the DM candidate is self-conjugate or not. If this capability is not defined for the model in question, this dependency has to be replaced by equivalent dependencies. For the translation function defined above, the source code looks like this:
Note that this has to be placed in the correct namespace:
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Funded by SCOAP^{3}
Athron, P., Kozar, N.A., Balázs, C. et al. Thermal WIMPs and the scale of new physics: global fits of Dirac dark matter effective field theories. Eur. Phys. J. C 81, 992 (2021). https://doi.org/10.1140/epjc/s10052021097126