Mapping monojet constraints onto Simplified Dark Matter Models

The move towards simplified models for Run II of the LHC will allow for stronger and more robust constraints on the dark sector. However, there already exists a wealth of Run I data which should not be ignored in the run-up to Run II. Here we reinterpret public constraints on generic beyond-standard-model cross sections to place new constraints on a simplified model. We make use of an ATLAS search in the monojet $+$ missing energy channel to constrain a representative simplified model in which the dark matter couples to an axial-vector $Z'$. We scan the entire parameter space of our chosen model to set the strongest current collider constraints on it using the full 20.3 fb$^{-1}$ ATLAS 8 TeV dataset, and provide predictions for the constraints that can be set with 20 fb$^{-1}$ of 14 TeV data. Our technique can also be used for the interpretation of Run II data and provides a broad benchmark for comparing future constraints on simplified models.


Introduction
In recent years Effective Field Theories (EFTs) have become a popular framework with which to constrain the dark sector at the LHC [1][2][3][4][5][6][7][8][9][10][11][12][13]. In the simplest cases, the dark couplings and mediator masses are combined into a single effective energy scale, $\Lambda$, leaving this and the dark matter mass, $m_{\rm DM}$, as the only free parameters for each effective operator. EFT constraints have the advantage of being relatively model-independent, allowing constraints to be placed across a broad range of models and parameters. In addition they facilitate an easy comparison with direct detection experiments via the shared energy scale $\Lambda$. However it is now clear that EFTs must be used with extreme care at LHC energies, where the energy scale is large enough that the approximations used in the construction of EFTs cannot be assumed to be valid. At these energies and luminosities, the energy carried by the mediator is usually larger than the mediator mass, violating the EFT approximations, except in the case of large mediator masses or for dark-sector couplings approaching the perturbativity limit [12][13][14][15][16][17][18][19][20][21][22]. Depending on the mass and width of the mediator, this can lead to EFT constraints that are either stronger or weaker than the constraints would be on a UV-complete model, reducing their utility and making their validity questionable.
One solution is to rescale EFT constraints by truncating the simulated signal, such that only events for which the EFT approximation is valid are used to derive constraints [15,23,24]. This weakens the constraints but at the same time makes them substantially more robust, which is critical when considering bounds on beyond-standard-model parameters. Whilst this technique has the advantage of maintaining some of the elegance of EFTs, it has the serious disadvantage that it does not make full use of all potential signal events available in a UV-complete model, and so does not address the region of parameter space where EFT constraints are too weak. To constrain this region we need to consider models where the mediator can be resolved. On the other hand, the parameter space of full, well-motivated models such as supersymmetry [25] or extra dimensions [26] is broad, and by focusing solely on such models we run the risk of missing more generic signatures of the dark sector.
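The truncation procedure can be sketched as a simple event filter. The sketch below assumes a tree-level matching $M = \sqrt{g_q\, g_{\rm DM}}\,\Lambda$ between the effective scale and the mediator mass, and a hypothetical event record carrying the momentum transfer `q_tr`; it illustrates the idea rather than reproducing the exact procedure of Refs. [15,23,24].

```python
import math

def truncate_eft_signal(events, Lambda, g_q, g_dm):
    """Keep only events for which the EFT approximation is valid, i.e.
    whose momentum transfer Q_tr is below the implied mediator mass
    M = sqrt(g_q * g_dm) * Lambda (all quantities in GeV)."""
    M = math.sqrt(g_q * g_dm) * Lambda
    return [ev for ev in events if ev["q_tr"] < M]

# Hypothetical usage: with Lambda = 1 TeV and unit couplings,
# only events with Q_tr below 1000 GeV survive the truncation.
events = [{"q_tr": 400.0}, {"q_tr": 900.0}, {"q_tr": 1500.0}]
kept = truncate_eft_signal(events, Lambda=1000.0, g_q=1.0, g_dm=1.0)
```

The truncated sample yields a weaker but more robust limit, since the discarded events would only be described correctly in a UV-complete model.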
Hence, the usage of simplified models is now advocated by a number of groups [27][28][29][30][31][32]. Here we will use publicly available ATLAS constraints in the monojet + missing energy channel to constrain a simplified model with dark matter coupling to the standard model via exchange of an axial-vector $Z'$ mediator. The original search was used to constrain EFTs; however, the same data and analysis can be used to constrain a simplified model of choice through the model-independent limit on the visible cross-section contribution from beyond-standard-model processes. Such a reanalysis only requires simulation of the signal in the new model for each point in parameter space.
Simplified models have the advantage of a relatively small set of free parameters, and do not encounter the same validity problems as EFTs. However, their parameter space is still larger than for EFTs, which often necessitates arbitrary choices for one or more parameters in order to constrain those remaining. Here we will instead leave the dark matter mass, mediator mass, and coupling strengths all as free parameters, which we scan over and present constraints on as contours.
In addition, we derive constraints using an approximation to the signal cross section in which the width of the mediator factors out into the normalization. In this approximation the coupling strengths of the model affect only the normalization of the signal and not its spectrum, which greatly reduces the computational expense of signal simulation by reducing the dimensionality of the scan over parameter space by one. We test the validity of this technique by explicitly comparing the constraints obtained with and without the approximation.
In Section 2, we outline the choice of simplified model that we will be constraining. In Section 3, we describe our technique for converting the model-independent constraints on the visible monojet cross-section into constraints on this simplified model. In Section 4 we present our results, before we give our concluding remarks in Section 5.

Model
We consider a widely-used benchmark simplified model where Dirac DM interacts with the SM via a $Z'$-type mediator. This is described by the following Lagrangian interaction term:
$$\mathcal{L}_{\rm int} = Z'_\mu \left[ \sum_{q} \bar{q}\, \gamma^\mu \left( g^V_q + g^A_q \gamma^5 \right) q \;+\; \bar{\chi}\, \gamma^\mu \left( g^V_{\rm DM} + g^A_{\rm DM} \gamma^5 \right) \chi \right],$$
where $g^V_i$, $g^A_i$ are respectively the vector and axial-vector coupling strengths between the mediator and quarks ($i=q$) and DM ($i={\rm DM}$). This is a well-motivated simplified model that has been studied extensively, including in searches by CMS [33] and ATLAS [34], and by numerous other groups in both the UV-complete and EFT limits, e.g. Ref. [29,31]. It is part of the wider family of dark $Z'$ portal models which have been studied previously in e.g. [35][36][37][38]. The LHC is relatively insensitive to the mixture of vector/axial-vector couplings [24]; however, this ratio has a large effect on the sensitivity of direct detection experiments to this model. A vector coupling induces a spin-independent (SI) WIMP-nucleon scattering rate, while an axial-vector coupling induces a spin-dependent (SD) rate [39]. Current bounds on SI interactions are much stronger than those on SD, to the point where direct detection constraints are generally stronger than LHC constraints for models with pure vector couplings, and vice-versa for pure axial-vector couplings, as seen in e.g. Ref. [40]. For this reason we consider a pure axial-vector coupling, setting $g^V_{\rm DM} = g^V_q = 0$ and defining $g_{\rm DM} \equiv g^A_{\rm DM}$, $g_q \equiv g^A_q$. We assume minimal flavour violation (MFV) [41], such that the quark-mediator coupling $g_q$ is the same for each species of quark. We require $g_{\rm DM}, g_q \leq 4\pi$ individually in order for the couplings to remain in the perturbative regime.
For the model we consider, the total width of the axial-vector mediator is given by:
$$\Gamma = \frac{g_{\rm DM}^2 M}{12\pi} \left( 1 - \frac{4 m_{\rm DM}^2}{M^2} \right)^{3/2} + \sum_q \frac{3\, g_q^2 M}{12\pi} \left( 1 - \frac{4 m_q^2}{M^2} \right)^{3/2},$$
where $M$ is the mediator mass and each term contributes only when the corresponding decay channel is kinematically open. With the assumption that $g_q$ is equal for each flavour of quark, the width can become very large, rising above $\Gamma \sim M$ for relatively small couplings, for example at $g_q = g_{\rm DM} \approx 1.45$. We note that such large widths make the Breit-Wigner form of the propagator assumed in our event generation questionable (see for example [42] for a recent discussion in the context of Higgs physics), but since we are interested in limit-setting we will not consider this problem in any more detail. The width above assumes no decay channels other than to quarks and DM; it is conceivable that such a mediator could also decay to standard-model leptons or other particles. Given that the width to quarks alone is already very large and the possible couplings to other particles are unknown, we confine ourselves to the more 'minimal' model where the mediator couples only to quarks and DM. For a study of how the limits change when the width is manually made larger (without considering specific additional decay modes) see [31].
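A minimal numerical sketch of this width, assuming the expression above with three colours per quark; the quark mass values below are illustrative inputs, not part of the original text.

```python
import math

# illustrative quark masses in GeV (u, d, s, c, b, t)
QUARK_MASSES = (0.002, 0.005, 0.095, 1.27, 4.18, 172.9)

def mediator_width(M, m_dm, g_dm, g_q):
    """Total width of the axial-vector mediator: each kinematically open
    channel contributes g^2 * M / (12*pi) * beta^3, times 3 colours for quarks."""
    def partial(g, m, colours):
        if M <= 2.0 * m:
            return 0.0  # channel kinematically closed
        beta = math.sqrt(1.0 - 4.0 * m * m / (M * M))
        return colours * g * g * M / (12.0 * math.pi) * beta ** 3
    return partial(g_dm, m_dm, 1) + sum(partial(g_q, mq, 3) for mq in QUARK_MASSES)

# For g_q = g_dm ≈ 1.45 the width indeed reaches Gamma ~ M, as quoted in the text
ratio = mediator_width(1000.0, 10.0, 1.45, 1.45) / 1000.0
```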

Reinterpreting Monojet Constraints
We reinterpret the ATLAS monojet results for 10.5 fb$^{-1}$ of 8 TeV data [43] using the simplified model introduced above. Our signal prediction is obtained by implementing the model in the FeynRules [44] and MadGraph5_aMC@NLO 2.1.2 [45] framework to generate leading-order (LO) parton-level events using the NNPDF2.3 LO parton distribution functions (PDFs) [46]. These are matched to Pythia 8.185 [47] using the MLM algorithm with a matching scale of 80 GeV, for showering and hadronisation with tune 4C. We generate $\chi\chi$ + 0, 1, and 2 jets in the matrix element before matching to the parton shower. We use the default MadEvent factorization and renormalization scales ($\mu_{R,F}$), which in this case are both approximately the transverse mass of the $\chi\chi$ system. Our approach only makes leading-order + parton shower (LOPS) predictions, compared to the next-to-leading-order + parton shower (NLOPS) predictions used in a similar study [29] of CMS results [48], which means we suffer from a larger theoretical uncertainty due to scale dependencies. We can attempt to estimate this by varying our choice of $\mu_{R,F}$ by a factor of two; this shows a weak dependence on the choice of scales of $+10\%/-5\%$ for a few representative choices of $M$, $m_{\rm DM}$, which is clearly not a realistic estimate of the uncertainty: previous studies [49][50][51] with other choices of scales have found fixed-order NLO corrections ranging from $\sim 20{-}40\%$. We do however note that, based on the results in [51], we expect fixed-order NLO corrections to ultimately be modest after matching to a parton shower and applying the ATLAS monojet analysis cuts, since the parton shower dilutes differences, helped by the loose cuts on additional jets. As such they should have a limited impact on our quantitative results and be negligible for qualitative ones.
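As a sketch, the generation chain above corresponds to MadGraph5_aMC@NLO commands along the following lines. The model and particle names (`DMsimp_axial`, `chi`) are placeholders for the actual FeynRules/UFO export, and the matching settings live in the run card; this is an illustration of the workflow, not the cards used for the results.

```
import model DMsimp_axial          # hypothetical UFO model from FeynRules
generate p p > chi chi~ @0         # chi chi~ + 0 jets
add process p p > chi chi~ j @1    # + 1 jet
add process p p > chi chi~ j j @2  # + 2 jets
output monojet_zprime
launch
# run_card: ickkw = 1, xqcut = 80   -> MLM matching at a scale of 80 GeV
# showering/hadronisation then performed in Pythia 8.185 with tune 4C
```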
We analyze the generated events using the ATOM framework [52,53] based on Rivet [54]. We first divide the final state into topological clusters and find jets with the anti-$k_t$ algorithm [55] using $R = 0.4$ in FastJet [56]. We then smear the $p_T$ of these jets based on typical values for the ATLAS detector, leaving the $E_T^{\rm miss}$ unsmeared (we are not aware of any ATLAS $E_T^{\rm miss}$ smearing values which could be unambiguously applied to our case; based on the results in [57] we expect the plateau to have been reached for all our signal regions, however). Finally we apply the cuts from [43]: we require at most two jets with $p_T > 30$ GeV and $|\eta| < 4.5$, with $|\eta_{j1}| < 2$ and $\Delta\phi(j_2, E_T^{\rm miss}) > 0.5$, where $j_1$ and $j_2$ are the leading and subleading jets respectively. We define four signal regions based on increasingly hard thresholds on $p_T^{j1}$ and $E_T^{\rm miss}$. The procedure has been validated by recreating the ATLAS limits set on $\Lambda$ for the D8 EFT operator, which corresponds to our simplified model; a comparison for SR3 is presented in table 2. We consistently overestimate the limit by a few percent, reflecting the less advanced nature of our detector simulation, but the agreement is good enough for our purposes: we find sub-2% differences for the $m_{\rm DM}$ values relevant to us. Note that we only perform the comparison for SR3, as it is usually the most constraining signal region and the only one for which ATLAS results are reported; we expect the results to be similar for the other signal regions.

Table 2: Comparison of limits set on the D8 EFT operator by ATLAS [43] and by us using only SR3.
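The event selection described above can be sketched as follows. The event representation (a list of jet `(pt, eta, phi)` tuples) is a hypothetical simplification of the actual ATOM/Rivet analysis, and the signal-region thresholds on $p_T^{j1}$ and $E_T^{\rm miss}$ are left as parameters since the specific values are not repeated here.

```python
import math

def delta_phi(phi1, phi2):
    """Absolute azimuthal separation folded into [0, pi]."""
    dphi = abs(phi1 - phi2) % (2.0 * math.pi)
    return min(dphi, 2.0 * math.pi - dphi)

def passes_monojet(jets, met, met_phi, pt_j1_min, met_min):
    """Monojet preselection from the text: at most two jets with
    pt > 30 GeV and |eta| < 4.5, leading jet |eta| < 2, and
    dphi(j2, MET) > 0.5, plus signal-region thresholds."""
    hard = sorted((j for j in jets if j[0] > 30.0 and abs(j[1]) < 4.5),
                  key=lambda j: -j[0])
    if not hard or len(hard) > 2:
        return False            # zero or more than two hard jets
    if abs(hard[0][1]) >= 2.0:
        return False            # leading jet too forward
    if len(hard) == 2 and delta_phi(hard[1][2], met_phi) <= 0.5:
        return False            # subleading jet aligned with MET
    return hard[0][0] > pt_j1_min and met > met_min
```

An event with a single hard central jet recoiling against large $E_T^{\rm miss}$ passes, while a topology with the subleading jet aligned with the missing energy is rejected by the $\Delta\phi$ cut.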
Some past constraints on simplified models have used a fixed benchmark width. In that case the cross section is sensitive only to the product $g_{\rm DM} \cdot g_q$ and not to the couplings individually; further, this dependence easily factorises out,
$$d\sigma(g_{\rm DM}, g_q) = (g_{\rm DM} \cdot g_q)^2 \, d\sigma(g_{\rm DM} = g_q = 1), \qquad (3.1)$$
which simplifies the analysis since the couplings affect only the magnitude of the signal, not the spectral shape. Including the physical width complicates things, since both the magnitude and the spectrum then depend on $g_{\rm DM} \cdot g_q$ and $g_q/g_{\rm DM}$. To deal with this, we choose a fixed $g_q/g_{\rm DM}$ and scan the $M$-$m_{\rm DM}$-$g_{\rm DM}\cdot g_q$ parameter space, interpolating to find 95% confidence level (CL) exclusion contours for each signal region. We then find the most constraining signal region for each $(m_{\rm DM}, M)$ point and use it to create an interpolated 95% CL exclusion contour that makes use of all the signal regions. This is a necessary complication if one wants to present 2D contour limits on $g_{\rm DM} \cdot g_q$ with the physical width: for a given product of coupling strengths $g_{\rm DM} \cdot g_q$, Ref. [58] found that using a benchmark width can lead to unphysical widths for which no value of $g_q/g_{\rm DM}$ reproduces the true width. However, in the resonant region the cross section can be approximated as $\sigma \propto g_q^2 g_{\rm DM}^2/\Gamma$ (for fixed $M$, $m_{\rm DM}$), which allows us to set limits on $g_{\rm DM} \cdot g_q$ while spending computing time only on a single scan over $M$-$m_{\rm DM}$. This approximation should work well in the part of parameter space where $\Gamma \ll M$ [18,59], but fails for larger widths and, more importantly, ignores PDF effects; we present a comparison showing this in section 4.3.
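This rescaling can be sketched numerically. The sketch below uses a minimal width with massless quarks and DM purely for illustration, and rescales a cross section simulated once at reference couplings via $\sigma \propto g_q^2 g_{\rm DM}^2/\Gamma$ at fixed $M$, $m_{\rm DM}$; the function names are our own.

```python
import math

def minimal_width(M, g_dm, g_q, n_flavours=6):
    """Mediator width in the massless-fermion limit (illustration only):
    Gamma = (g_dm^2 + 3 * n_flavours * g_q^2) * M / (12*pi)."""
    return (g_dm ** 2 + 3 * n_flavours * g_q ** 2) * M / (12.0 * math.pi)

def sigma_rescaled(sigma_ref, M, g_dm, g_q, g_ref=1.0):
    """Resonant-region approximation sigma ∝ (g_q * g_dm)^2 / Gamma,
    rescaling a reference cross section simulated at g_dm = g_q = g_ref."""
    return (sigma_ref * (g_dm * g_q) ** 2 / g_ref ** 4
            * minimal_width(M, g_ref, g_ref) / minimal_width(M, g_dm, g_q))

# With equal couplings, the width in the denominator cancels one power of
# the squared couplings, so sigma scales as g^2 in this massless limit.
```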

Results
Our results using interpolation in $M$-$m_{\rm DM}$-$g_{\rm DM}\cdot g_q$ are presented in figure 1, results using the width-including cross section approximation mentioned above are presented in figure 2, and the ratios of the limits set in the two cases are presented in figure 3.
We see that the limits generally lie in the region of parameter space where the width of the mediator is large, often larger than its mass, for $g_q/g_{\rm DM} = 2$ or $5$. For lower values of the coupling ratio we see the expected resonant enhancement of the cross section, which allows relatively strong limits to be set when the mediator is kinematically allowed to be produced on-shell.

Limits from dijet resonances
We can attempt to make use of limits from dijet resonance searches [33,[60][61][62] to further constrain our model: the dashed white line on the plots shows where the width of the mediator becomes narrow enough to potentially violate such constraints (we take this to be $\Gamma/M \lesssim 0.05$ to be conservative, but note that there are recent searches [62] which have constrained much wider resonances). We note that this happens for $M \lesssim 500$ (600) GeV for $g_q/g_{\rm DM} = 1/2$ (1/5) and $m_{\rm DM} < 100$ (150) GeV. Comparing to the detailed $Z'$ dijet analysis in [63] and the recent ATLAS update in [60], and assuming the results will not change drastically when using an axial-vector coupling instead of a vector one, we see that there is some potential in this procedure. Due to the sensitive dependence on the width, it is worth stressing that since we assume no additional decays for the mediator, constraints set using this method cannot be considered conservative: the width we use is the minimum width assuming MFV, and we currently have no way of knowing how realistic this estimate is. We also note that interference effects with the $Z/\gamma^*$ should be properly taken into account when using this strategy. We expect these to play a similar role as in Drell-Yan [64] and have checked that this appears to be the case, but a detailed analysis is outside the scope of this paper.
It is also possible to make use of dijet angular distributions which are sensitive to wider resonances than the dijet mass spectrum [65,66], but we make no attempts to investigate this option here.

Comparison to previous results
Our results are complementary to those presented in [29,31], as we use ATLAS monojet constraints instead of CMS ones. Our limits are generally weaker, which is expected for at least three reasons: we use a smaller data set (10.5 fb$^{-1}$ versus 19.5 fb$^{-1}$), we provide 95% instead of 90% CL exclusion limits, and our LOPS calculation might underestimate the cross section compared to the more accurate NLOPS calculation in [29]. Taking these factors into account, there is excellent qualitative agreement in both the shape and the absolute values of the limits set, which reflects the similarity of the ATLAS and CMS monojet searches. When comparing to [31] we see that increasing the $g_q/g_{\rm DM}$ ratio has a similar effect to allowing for additional invisible decay modes by using larger widths for the mediator, as one would expect.

Using a cross section approximation including the width
As mentioned at the end of section 3, we can compare our results to ones obtained by reweighting the cross section for a single value of $g_{\rm DM}\cdot g_q$, to see how well the simple cross section approximation $\sigma \propto g_q^2 g_{\rm DM}^2/\Gamma$ reproduces the full results. The results using this reweighting are presented in figure 2 and the ratio of the limits in figure 3. As expected, the approximation works well for values of $g_{\rm DM}\cdot g_q$ small enough that $\Gamma \ll M$, but fails when $M \lesssim 2m_{\rm DM}$ and for higher values of $g_{\rm DM}\cdot g_q$, mainly due to ignoring PDF effects. As a rough rule of thumb, the approximation is reasonable for limit-setting purposes as long as one restricts the parameter space to the region where $\Gamma \lesssim M/2$, for all values of $g_q/g_{\rm DM}$ considered here; however, the more the constraints are limited by PDFs, the worse it becomes (which is why the lower $g_q/g_{\rm DM}$ values, which probe higher $M$, show deviations at lower values of $\Gamma/M$).

Conclusion
As the LHC approaches Run II there is a clear move towards supplementing EFT analyses with simplified models, as a stronger and more robust way to constrain the dark sector. The same arguments apply to Run I data, and thus it is useful to reinterpret existing constraints on the dark sector in the simplified-model framework. This has the added benefit of providing clearer benchmarks and comparisons for future studies of simplified models at higher LHC energies and luminosities. We have demonstrated this with constraints on a simple $Z'$ model with an axial-vector coupling. This leads to constraints that are consistent and competitive with dedicated searches, while retaining a broad scan of the parameter space. Whilst the scope of this analysis is limited to a single simplified model, the technique shows good prospects for the reinterpretation of existing constraints across a broader model space.
The parameter space for simplified models spans a minimum of 4 dimensions, making the parameter scan and visualisation of the subsequent constraints more challenging than for EFTs. The common restriction to 2-D slices of parameter space does allow for easy comparison between several constraints, but reduces our knowledge of the model as a whole. Here we instead scan over the full 4-D parameter space, presenting results as contours, allowing us to retain the maximum information possible on the dark sector in the minimum number of figures.
We have also studied the use of an approximation to the cross section that reduces the dimensionality of the parameter space which requires full simulation, and given some rough guidelines for its use. We have shown that at current LHC sensitivity, the parameter space is split between regions where the approximation is useful, and regions where the constraint on the coupling strength is too large for the approximation to be accurate.