Profile likelihood maps of a 15-dimensional MSSM

We present statistically convergent profile likelihood maps obtained via global fits of a phenomenological Minimal Supersymmetric Standard Model with 15 free parameters (the MSSM-15), based on over 250M points. We derive constraints on the model parameters from direct detection limits on dark matter, the Planck relic density measurement and data from accelerator searches. We provide a detailed analysis of the rich phenomenology of this model, and determine the SUSY mass spectrum and dark matter properties that are preferred by current experimental constraints. We evaluate the impact of the measurement of the anomalous magnetic moment of the muon ($g-2$) on our results, and provide an analysis of scenarios in which the lightest neutralino is a subdominant component of the dark matter. The MSSM-15 parameters are relatively weakly constrained by current data sets, with the exception of the parameters related to dark matter phenomenology ($M_1$, $M_2$, $\mu$), which are restricted to the sub-TeV regime, mainly due to the relic density constraint. The mass of the lightest neutralino is found to be $< 1.5$ TeV at 99% C.L., but can extend up to 3 TeV when excluding the $g-2$ constraint from the analysis. Low-mass bino-like neutralinos are strongly favoured, with spin-independent scattering cross-sections extending to very small values, $\sim 10^{-20}$ pb. ATLAS SUSY null searches strongly impact this mass range, and thus rule out a region of parameter space that is outside the reach of any current or future direct detection experiment. The best-fit point obtained after inclusion of all data corresponds to a squark mass of 2.3 TeV, a gluino mass of 2.1 TeV and a 130 GeV neutralino with a spin-independent cross-section of $2.4 \times 10^{-10}$ pb, which is within the reach of future multi-ton scale direct detection experiments and of the upcoming LHC run at increased centre-of-mass energy.


Introduction
The Large Hadron Collider (LHC) has delivered $\sim 20$ fb$^{-1}$ of integrated luminosity at $\sqrt{s} = 8$ TeV, but evidence for new physics beyond the Standard Model (SM) is still lacking.
In particular, the data contain no signature of Supersymmetry (SUSY), the most widely studied theory of physics beyond the SM, as it may offer a solution to the hierarchy problem and to the dark matter problem of the universe. In light of the lack of a signal in direct searches for SUSY, the ATLAS and CMS collaborations have placed strong lower bounds, of order 1 TeV, on gluino and squark masses. On the other hand, the recent discovery of a SM-like Higgs boson with a mass $m_h \sim 125$ GeV requires very large top squark sector masses, generally of several TeV. This excludes generic supersymmetric theories in which all the superpartners have masses below $\sim 1$ TeV. The simplest SUSY realization, the Minimal Supersymmetric Standard Model (MSSM), has 126 Lagrangian parameters, including complex phases, which makes its phenomenological study impractical. If one applies a concrete mechanism that mediates SUSY breaking to the observable sector, the number of parameters can be reduced significantly. This is for instance the case for models like the constrained MSSM (cMSSM), in which one demands universal scalar masses, gaugino masses and trilinear couplings at a high energy scale. The cMSSM is certainly the most studied model in the literature, and its viability with respect to the relevant available data has been assessed using several methods, from grid and random scans to, more recently, statistically convergent methods (both Bayesian and profile likelihood).
The LHC data have severely constrained this model, so much so that it is in tension with a natural electroweak symmetry breaking scale, since the SUSY-breaking parameters are pushed to large values. One exception is the focus point region, where the weak scale is insensitive to variations in these parameters. However, this region is becoming increasingly constrained by direct dark matter searches, such as the XENON100 and LUX experiments. This conclusion also applies to less constrained models such as the non-universal Higgs mass model (NUHM) [1,2], the non-universal gaugino mass model (NUGM) [3] and the non-universal gaugino and Higgs mass model (NUGHM) [4].
One approach to address this issue is to avoid explicitly assuming a SUSY breaking mechanism. Instead, one can reduce the 126 MSSM parameters to 19 parameters by imposing phenomenological constraints, which define the so-called phenomenological MSSM (pMSSM) [5]. In this scheme, one assumes that: (i) All the soft SUSY-breaking parameters are real, so that the only source of CP-violation is the CKM matrix. (ii) The matrices of the sfermion masses and the trilinear couplings are diagonal, in order to avoid FCNCs at tree-level. (iii) The first and second sfermion generations are universal, to avoid severe constraints from, for instance, $K^0$-$\bar{K}^0$ mixing.

JHEP09(2014)081
The pMSSM has been studied in the past using random scans [6][7][8], as well as Bayesian methods [9][10][11][12]. Both approaches have limitations. While appearing uniformly distributed in 1D and 2D projections, random scans in large-dimensional parameter spaces are actually highly concentrated in a thin shell of the hypersphere inscribed in the scan box (the "concentration of measure" phenomenon). This means that only a negligible fraction of the pMSSM parameter space is explored by random scans. Furthermore, such scans typically only retain points within e.g. 2σ cuts of the observed experimental constraints. Without the explicit use of a likelihood function, random scans have no way of directing the exploration towards more interesting regions of parameter space, i.e. regions where the likelihood is larger. The Bayesian approach is much more efficient, but the prior dependence of the posterior distribution can be very strong, especially for high-dimensional models such as the pMSSM with a large number of effectively unconstrained parameters.
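The "concentration of measure" effect mentioned above is easy to demonstrate numerically. The following sketch (our own illustration, not code from the analysis) draws uniform random points in a 15-dimensional box and shows that their distances from the centre cluster in a thin shell, leaving the interior of the scan range essentially unexplored:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 15, 100_000

# Uniform "random scan" in a d-dimensional box centred at the origin.
pts = rng.uniform(-0.5, 0.5, size=(n, d))
r = np.linalg.norm(pts, axis=1)

# Radii cluster around sqrt(d/12) ~ 1.12 with a small spread: essentially
# no points land in the inner half of the box, so the region around the
# centre of the scan range is never explored.
print(r.mean(), r.std())
print(np.mean(r < 0.5 * r.mean()))   # fraction in the inner region: ~0
```

The same effect grows rapidly with dimension, which is why a likelihood-driven exploration becomes essential for a 15- or 19-dimensional model.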
In this paper, we adopt a Bayesian approach to scanning (using a full likelihood function and an algorithm that generates samples from the posterior distribution), but then derive profile likelihood maps, which are in principle prior-independent, for a more robust statistical interpretation. We perform a profile likelihood analysis of a simplified version of the pMSSM with 15 parameters, which we refer to as the MSSM-15. The number of model parameters is reduced by some reasonable assumptions which retain the most relevant phenomenological aspects of the pMSSM in terms of collider and dark matter searches. Our likelihood includes all available accelerator constraints and a newly developed technique to approximate joint constraints from inclusive searches at the LHC. We also adopt cosmological (from Planck) and astro-particle physics constraints (from direct detection experiments) that apply to the lightest neutralino, discussing both the case where it constitutes the entirety or just part of the dark matter in the universe (see e.g. refs. [13][14][15] and references therein).
This paper is organised as follows. We introduce our theoretical model and statistical approach in section 2. In section 3 we present the profile likelihood maps from our scans, both with and without LHC constraints. Section 4 contains our conclusions. In the appendix, we describe our approach to approximating the likelihood for ATLAS 0-lepton and 3-lepton inclusive searches, and we demonstrate the statistical convergence of our profile likelihood maps.

Theoretical model
If one is mainly interested in the phenomenology of the MSSM, the number of parameters can be significantly reduced through a number of reasonable simplifying assumptions. In this paper, we study such a phenomenological version of the MSSM, described by 15 model parameters, which we call the MSSM-15. This choice is motivated by the present lack of experimental evidence for SUSY: while highly constrained models such as the cMSSM are under pressure in light of the recent negative sparticle searches at the LHC, there is at present no experimental indication that the full freedom of the 19-dimensional pMSSM is required.

The sfermion soft masses are defined as in the pMSSM. Namely, the sfermion mass sector is completely described by the first- and second-generation squark mass $m_Q$, the third-generation squark masses $m_{Q_3}$, $m_{U_3}$ and $m_{D_3}$, the first- and second-generation slepton mass $m_L$, and the third-generation slepton masses $m_{L_3}$ and $m_{E_3}$ (where $m_{U_3}$, $m_{D_3}$ and $m_{E_3}$ refer to the superpartners of the right-handed third-generation quarks and leptons, respectively).
The trilinear couplings of the sfermions enter the off-diagonal parts of the sfermion mass matrices. Since these entries are proportional to the Yukawa couplings of the respective fermions, we can approximate the trilinear couplings associated with the first- and second-generation fermions to be zero. Furthermore, due to the large top Yukawa coupling, the trilinear coupling of the top, $A_t$, is in general more relevant than the trilinear couplings of the other third-generation fermions. We therefore assume unification of the bottom and tau trilinear couplings at the GUT scale, so that both are described by the same parameter $A_0 \equiv A_b = A_\tau$. After the application of the electroweak symmetry breaking conditions, the Higgs sector can be fully described by the ratio of the Higgs vacuum expectation values $\tan\beta$ and the Higgs masses $m^2_{H_i}$. Instead of the Higgs masses, we choose to use the higgsino mass parameter $\mu$ and the mass of the pseudoscalar Higgs $m_A$ as input parameters, as they are more directly related to the phenomenology of the model. The final ingredients of our model are the three gaugino masses: the bino mass $M_1$, the wino mass $M_2$ and the gluino mass $M_3$.
The above parameters describe a 15-dimensional realisation of the pMSSM which encapsulates all phenomenologically relevant features of the full model that are of interest for dark matter and collider experiments. The model parameters are displayed in table 1, along with their prior ranges (see next section). All of the input parameters are defined at the SUSY scale $\sqrt{m_{\tilde{t}_1} m_{\tilde{t}_2}}$, with the exception of $A_0$, which is defined at $10^{16}$ GeV and run down to the SUSY scale using the RGEs.
In this scenario there are, in principle, five arbitrary phases: those of the gaugino masses $M_i$ ($i = 1, 2, 3$) and of $\mu$, plus a common phase for the trilinear couplings, provided we assume that the trilinear matrices are flavour-diagonal. However, one may perform a $U(1)_R$ rotation on the gaugino fields to remove one of the phases of the $M_i$; for consistency with the literature we choose the phase of $M_2$ to be zero. Note that this $U(1)_R$ transformation affects neither the phase of the trilinear couplings (since the reality of the Yukawa matrices fixes the phases of the fields that couple to them) nor the phase of $\mu$. Therefore, in the CP-conserving case, $M_1$, $M_3$, $\mu$ and the trilinear couplings can take either sign.

Scanning algorithm and profile likelihood maps
We adopt a Bayesian approach to sample the MSSM-15 parameter space, and then use the resulting posterior samples to produce profile likelihood maps. This is because the large dimensionality of the MSSM-15 and the relatively weak constraints imposed by experimental data result in a (Bayesian) posterior distribution suffering from severe prior-dependent volume effects. These would make the interpretation of the Bayesian posterior problematic.

Table 1. MSSM-15 parameters and top mass value used in this paper, and prior ranges for the two prior choices adopted in our scans ($M_t$ range: 170.6-175.8 GeV). "Flat priors" are uniform on the parameter itself (within the ranges indicated), while "Log priors" are uniform in the log of the parameter (within the ranges indicated).
We therefore focus on the profile likelihood (PL) for one or two parameters at a time. The profile likelihood is obtained by maximising the likelihood function over the parameters that are not displayed. For example, for a single parameter of interest $\theta_i$, the other parameters $\Psi = \{\theta_1, \ldots, \theta_{i-1}, \theta_{i+1}, \ldots, \theta_n\}$ are eliminated from the 1D profile likelihood by maximising over them:
$$\mathcal{L}_{\rm prof}(\theta_i) \equiv \max_{\Psi} \mathcal{L}(\theta_i, \Psi), \qquad (2.1)$$
where $\mathcal{L}(\theta_i, \Psi)$ is the full likelihood function. Our samples of the MSSM-15 parameter space are distributed according to the posterior pdf, but we simply ignore their density in producing profile likelihood maps by maximising over the hidden variables. Confidence intervals/regions from the resulting 1D/2D profile likelihood maps are determined by adopting the usual Neyman construction with the profile likelihood ratio $\lambda(\theta_i)$ as test statistic:
$$\lambda(\theta_i) = \frac{\mathcal{L}(\theta_i, \hat{\hat{\Psi}})}{\mathcal{L}(\hat{\theta}_i, \hat{\Psi})}, \qquad (2.2)$$
where $\hat{\hat{\Psi}}$ is the conditional maximum likelihood estimate (MLE) of $\Psi$ with $\theta_i$ fixed, and $\hat{\theta}_i, \hat{\Psi}$ are the unconditional MLEs. Values of $\Delta\chi^2 = -2\ln\lambda(\theta_i)$ corresponding to 68%, 95% and 99% confidence intervals are obtained from Wilks' theorem. The generalisation to 2D PL maps is straightforward. We have upgraded the publicly available SuperBayeS-v1.5 package [19][20][21][22] to a new version, SuperBayeS-v2.0, which will shortly be released to the public. This latest version of SuperBayeS is interfaced with SoftSUSY 3.3.10 [23,24] as SUSY spectrum calculator, MicrOMEGAs 2.4 [25,26] to compute the abundance of dark matter, DarkSUSY 5.0.5 [27,28] for the computation of $\sigma^{\rm SI}_{\tilde{\chi}^0_1\text{-}p}$ and $\sigma^{\rm SD}_{\tilde{\chi}^0_1\text{-}p}$, SuperIso 3.0 [29,30] to compute $\delta a^{\rm SUSY}_\mu$ and B(D)-physics observables, SusyBSG 1.5 for the determination of BR($B \to X_s \gamma$) [31,32] and FeynHiggs 1.9 [33] to compute the Higgs production cross-sections and decay amplitudes.
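As an illustration of how a 1D profile likelihood and Wilks-based intervals can be extracted from a set of samples, the following sketch (our own minimal example with a toy Gaussian likelihood, not part of SuperBayeS) bins the samples in the parameter of interest and keeps the best likelihood value in each bin:

```python
import numpy as np

def profile_1d(theta_i, loglike, nbins=60):
    """1D profile log-likelihood: maximise ln L over all hidden parameters,
    approximated here by keeping the best sample within each bin of theta_i."""
    edges = np.linspace(theta_i.min(), theta_i.max(), nbins + 1)
    idx = np.clip(np.digitize(theta_i, edges) - 1, 0, nbins - 1)
    prof = np.full(nbins, -np.inf)
    np.maximum.at(prof, idx, loglike)            # running max per bin
    return 0.5 * (edges[:-1] + edges[1:]), prof

# Toy stand-in for posterior samples of (theta_1, theta_2) with a Gaussian
# likelihood; in the real analysis the samples come from MultiNest.
rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0, size=(200_000, 2))
lnlike = -0.5 * (theta[:, 0] ** 2 + theta[:, 1] ** 2)

centers, prof = profile_1d(theta[:, 0], lnlike)
delta_chi2 = -2.0 * (prof - prof.max())
# Wilks' theorem, 1 dof: 68/95/99% CL at delta_chi2 = 1.00 / 3.84 / 6.63.
interval_68 = centers[delta_chi2 < 1.0]          # roughly [-1, 1] here
```

Note that the sample density never enters: only the maximum likelihood per bin matters, which is why profiling needs far more samples than marginalisation to resolve the tails.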
For the computation of the electroweak observables described in section 2.5, we have implemented the complete one-loop corrections and the available MSSM two-loop corrections, as well as the full Standard Model results [34].
SuperBayeS-v2.0 is interfaced with the publicly available code MultiNest v2.18 [35,36], which we use to obtain samples from the posterior distribution. As a multi-modal implementation of the nested sampling algorithm [37], MultiNest is an extremely efficient scanning algorithm that can reduce the number of likelihood evaluations required for an accurate mapping of the Bayesian posterior probability distribution function by up to two orders of magnitude with respect to conventional MCMC methods. This Bayesian algorithm, originally designed to compute the model likelihood and to accurately map out the posterior, is also able to reliably evaluate the profile likelihood, given appropriate settings, as demonstrated in [38].
As motivated above, we use the posterior samples to extract 1D and 2D (prior-independent) profile likelihood maps by maximising the likelihood over all other parameter dimensions. This, however, requires a much larger number of samples than marginalisation of the posterior [38], as well as dedicated settings of the MultiNest code. We adopt the recommendations of ref. [38], and use a tolerance parameter tol $= 10^{-4}$ and a number of live points $N_{\rm live} = 2 \times 10^4$. To further increase the resolution of our profile likelihood maps, we store the likelihood and parameter values of all likelihood evaluations performed by MultiNest. This includes all samples that would usually be discarded because they do not lie above the iso-likelihood contour in the replacement step of the nested sampling algorithm. This increases the number of likelihood values by a factor $> 20$, and allows for a higher-resolution profile likelihood mapping, especially in the tails of the profile likelihood, at no additional computational cost.

Prior choices and ranges
In our approach, the prior becomes a device to concentrate the scan in certain regions of parameter space. We adopt two very different prior distributions: "Flat priors" are uniform on all model parameters, while "Log priors" are uniform in the log of all model parameters, except for $\tan\beta$, on which a uniform prior is chosen (see table 1).

Figure 1. 1D prior distributions for our two choices of priors for the three gaugino masses and the first- and second-generation squarks (top row). The bottom row depicts the implied distribution for some observables of interest, namely the relic abundance, the neutralino mass, the gluino mass and the spin-independent neutralino-proton scattering cross-section. The distributions were obtained from a scan including no experimental constraints; unphysical points were discarded. When samples from both priors are merged in our profile likelihood analysis, we obtain a detailed sampling of the entirety of the parameter space.

Flat priors tend to concentrate sampling towards large values of the parameters (as most of the prior volume lies there), while log priors concentrate the scan in the low-mass region (as every decade in the parameter values is given the same a priori probability under this metric). We then merge the chains resulting from the flat and log prior scans to achieve a reliable mapping of the (prior-independent) profile likelihood function, as advocated in ref. [38].
Our profile likelihood maps, which we obtain by merging the samples gathered with both priors, explore in detail both the low-mass and the high-mass regions, for a more thorough scanning of the entire parameter space. This is demonstrated in figure 1, which shows the marginalised 1D prior distributions for a few representative quantities, obtained from a scan that does not include any experimental constraints (constant likelihood function). As in previous analyses (e.g. refs. [22,39,42]), we assign a zero likelihood to unphysical points in parameter space that do not fulfil the conditions of radiative electroweak symmetry breaking or that lead to tachyonic states. Additionally, we discard all points for which the lightest neutralino is not the LSP. The distributions shown in figure 1 were obtained after discarding such unphysical points, and thus can appear non-flat even in the variables on which a flat prior has been imposed.
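The complementary coverage of the two priors can be checked with a quick numerical sketch (our example; the range below is illustrative, not one of the table 1 ranges):

```python
import numpy as np

rng = np.random.default_rng(2)
lo, hi = 0.05, 10.0          # illustrative mass range in TeV
n = 100_000

flat = rng.uniform(lo, hi, n)                            # "Flat prior"
logp = np.exp(rng.uniform(np.log(lo), np.log(hi), n))    # "Log prior"

# The flat prior puts only ~10% of its samples below 1 TeV, while the log
# prior puts ~57% there, spreading samples evenly over the ~2.3 decades of
# the range: merging the two scans covers both mass regimes.
print(np.mean(flat < 1.0), np.mean(logp < 1.0))
```

This is the quantitative content of figure 1: neither prior alone samples the whole range densely, but their union does.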
In terms of prior ranges, we set the upper limit for the gaugino masses, $\mu$ and $m_A$ to 5 TeV. For the squark and slepton masses we choose an upper prior boundary of 10 TeV, to allow for the large stop masses favoured by the Higgs mass measurement. For consistency, the same upper boundary is applied to the trilinear couplings $A_t$ and $A_0$. All of the above choices for the upper boundary can be justified by considering that the profile likelihood becomes approximately flat below the boundaries, which implies that further increasing the ranges would have no qualitative impact on our results. For the ratio of the Higgs vacuum expectation values we choose a prior range $\tan\beta \in [2, 62]$, ensuring that the Yukawa couplings do not become non-perturbatively large.
In accordance with previous analyses (see ref. [39]), we run 10 different MultiNest scans for the "All data" case and 5 scans for each of the "without $g-2$" and "Planck upper limit" cases. We compared the best-fit points and profile likelihood functions resulting from the different scans and found consistent results (within numerical noise). This verifies that a reliable exploration of the MSSM-15 parameter space is achieved and confirms the robustness of our profile likelihood results.
The profile likelihood maps presented in this work are obtained from a combined total of 261M (all data), 124M (excluding the $g-2$ constraint) and 91M (relaxing the requirement that the neutralino is the only dark matter component) likelihood values. We estimate the total computational effort expended for these analyses at approximately 105 CPU years.

Nuisance parameters and astrophysical quantities
Residual uncertainties on the measured value of the top mass, $M_t = 173.2 \pm 0.87$ GeV [18], can have a significant impact on the results of SUSY analyses [20]. Therefore, in addition to the model parameters described above, we include $M_t$ as a nuisance parameter in our scans. We adopt a flat prior for this quantity, and include a Gaussian likelihood function on $M_t$, with mean and standard deviation chosen according to the experimental measurement above (see table 2). Residual uncertainties in other SM quantities, such as the bottom mass and the strong and electromagnetic coupling constants, can also affect the results [20]; however, this effect is subdominant compared to the impact of the top mass. Therefore, in order to keep the dimensionality of the scanned parameter space as small as possible and to ensure statistical convergence of our results, we fix these three SM quantities to their experimentally measured values. In previous analyses of lower-dimensional SUSY models, we included additional nuisance parameters in the analysis to account for residual (potentially large) uncertainties in astrophysical and nuclear physics quantities entering the likelihood for direct detection searches. A detailed discussion of the relevant uncertainties, and of our parameterisation of the local astrophysical and nuclear physics, was given in ref. [22]. As shown explicitly in refs. [22,42], the effect of marginalising or maximising over these uncertainties is relatively small, and the main conclusions remain qualitatively unchanged when the corresponding nuisance parameters are excluded from the scans. We therefore choose to fix these quantities in this analysis, again for the sake of limiting the dimensionality of our parameter space.

The relevant astrophysical quantities are the local dark matter density, $\rho_{\rm loc}$, and three quantities entering the weakly interacting massive particle (WIMP) velocity distribution. Following our previous work [22,39,42], for the WIMP velocity distribution we use the parameterisation given in eq. (3.3) of ref. [22]. The three velocities entering this equation are the escape velocity $v_{\rm esc} = 544$ km/s, the local circular velocity $v_{\rm lsr} = 230$ km/s and the velocity dispersion $v_d = 282$ km/s. In this work, we fix these velocities to the above values, and the local dark matter density to $\rho_{\rm loc} = 0.4$ GeV/cm$^3$, following refs. [22,43].
The most important hadronic uncertainties arise in the computation of the WIMP-proton scattering cross-sections from the SUSY input parameters. The cross-section for spin-independent elastic scattering of neutralinos off atomic nuclei depends on the hadronic matrix elements $f_{T_u}$, $f_{T_d}$ and $f_{T_s}$, which parameterise the contributions of the light quarks to the proton composition, $f_{T_q} \propto \langle N | \bar{q} q | N \rangle$. These matrix elements cannot be measured directly; instead, there are two approaches to calculating them: they can either be computed using lattice QCD, or derived from experimental measurements of the pion-nucleon sigma term, extrapolated to zero momentum exchange (as required for the calculation of $\sigma^{\rm SI}_{\tilde{\chi}^0_1\text{-}p}$) using chiral perturbation theory.
For $f_{T_u}$ and $f_{T_d}$, estimates from the two approaches are in reasonably good agreement, so we use the recent results presented in ref. [44] and fix these quantities to the central values $f_{T_u} = 0.0457 \pm 0.0065$ and $f_{T_d} = 0.0457 \pm 0.0065$ [44]. The strange content of the nucleon is much more uncertain, and different groups have found very different results for the scalar strange-quark matrix element $f_{T_s}$. While there remain strong differences among the results of different groups extracting $f_{T_s}$ from $\pi$-$N$ scattering data using chiral perturbation theory, recent lattice QCD computations of $f_{T_s}$ tend to be in good agreement both with each other and with a recent analysis of pion-nucleon scattering data from the CHAOS group [45]. Therefore, in this work we use a recently determined average of various lattice QCD calculations, $f_{T_s} = 0.043 \pm 0.011$ [46].
The spin-dependent neutralino-proton scattering cross-section depends on the contributions of the light quarks to the total proton spin, $\Delta_u$, $\Delta_d$ and $\Delta_s$. For these quantities, we use results from a lattice QCD computation presented in [47], namely $\Delta_u = 0.787 \pm 0.158$, $\Delta_d = -0.319 \pm 0.066$ and $\Delta_s = -0.02 \pm 0.011$ [47]. As above, we fix all quantities to their central values. These results are in agreement with experimental measurements, with the possible exception of $\Delta_s$, which however gives a sub-dominant contribution to the total cross-section. For a recent discussion of the discrepancy between the two approaches and the impact of the resulting uncertainties on predictions for the detectability of SUSY with direct and indirect detection experiments, see ref. [48].

Experimental constraints
We implement the experimental constraints with a joint likelihood function, whose logarithm takes the following form:
$$\ln \mathcal{L}_{\rm Joint} = \ln \mathcal{L}_{\rm EW} + \ln \mathcal{L}_{B(D)} + \ln \mathcal{L}_{g-2} + \ln \mathcal{L}_{\Omega_\chi h^2} + \ln \mathcal{L}_{\rm DD} + \ln \mathcal{L}_{\rm Higgs} + \ln \mathcal{L}_{\rm SUSY}, \qquad (2.3)$$
where $\mathcal{L}_{\rm EW}$ represents electroweak precision observables, $\mathcal{L}_{B(D)}$ B- and D-physics constraints, $\mathcal{L}_{g-2}$ measurements of the anomalous magnetic moment of the muon, $\mathcal{L}_{\Omega_\chi h^2}$ measurements of the cosmological dark matter relic density, $\mathcal{L}_{\rm DD}$ direct dark matter detection constraints, $\mathcal{L}_{\rm Higgs}$ LHC measurements of the properties of the Higgs boson and $\mathcal{L}_{\rm SUSY}$ ATLAS sparticle searches. We discuss each component in turn. The values used are also summarised in table 2.
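Schematically, the joint likelihood is a sum of independent log-likelihood terms, most of them Gaussian with experimental and theoretical errors added in quadrature. A minimal sketch (the selection of observables and the $m_W$ values below are illustrative placeholders, not the full table 2):

```python
import numpy as np

def ln_gauss(pred, mu, sigma_exp, tau=0.0):
    """Gaussian log-likelihood term, with experimental and theoretical
    errors added in quadrature (constant normalisation dropped)."""
    return -0.5 * (pred - mu) ** 2 / (sigma_exp ** 2 + tau ** 2)

def ln_joint(pred):
    """Sum of independent log-likelihood components, as in eq. (2.3).
    `pred` maps observable names to model predictions; the three entries
    below stand in for the full set of constraints."""
    terms = {                  # name: (mean, exp. error, theory error)
        "mw":       (80.385, 0.015, 0.0),       # illustrative values
        "omega_h2": (0.1186, 0.0031, 0.012),    # Planck relic density
        "bs_mumu":  (3.2e-9, 1.5e-9, 0.38e-9),  # LHCb BR(Bs -> mu mu)
    }
    return sum(ln_gauss(pred[k], *v) for k, v in terms.items())

best = {"mw": 80.385, "omega_h2": 0.1186, "bs_mumu": 3.2e-9}
assert ln_joint(best) == 0.0   # maximised at the central values
```

Limit-type constraints (direct detection, sparticle searches) enter as asymmetric terms rather than Gaussians, but they add to the same log-likelihood sum.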

Electroweak precision observables
Constraints on several observables obtained from Z-pole measurements at LEP [49] are included: the effective electroweak mixing angle for leptons $\sin^2\theta_{\rm eff}$, the total width of the Z boson $\Gamma_Z$, the hadronic pole cross-section $\sigma^0_{\rm had}$, and the decay width ratios $R^0_l$, $R^0_b$ and $R^0_c$. We do not include the constraints on the asymmetry parameters, since we found that the supersymmetric contributions to their values are very small and well below the experimental errors; moreover, due to the strong correlation between these parameters and $\sin^2\theta_{\rm eff}$, their inclusion would not qualitatively change the profile likelihood contours. In addition, we use the LEP measurement of the mass of the W boson, $m_W$ [49]. We apply a Gaussian likelihood for all of these quantities, with means and standard deviations as in table 2.

B and D physics constraints
Several B- and D-physics constraints are applied with a Gaussian likelihood, as summarised in table 2. We include a number of results obtained by the Heavy Flavor Averaging Group: the branching fraction BR($B \to X_s \gamma$), the ratio of the measured branching fraction of the decay $B_u \to \tau\nu$ to its SM prediction, and the branching fraction BR($D_s \to \tau\nu$) [50]. Additionally, we include the ratio of the measured $B^0_s$-$\bar{B}^0_s$ oscillation frequency to its SM value, $R_{\Delta M_{B_s}} = 1.04 \pm 0.11$ [51,52]. We also include the constraint on the integrated forward-backward asymmetry $A_{FB}(B \to K^* \mu^+ \mu^-)$ in the bin $q^2 \in [1, 6]$ GeV$^2$, which has been shown to have a powerful impact on simple SUSY models [53].
In addition, we include the latest measurement of the rare decay $B_s \to \mu^+\mu^-$ from the LHCb experiment at the LHC. Using a combination of 1.0 fb$^{-1}$ of data at $\sqrt{s} = 7$ TeV and 1.1 fb$^{-1}$ at $\sqrt{s} = 8$ TeV, collected in 2011 and 2012, the LHCb collaboration reported an excess of decay events with respect to the background expectation, leading to the value BR($B_s \to \mu^+\mu^-$) $= (3.2^{+1.5}_{-1.2}) \times 10^{-9}$ with a 3.5σ signal significance [54]. We apply this constraint as a Gaussian likelihood function with a conservative (symmetric) experimental uncertainty of $1.5 \times 10^{-9}$ and a theoretical error of $0.38 \times 10^{-9}$ [57].

Cosmological relic abundance
We include the Planck constraint on the dark matter relic abundance in our analysis. When assuming that the neutralino makes up all of the dark matter in the universe, we apply the result from Planck temperature and lensing data, $\Omega_\chi h^2 = 0.1186 \pm 0.0031$ [61], as a Gaussian likelihood. We also add a (fixed) theoretical uncertainty, $\tau = 0.012$, in quadrature, to account for the numerical uncertainties entering the calculation of the relic density from the SUSY parameters.
When we allow for the possibility that neutralinos are a sub-dominant dark matter component, the Planck relic density measurement is instead applied as an upper limit. The effective likelihood for this case is derived in the appendix of [62]; it depends on an irrelevant normalization constant $\mathcal{L}_0$, on the quantity $r_\star \equiv \mu_{\rm Planck}/\sigma_{\rm Planck}$, and on $\Omega_\chi h^2$, the predicted relic density of neutralinos as a function of the model parameters.
When neutralinos are not the only constituent of dark matter, the rate of events in a direct detection experiment is proportionally smaller, as the local neutralino density, $\rho_\chi$, is now smaller than the total local dark matter density, $\rho_{\rm DM}$. The suppression is given by the factor $\xi \equiv \rho_\chi/\rho_{\rm DM}$. Following [63], we assume that the ratio of the local neutralino and total dark matter densities is equal to that of the cosmic abundances, and thus adopt the scaling Ansatz
$$\xi \equiv \rho_\chi/\rho_{\rm DM} = \Omega_\chi/\Omega_{\rm DM}. \qquad (2.5)$$
For $\Omega_{\rm DM}$ we adopt the central value measured by Planck, $\Omega_{\rm DM} h^2 = 0.1186$ [61].
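A minimal sketch of the scaling Ansatz (our illustration; the cap at $\xi = 1$ is our addition for points whose predicted abundance would exceed the measured one):

```python
def xi(omega_chi_h2, omega_dm_h2=0.1186):
    """Scaling Ansatz of eq. (2.5): the local neutralino fraction tracks
    the cosmic one. The cap at 1 (our choice) handles over-abundant points."""
    return min(omega_chi_h2 / omega_dm_h2, 1.0)

# Direct detection event rates scale linearly with the local neutralino
# density: a neutralino with half the Planck abundance gives half the rate.
print(xi(0.0593))   # ~0.5
```

This factor multiplies the predicted event rate in the direct detection likelihood whenever the neutralino is allowed to be a subdominant dark matter component.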

Direct detection constraints
We include the latest constraints from the XENON100 direct detection experiment, obtained from 224.6 live days of data and a 34 kg fiducial volume [64]. The data set contained two candidate WIMP scattering events inside the signal region, compatible with the expected number of background events, $b = 1.0 \pm 0.2$. The resulting XENON100 exclusion limit currently places tight constraints in the plane of WIMP mass $m_\chi$ vs. spin-independent cross-section $\sigma^{\rm SI}_{\tilde{\chi}^0_1\text{-}p}$, as well as in the ($m_\chi$, $\sigma^{\rm SD}_{\tilde{\chi}^0_1\text{-}n}$) plane, and also places competitive constraints on $\sigma^{\rm SD}_{\tilde{\chi}^0_1\text{-}p}$ as a function of the WIMP mass [64,65]. We use an approximate XENON100 likelihood function to incorporate these data in our analysis; for a detailed description we refer the reader to refs. [22,39].
In previous studies of the cMSSM [39,42] and the NUHM [39] we neglected the contribution of spin-dependent neutralino-nucleon scattering to the total number of events, since in these constrained models this contribution was always subdominant compared to the spin-independent event rate, and in the favoured regions the number of events from spin-dependent scattering was much smaller than 1. However, in models like the MSSM-15 considered here, there are regions of parameter space in which the spin-dependent scattering event rate can exceed the spin-independent contribution. Therefore, we now include the spin-dependent contribution to the event rate, and calculate the constraints using the total rate $R_{\rm tot} = R_{\rm SD} + R_{\rm SI}$. For the axial-vector structure functions entering the spin-dependent differential WIMP-nucleus cross-section we use the results of ref. [66], as recommended by the XENON100 collaboration [65].
While this work was being finalised, the LUX collaboration reported results from a search for WIMPs based on 85.3 live days of data and a 118 kg fiducial volume [67]. No significant excess above the background expectation was observed, and new limits on the WIMP properties were derived. The resulting limit on the spin-independent WIMP-proton interaction improves on the XENON100 limits used here by a factor of $\sim 2$ for WIMP masses $m_\chi \gtrsim 50$ GeV, and by a larger factor for lighter WIMPs. We do not implement the LUX results in this paper, but note that their impact is comparatively small, given the many orders of magnitude spanned by the predictions of the MSSM-15 in the relevant spin-independent cross-section vs. mass plane (see figure 7, upper left panel).

Anomalous magnetic moment of the muon
The experimentally measured value of the anomalous magnetic moment of the muon [68], $a_\mu \equiv (g-2)/2$, shows a 3.6σ discrepancy with the value predicted in the SM. A sizeable supersymmetric contribution, $\delta a^{\rm SUSY}_\mu = (28.7 \pm 8.2) \times 10^{-9}$, is therefore required to explain the discrepancy (experimental and theoretical errors have been added in quadrature). However, there remain strong theoretical uncertainties in the computation of the SM value of the muon anomalous magnetic moment, most importantly in the hadronic loop contributions. Additionally, the discrepancy between the experimental measurement and the SM value is reduced to 2.4σ when relying on $\tau$ data instead of $e^+e^-$ data [68].
In previous global fit analyses of constrained MSSM scenarios, such as the cMSSM [39,42] and the NUHM [39], it was found that the constraint on g − 2 can play a dominant role in driving the profile likelihood results. In order to evaluate the dependence of our results on this somewhat controversial constraint, we repeat our analysis after excluding the g − 2 constraint from the likelihood.

Higgs properties
Both ATLAS and CMS have recently reported new results for the measurement of the mass of the Higgs boson. CMS reported a measured value of m_h = 125.8 ± 0.4 ± 0.4 GeV, where the first error is statistical and the second error is systematic [71]. This result was derived from a combination of 5.1 fb−1 of data at √s = 7 TeV collision energy and 12.2 fb−1 of data at √s = 8 TeV collision energy. The ATLAS collaboration found a value m_h = 125.5 ± 0.2 +0.5 −0.6 GeV, derived from a combination of 4.8 fb−1 of √s = 7 TeV data and 20.7 fb−1 of √s = 8 TeV data [72]. We use a weighted average of the ATLAS and CMS measurements, resulting in m_h = 125.66 ± 0.41 GeV. We add a theoretical error of 2 GeV [73] in quadrature.
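The quoted combination can be reproduced with a standard inverse-variance weighted average; the symmetrisation of the asymmetric ATLAS systematic error to ±0.55 GeV below is a simplifying assumption of this sketch.

```python
from math import sqrt

def weighted_average(values, sigmas):
    """Inverse-variance weighted average and its uncertainty."""
    weights = [1.0 / s**2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return mean, 1.0 / sqrt(sum(weights))

# CMS: 125.8 +- 0.4 (stat) +- 0.4 (syst)
sigma_cms = sqrt(0.4**2 + 0.4**2)
# ATLAS: 125.5 +- 0.2 (stat) +0.5/-0.6 (syst), symmetrised here to +-0.55
sigma_atlas = sqrt(0.2**2 + 0.55**2)

mh, sigma_mh = weighted_average([125.8, 125.5], [sigma_cms, sigma_atlas])
# add the 2 GeV theoretical error in quadrature, as done in the text
sigma_tot = sqrt(sigma_mh**2 + 2.0**2)
```

This recovers m_h ≈ 125.66 ± 0.41 GeV before the theory error, matching the combination used in the fit.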
The observation of the Higgs boson in several channels has allowed the measurement of some of its couplings with relatively good accuracy. The standard way to infer the couplings
of the produced Higgs boson is to consider their deviation from the SM expectation. For a given channel this is parametrized through the signal strength parameter µ. For the h → XX channel one has

µ_XX = [σ(pp → h) × BR(h → XX)] / [σ(pp → h) × BR(h → XX)]_SM .

This quantity is compared directly with the experimental measurements. Note that, from here on, what we call the "Higgs" in the MSSM context refers to the lightest CP-even Higgs.
The channels considered in the likelihood function are listed in table 2. We apply the experimental constraints obtained by the CMS collaboration. For the γγ [74], W + W − [75], ZZ [76] and τ + τ − [77] decay modes the constraints were derived from datasets corresponding to an integrated luminosity of ∼ 5 fb −1 at √ s = 7 TeV collision energy and ∼ 19 fb −1 at √ s = 8 TeV collision energy. The constraint on the h → bb [78] decay channel was derived from ∼ 5 fb −1 integrated luminosity at √ s = 7 TeV and ∼ 12 fb −1 integrated luminosity at √ s = 8 TeV collision energy.

ATLAS SUSY searches
The applied SUSY search constraints come from bounds on SUSY masses from LEPII and the Tevatron, for which we apply the likelihood as outlined in [19], and from LHC searches for 0 leptons and multi-jets with missing transverse energy and for 3 leptons with missing transverse energy in the ATLAS experiment, both with data recorded at √s = 7 TeV and a total integrated luminosity of 4.7 fb−1 [79,80]. Details about the construction and validation of the LHC likelihood function associated with these two channels are given in appendix A and appendix B, respectively. The likelihood implementation has been done in the ROOT framework through the RooFit/RooStats packages.
For each likelihood evaluation, we simulate the SUSY kinematical distributions with 10^4 events using PYTHIA 6.4 [81] and the ATLAS MC09 tune [82]. The parton distribution functions are obtained from the CTEQ6L1 set [83]. The SUSY cross-sections for gluino and squark production are normalized by NLO K-factors in the strong coupling constant, including the resummation of soft gluon emission at next-to-leading-logarithmic (NLO+NLL) accuracy with NLL-fast 1.2 [84][85][86][87], and outside the available NLL-fast grid by PROSPINO2 [88,89] at NLO. For electroweakino production we use PROSPINO2, which provides an NLO calculation. The detector simulation employed is DELPHES3 [90]. Details about the efficiency validation are provided in appendix B.

Results
In the following sections we present the combined impact of all present day constraints shown in table 2 on the MSSM-15. In addition to the analysis including all available data, we also show results for two other cases. In particular, we present results for an analysis excluding the g − 2 constraint, in order to evaluate the impact of this controversial measurement on our profile likelihood results. In a third analysis we relax the requirement
that the lightest neutralino is the only dark matter component, by adopting the Planck measurement of the dark matter relic density as an upper limit. We start by presenting 1D and 2D profile likelihood results for all three cases excluding LHC constraints on the sparticle masses and the Higgs production cross-sections (note however that the LHC measurement of the Higgs mass, m h , is included in all of the results). Since this paper presents the first high-resolution profile likelihood analysis of the MSSM-15, we discuss in detail the favoured model phenomenology, in particular the different neutralino compositions that can be achieved throughout the parameter space and the dark matter detection prospects. In the final section we present the impact of constraints from LHC SUSY searches and Higgs signal strengths measurements on this parameter space, obtained with a simplified statistical treatment. A full profile likelihood analysis of the MSSM-15 including all LHC constraints is beyond the scope of this paper, and is the focus of a dedicated work [91].
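For readers unfamiliar with the statistical machinery, a 1D profile likelihood is obtained from a scan by maximising the likelihood over all parameters other than the one of interest, bin by bin in that parameter. A minimal sketch of this binned-maximum construction (illustrative only, not the actual scanning pipeline used in the paper):

```python
import numpy as np

def profile_1d(param, loglike, bins=50, lo=None, hi=None):
    """1D profile likelihood from a set of scanned points.

    For each bin in `param`, keep the maximum log-likelihood attained by any
    point falling in that bin (i.e. profile out all other parameters).
    Returns bin centres and the profile likelihood ratio L/L_max.
    """
    lo = param.min() if lo is None else lo
    hi = param.max() if hi is None else hi
    edges = np.linspace(lo, hi, bins + 1)
    idx = np.clip(np.digitize(param, edges) - 1, 0, bins - 1)
    prof = np.full(bins, -np.inf)
    np.maximum.at(prof, idx, loglike)   # bin-wise maximum of the log-likelihood
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, np.exp(prof - loglike.max())

# toy scan: a Gaussian likelihood in two parameters, profiled over the second
rng = np.random.default_rng(1)
x, y = rng.uniform(-5, 5, 200000), rng.uniform(-5, 5, 200000)
lnl = -0.5 * (x**2 + y**2)
centres, pl = profile_1d(x, lnl, bins=40)
```

In the toy example the profile over `y` recovers the expected Gaussian shape in `x`, peaking at unity near the best-fit point. The convergence statements in the paper amount to this construction being stable under further sampling.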

Global fits from all data and excluding g − 2
We begin by showing in figures 2-4 the combined impact of the present day constraints shown in table 2, with the exception of the LHC constraints on SUSY and the Higgs production cross-sections (which are discussed separately below). We compare results for the analysis including all data (red) and for the analysis excluding the g − 2 constraint (purple); the encircled crosses show the corresponding best-fit points. For observable quantities, the applied likelihood function is shown in black. Figure 2 shows the 1D profile likelihood results for the 15 input model parameters and the top mass, figure 3 shows results for the relevant observables and figure 4 shows results for some SUSY quantities of interest. From here on we will refer to the lightest neutralino as the "neutralino", for brevity. Figure 2 shows that most of the MSSM-15 model parameters are relatively weakly constrained. We start by discussing the 1D profile likelihood (PL) functions for the parameters entering at tree level into the electroweakino (EWK) sector, namely M_1, M_2, µ and tan β.

Profile likelihood for the MSSM-15 parameters
The bino mass |M_1| shows a clear preference for relatively low values, peaking at ∼ 50 GeV. In this region, the neutralino is bino-like and annihilates mainly into a pair of fermions through Z/h-funnels in the early universe. When |M_1| takes values of O(100 GeV) the neutralino acquires some mixing with higgsinos, so that its relic density is reduced to the experimentally measured value by co-annihilations with the second lightest neutralino and the lightest chargino. Notice that part of the low-m_χ̃₁⁰ region is in tension with the muon g − 2 constraint and, to a lesser extent, with several flavour physics observables, such as BR(B → X_s γ).

The wino mass M_2 is only mildly constrained from below by the LEP constraint on the chargino mass [92], and its PL peaks at values around a few hundred GeV. Above this value, we observe a moderate decrease of the PL towards the boundary of the prior. The shape of the 1D PL for M_2 is mostly driven by the muon g − 2 constraint, whose MSSM contributions are dominated by chargino-sneutrino and neutralino-smuon loop diagrams. One can write the chargino-sneutrino contribution, which is often the dominant one, as follows [93]:

δa_µ(W̃, H̃, ν̃_µ) ∼ 15 × 10^−9 (tan β / 10) ((100 GeV)² / (M_2 µ)) f_C ,    (3.1)

where f_C is a loop function with maximum value f_C = 1/2 when the masses in the loop are degenerate. When M_2 and µ are of O(100 GeV) and tan β of O(10), the contribution becomes O(10^−9), which can explain the muon g − 2 "anomaly" provided that sgn(M_2 µ) > 0 and the smuon/sneutrino soft masses, which we assume to be universal, are O(100 GeV). The degree of decoupling allowed depends on the value of tan β (note that values above 60 are forbidden by imposing the perturbativity of the bottom Yukawa coupling). While in general winos with masses ≳ 1 TeV are decoupled, the neutralino-smuon contribution can still give a large contribution to g − 2 and thus give a good fit. Since we have set M_2 > 0, µ > 0 is favoured. Notice that the best-fit point fulfils this condition.
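As a rough numerical cross-check of the scaling just discussed, one can evaluate the approximate chargino-sneutrino contribution for O(100 GeV) masses and tan β = 10. Both the 15 × 10^−9 normalisation and the functional form below are assumptions of this sketch, meant only to reproduce the order of magnitude quoted in the text.

```python
def f_C_degenerate():
    # loop function at its maximum value 1/2, reached for degenerate loop masses
    return 0.5

def delta_a_mu_chargino(M2, mu, tan_beta, f_C=f_C_degenerate()):
    """Approximate chargino-sneutrino contribution to (g-2)_mu / 2.

    Assumed rough scaling: 15e-9 (tan_beta/10) (100 GeV)^2/(M2 mu) f_C,
    with masses in GeV. Normalisation is an assumption of this sketch.
    """
    return 15e-9 * (tan_beta / 10.0) * (100.0**2 / (M2 * mu)) * f_C

# degenerate O(100 GeV) electroweakinos with tan beta = 10: O(1e-9), as stated
val = delta_a_mu_chargino(M2=100.0, mu=100.0, tan_beta=10.0)
```

The result is indeed O(10^−9) for this benchmark, so fully explaining the (28.7 ± 8.2) × 10^−9 anomaly requires somewhat lighter sparticles or larger tan β, consistent with the preferences described in the text.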
It is worth mentioning that the wino mass can also play an important role in new physics contributions to the Wilson coefficient C_7, which is a fundamental quantity in the most relevant flavour observables entering our analysis [94] (the Wilson coefficient C_9 is also potentially important, though within the MSSM its role is diluted [95]).

The 1D PL for the higgsino mass |µ| shows an almost symmetrical distribution about zero. Like the wino mass, |µ| is constrained from below by the LEP constraint on the chargino mass [92]. Relatively small values |µ| ≲ 1 TeV are strongly favoured, while large values of |µ| are disfavoured. This is because the SM predictions for flavour physics observables such as ∆_0− and A_FB(B → K*µ+µ−) are discrepant with the experimental measurements at the ∼ 1 − 2σ level, and thus require rather large new physics contributions to C_7. Note that, in contrast, for BR(B → X_s γ) the discrepancy is below 1σ. For intermediate/large tan β values the leading SUSY corrections scale as 1/m²_χ, so that light higgsinos are preferred. It is worth noticing that the PL drops much faster for large values of the higgsino mass than for the wino mass. This is due to the fact that the higgsino mass enters in both the chargino-sneutrino and the neutralino-smuon diagrams that contribute to δa_µ. Moreover, one can see that there is not a full decoupling at O(1 TeV) higgsino masses for the negative branch, due to an enhancement of the pure-bino contribution at large higgsino masses as a result of a large left-right mixing in the smuon mass matrix. One can write this contribution as follows:

δa_µ(B̃, µ̃_L, µ̃_R) ∼ (g_1² / 16π²) (m_µ² M_1 µ tan β / (m²_µ̃L m²_µ̃R)) f_N ,
where f_N is a loop function with maximum value f_N = 1/6 when the masses in the loop are degenerate. A sizeable contribution can be achieved provided that sgn(M_1 µ) > 0. While the chargino-sneutrino contribution to g − 2 in eq. (3.1) prefers µ > 0, large positive µ tends to lead to tachyons in the Higgs sector, due to the fact that the sign of µ enters into the RGE of the bilinear soft breaking term, resulting in a subtle preference for negative values of µ. Recall that the boundary conditions are applied at the scale √(m_t̃₁ m_t̃₂) and there is a running down to m_Z, which is where the SUSY threshold corrections are applied. The sign of the associated β function tends to be positive and large unless the pseudoscalar mass m_A is small and/or the gaugino mass M_3 > 0 is large. Note that the 1D PL for negative µ is shifted to slightly larger absolute values compared to the positive branch, as very small values of |µ| for µ < 0 would lead to a sizeable negative contribution to g − 2 from eq. (3.1).
The 1D PL for tan β is suppressed below ∼ 10, mainly due to the Higgs mass measurement, since at tree level m_h ≤ m_Z |cos 2β|. Values close to the upper prior boundary are also slightly disfavoured, as they are close to the non-perturbativity limit of the bottom Yukawa coupling. This effect is stronger when the SUSY threshold corrections to the bottom mass are large, which occurs mainly for a low-mass SUSY spectrum.
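The tree-level bound m_h ≤ m_Z |cos 2β| saturates quickly with tan β, which is why values below ∼ 10 are penalised: the quick numerical check below shows the bound at small and large tan β, and that it never reaches 125 GeV, so loop corrections are always needed.

```python
from math import atan, cos

MZ = 91.19  # GeV

def mh_tree_max(tan_beta):
    """Tree-level upper bound on the lightest Higgs mass: m_Z |cos 2beta|."""
    two_beta = 2.0 * atan(tan_beta)
    return MZ * abs(cos(two_beta))

# the bound is far below 125 GeV at small tan beta, and saturates near m_Z
low = mh_tree_max(2.0)    # cos 2beta = (1 - 4)/(1 + 4) = -0.6 -> ~54.7 GeV
high = mh_tree_max(50.0)  # ~91.1 GeV, just below m_Z
```

Since even the saturated bound stays below m_Z, the observed ∼ 125 GeV Higgs always requires large radiative corrections on top of a large-tan β tree-level term.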
The 1D PL for the pseudoscalar Higgs mass, m_A, is severely suppressed for values ≲ 1 TeV, mainly due to the Higgs mass measurement, but also due to the LHCb measurement of BR(B_s → µ+µ−), which is in good agreement with the SM expectation. Since the SUSY contributions enter as ∝ tan⁶β/m⁴_A and larger values of tan β are favoured, heavier pseudoscalar masses are preferred. This implies that m_A ≫ m_Z, so that the SUSY decoupling regime [96], in which the lightest Higgs has SM-like couplings, is favoured in the model.

We now turn to the discussion of the sfermion sector. The 1D PL for the first and second generation slepton mass, m_L, shows a clear preference for relatively low values when g − 2 is included, as follows from the discussion above. Such a preference is almost entirely driven by the g − 2 constraint, as is clear from the comparison with the corresponding PL for the analysis excluding the constraint on the anomalous magnetic moment of the muon. Both of the third generation slepton soft masses m_L3 and m_E3 remain essentially unconstrained, as their PL functions are almost flat. The slight preference for low values is due to the impact of relatively light staus on the electroweak precision observables (EWPOs). Similarly, relatively small m_Q, m_U3 and m_D3 are somewhat preferred, as low values of these quantities lead to greater freedom to satisfy the constraints on several flavour observables. In contrast, the 1D PL for m_Q3 is almost flat up to ∼ 6 TeV, as in general TeV-scale values of m_Q3 are required to achieve a good fit to the constraint on the lightest Higgs mass.
The top trilinear coupling A_t shows a PL symmetric around 0. We have checked that the peaks correspond to the maximal mixing scenario, where |X_t| ≡ |A_t − µ cot β| ≃ √6 M_SUSY (with M_SUSY the average stop mass scale). In the maximal mixing region, m_h ∼ 125 GeV can be achieved even for relatively small stop masses, which in general are preferred by the SM precision observables.
Finally, the gluino mass |M_3| is constrained from below by the Tevatron lower limit m_gluino > 289 GeV [97]. Above this, the distribution is nearly flat with a slight suppression
near the prior boundary. Very large gluino masses are expected to be disfavoured because they tend to induce tachyonic stau and sbottom masses, through the RGE running of the trilinear coupling A_0. Notice that the gluino plays a role both in higher-loop corrections to flavour physics observables and at the level of the RGEs of the SUSY parameters.
Exclusion of the muon g−2 constraint has a strong impact on the electroweakino sector, in particular on the 1D PL for the wino mass M 2 and the universal slepton mass m L , both of which enter in the contribution from the chargino-sneutrino loop in eq. (3.1). The 1D PL for the bino and higgsino masses M 1 and µ are also affected, albeit to a lesser extent. Dropping the g−2 constraint makes the data more compatible with heavier winos, and hence heavy binos are less disfavoured in this scenario. Additionally, upon exclusion of the g −2 constraint the positive branch of µ is no longer favoured, and indeed negative values of µ are preferred.
The most remarkable difference in the PL for the sfermion soft masses occurs for the mass parameters related to the stop spectrum. Namely, the 1D PL for m_Q3 and m_U3 peak at small values, while large masses are significantly suppressed. As mentioned above, the Wilson coefficient C_7 plays a fundamental role in the most relevant flavour observables entering our analysis, most importantly the isospin asymmetry ∆_0−. As can be seen in figure 3 below, after exclusion of the g − 2 constraint the 1D PL for ∆_0− peaks much closer to the experimentally measured value. Removing the g − 2 constraint from the analysis leads to greater freedom to satisfy the experimental constraint on ∆_0−. In particular, higgsino-stop loops can lead to a sizeable contribution to C_7 [98],

δC_7 ∝ (m_t² / m²_t̃) (µ A_t tan β / m²_t̃) f_7 ,

where f_7 is a loop function. For small m_Q3, medium m_U3 and sizeable tan β, δC_7 becomes large. Additionally, for sgn(µ A_t) < 0, the sign of this loop contribution is opposite to the SM contribution [98], and values of ∆_0− in good agreement with the experimental constraint can be achieved. The requirement that sgn(µ A_t) < 0 also explains the preference for the peak in the positive branch of A_t, which is clearly favoured with respect to negative values. We point out that C_7 also enters in a range of other flavour observables, in particular BR(B → X_s γ). In contrast to the isospin asymmetry, the measurement of this quantity is in excellent agreement with the SM predictions, so that large SUSY contributions to C_7 are disfavoured by this constraint. Note that we use the SusyBSG code for the computation of BR(B → X_s γ), while SuperIso is used to compute ∆_0−. We caution that, for some fine-tuned points, the simultaneous achievement of a good fit to BR(B → X_s γ) and ∆_0− (and other flavour observables) can be a numerical effect, related to differences in the numerical implementation of the C_7 calculation in these codes.

Profile likelihood for observable quantities
The 1D profile likelihoods for the observables, shown in figure 3, generally agree well with the likelihood functions imposed on these quantities (shown in black). In contrast to the tension that is observed in simpler SUSY models, such as the cMSSM and the NUHM, the experimentally measured values of δa_µ^SUSY and BR(B → X_s γ) can be achieved simultaneously in the MSSM-15. The Planck measurement of the relic density is also well fit.
The EWPOs most sensitive to SUSY effects within the MSSM-15 are m_W, sin²θ_eff and Γ_Z, with the most important role played by the stop/sbottom sector and, to a lesser extent, by the chargino and neutralino sectors. Their dependence on the top mass is also significant [34].

JHEP09(2014)081
The SM predictions for m_W and Γ_Z are marginally (at the 1σ level) below the experimental values, assuming the current central value of the top mass. Since SUSY contributions are constructive, light squarks and/or light EWKinos are required to fit the experimental values of m_W and Γ_Z, as also favoured by other experimental constraints. This leads to a good match between the 1D PL and the likelihood function for these quantities. The sin²θ_eff case is different, since SUSY corrections are destructive for this observable and the SM prediction is compatible at the 1σ level with the measurements. Therefore a low SUSY spectrum pushes the predictions below the measurement, which is what we observe in the PL. The other EWPOs considered in our analysis, namely σ⁰_had, R_l, R_b and R_c, are mostly insensitive to SUSY effects, so that their PLs peak at the values predicted in the SM.
The relevant flavour observables are generally well fit. A notable exception is the isospin asymmetry ∆ 0− , which requires very large SUSY contributions (see ref. [39] for a discussion of the discrepancy between the experimental measurement of ∆ 0− and the values favoured in simple SUSY models). Note however that, for the reasons pointed out above, a good fit to this quantity can be obtained in the analysis excluding the g − 2 constraint. As expected, the 1D PL for δa SUSY µ becomes essentially flat when this constraint is dropped from the analysis.

Profile likelihood for the SUSY mass spectrum
The 1D PL for several SUSY masses are displayed in figure 4. The mass of the lightest Higgs boson measured by the LHC can easily be accommodated in the MSSM-15. This is a reflection of the large number of degrees of freedom of the model, which allow one to maximize the tree-level contribution to the Higgs mass by pushing tan β to large values, while at the same time maximizing the leading 1-loop corrections either via heavy stops or maximal stop mixing.
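The interplay just described can be illustrated with the well-known leading one-loop stop approximation to the lightest Higgs mass. The fits themselves use full spectrum calculators; the leading-log formula below is only indicative, and the benchmark numbers are illustrative choices.

```python
from math import atan, cos, log, pi, sqrt

MT, MZ, V = 173.0, 91.19, 246.0   # top mass, Z mass, Higgs vev in GeV

def mh_one_loop(tan_beta, m_stop, Xt):
    """Leading one-loop stop correction to the lightest Higgs mass (rough).

    m_h^2 ~ m_Z^2 cos^2(2b)
            + (3 m_t^4)/(4 pi^2 v^2) [ln(M_S^2/m_t^2) + x (1 - x/12)],
    with x = Xt^2 / M_S^2. Leading-log sketch only, not a precision formula.
    """
    tree = MZ**2 * cos(2.0 * atan(tan_beta))**2
    x = Xt**2 / m_stop**2
    loop = (3.0 * MT**4) / (4.0 * pi**2 * V**2) * (
        log(m_stop**2 / MT**2) + x * (1.0 - x / 12.0))
    return sqrt(tree + loop)

# maximal mixing |Xt| = sqrt(6) M_S lifts m_h to ~125 GeV at moderate stop mass
mh_max = mh_one_loop(tan_beta=10.0, m_stop=1500.0, Xt=sqrt(6.0) * 1500.0)
mh_no_mix = mh_one_loop(tan_beta=10.0, m_stop=1500.0, Xt=0.0)
```

Even this crude formula shows the two routes discussed in the text: at fixed stop mass, maximal mixing adds several GeV relative to zero mixing, while at zero mixing the stop mass itself must be pushed up.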
The mass of the neutralino LSP is shown in the top-central panel. For the analysis including all data, the neutralino mass is constrained to m_χ̃₁⁰ < 1.5 TeV at the 99% confidence level. In contrast, the 1D PL for the analysis excluding the g − 2 constraint reaches significantly larger masses, m_χ̃₁⁰ ≤ 3.0 TeV. In both cases, the PL peaks at low values, where the neutralino is bino-like, with an almost identical best fit at m_χ̃₁⁰ ≈ 60 GeV (see table 3 below). The bump in the neutralino PL around ∼ 1 TeV corresponds to a higgsino-like neutralino (see section 3.5 below for further details), and it is more pronounced for the case without g − 2, as expected from the above discussion. In the latter case, the small bump at m_χ̃₁⁰ ∼ 2 TeV in the PL corresponds to a wino-like neutralino. The 1D PL for the mass of the lightest chargino stretches to large values, up to the boundary around ∼ 5 TeV imposed by the prior on the input parameters. Nevertheless, similarly to what was observed for the neutralino mass, small chargino masses are favoured. In contrast, the 1D PL for the average squark mass, the lightest stop mass and the gluino mass remain almost unconstrained. The shape of the 1D PL for these quantities is a direct consequence of the 1D PL for the corresponding soft masses and M_3, respectively (discussed above).

Relic density as an upper limit
We now discuss the case where the Planck measurement of the relic density is applied as an upper limit, i.e. where the cosmological dark matter (DM) consists of multiple components (one of which is the neutralino LSP). While the 1D PL for the observables are slightly broader than for the analysis implementing the Planck measurement as a constraint, most of the 1D PL are qualitatively very similar for the two cases. Therefore, we focus on discussing the results for a few selected quantities that illustrate the phenomenological differences between these two analyses.
In figure 5 we show the 1D PL for several quantities of interest, comparing the analysis in which the Planck constraint is applied as an upper limit (blue) with that in which it is applied as a constraint (red). With the exception of the parameters related to the electroweakino sector, the differences with respect to the single-component DM scenario are small. The top row of figure 5 shows results for M_1, M_2 and µ. The bino mass M_1 now becomes essentially unconstrained over the entire prior range. The relaxation of the DM relic abundance constraint allows higher neutralino annihilation rates, so that light wino-like and higgsino-like neutralinos are now allowed, leading to heavier binos on average. Additionally, mixed neutralino states (bino-higgsino, wino-higgsino and bino-wino-higgsino, so-called well-tempered neutralinos [101]) are now allowed, as shown explicitly in section 3.5 below.
The 1D PL for the wino mass, M_2, is almost identical for the two cases shown. Differences in the 1D PL for the higgsino mass, µ, are found in the negative branch, for which larger (more negative) values are now allowed. This is because, for larger bino masses, large values of |µ| help to fit the muon g − 2 constraint (for sgn(M_1 µ) > 0), as discussed in the previous section.
In the bottom row of figure 5 we show the 1D PL for the relic density, the lightest neutralino mass and the lightest chargino mass. As expected, the 1D PL for Ω_χh² differs strongly between the two cases shown. When the Planck constraint is applied as an upper limit, the 1D PL stretches to very small values, almost five orders of magnitude below the measured dark matter relic density. While values well below Ω_χh² ∼ 10⁻³ are somewhat disfavoured, the PL peaks at Ω_χh² ∼ 10⁻³ and is almost flat in the range 10⁻³ < Ω_χh² < 10⁻¹.
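In this multi-component scenario the predicted direct-detection rates are rescaled by the fraction of the local density carried by neutralinos. A minimal sketch of such a scaling ansatz is shown below; the exact form used in the paper is its eq. (2.5), and the reference numbers here are illustrative.

```python
RHO_LOCAL = 0.4     # GeV/cm^3, reference local DM density (illustrative value)
OMEGA_DM_H2 = 0.119 # approximate Planck central value for the relic density

def scaling_xi(omega_chi_h2):
    """Rescaling factor when the neutralino is a subdominant DM component:
    xi = min(1, Omega_chi h^2 / Omega_DM h^2). Sketch of the scaling ansatz
    described in the text (see its eq. (2.5) for the form actually used)."""
    return min(1.0, omega_chi_h2 / OMEGA_DM_H2)

def local_density(omega_chi_h2):
    """Effective local neutralino density entering direct-detection rates."""
    return scaling_xi(omega_chi_h2) * RHO_LOCAL

# a neutralino making up 10% of the DM sees its predicted recoil rate,
# and hence the strength of the XENON100 constraint, reduced by a factor 10
xi = scaling_xi(0.1 * OMEGA_DM_H2)
```

This rescaling is why, in the upper-limit analysis, larger scattering cross-sections escape the XENON100 penalty, as discussed for figure 7 below the corresponding plots.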
The 1D PL for the mass of the neutralino LSP and the lightest chargino mass are now confined to significantly lower values than for the analysis requiring that Ω_χ ∼ Ω_DM. The reason is that relatively light winos are allowed in the multi-component dark matter scenario, which makes it easier to fulfil the experimental constraints on a range of SM precision observables, most importantly g − 2, ∆_0− and A_FB(B → K*µ+µ−). The experimentally measured values of these quantities are in disagreement with the SM predictions at the 1 − 3σ level, and, upon relaxing the relic density constraint, play a dominant role in driving the profile likelihood results. In particular, low neutralino and chargino masses can lead to values of ∆_0− and A_FB(B → K*µ+µ−) that are in reasonably good agreement with the observations, while at larger values of m_χ̃₁± and m_χ̃₁⁰ these quantities approach their SM-like values, which are discrepant with the experimental constraints.

Figure 6. Contribution to the best-fit χ² from various observables, before including Higgs properties and LHC SUSY searches data. Once those data sets are added, the χ² values become 1054.32 (all data), 9.44 (without g − 2) and 267.52 (Planck as upper limit). Thus the overall pre-LHC best-fit point becomes ruled out, while the one obtained without g − 2 remains viable. Recall that the non-Gaussian likelihoods are normalised in such a way that −2 ln L = 0 when there are no constraints for that observable.

Best-fit points
In table 3 we show the coordinates of the best-fit points, as well as the best-fit values of several of the observables, for each of our three analyses. For the "All data" and "Planck upper limit" cases, the pre-LHC best-fit values are ruled out once the ATLAS null SUSY searches are added to the likelihood (see section 3.6), as a consequence of their low squark masses, which are excluded by the 0-lepton search. For those two cases, we also show the coordinates of the best-fitting points that survive the inclusion of ATLAS data at the post-processing stage. We do not provide an interpretation of the best-fit χ² value in terms of goodness of fit. This is because our likelihood function receives contributions from experimental limits that are not Gaussian distributed, hence asymptotic distributions for the ensuing χ² that assume Gaussian data do not apply. The determination of the quantitative goodness of fit of our best-fit points would require detailed Monte Carlo realisations of the data sets.

Table 3. Best-fit values of the MSSM-15 input parameters and several observables of interest. For the cases "All data" and "Planck upper limit", we show both the overall pre-LHC best fits in the second and fifth columns (those points are ruled out by ATLAS data) and the best-fitting point surviving the addition of ATLAS constraints and Higgs boson properties data (third and sixth columns). The bottom section gives the corresponding χ² values. Notice that the "pre-LHC" data do include the Higgs mass measurement.
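The Monte Carlo procedure alluded to above can be sketched as follows: draw pseudo-data sets from the fitted model, refit each one, and compare the observed best-fit χ² with the resulting empirical distribution. The toy below replaces the full refitting pipeline with a stand-in χ² generator; everything here is illustrative, not the analysis actually performed.

```python
import random

def empirical_p_value(chi2_obs, simulate_chi2, n_pseudo=2000, seed=7):
    """Goodness-of-fit p-value from Monte Carlo pseudo-experiments.

    simulate_chi2(rng) must draw one pseudo-dataset from the fitted model
    and return its best-fit chi^2; the p-value is the fraction of pseudo-
    experiments with chi^2 >= chi2_obs. Purely illustrative sketch.
    """
    rng = random.Random(seed)
    draws = [simulate_chi2(rng) for _ in range(n_pseudo)]
    return sum(d >= chi2_obs for d in draws) / n_pseudo

# stand-in generator: chi^2 values from a sum of squared Gaussian pulls
def toy_chi2(rng, n_obs_bins=10):
    return sum(rng.gauss(0.0, 1.0)**2 for _ in range(n_obs_bins))

p = empirical_p_value(chi2_obs=12.0, simulate_chi2=toy_chi2)
```

With non-Gaussian likelihood contributions, `simulate_chi2` would have to encode the actual limit-setting procedures, which is exactly why the paper defers such a calibration rather than quoting an asymptotic p-value.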
In figure 6 we display the contribution of each observable to the best-fit χ 2 , for the analysis including all data (red), excluding the g − 2 constraint (purple) and including the Ω χ h 2 measurement as an upper limit (blue). In general, the largest contributions to the best-fit χ 2 result from the same observables for each of the three analyses, namely σ 0 had , BR(B u → τ ν)/BR(B u → τ ν) SM , BR(D s → τ ν) and, to a lesser extent, R 0 l (as already discussed in section 3.1). Another large contribution to the best-fit χ 2 for the analyses including the g − 2 constraint results from the isospin asymmetry ∆ 0− . In contrast, for the analysis excluding the g − 2 constraint, the experimentally measured value of ∆ 0− can be reproduced (see the discussion in section 3.1), leading to a much smaller χ 2 contribution. Largely as a consequence of this difference, the overall χ 2 achieved by the analysis excluding g − 2 is slightly reduced compared to the other two analyses.
Upon post-processing with the LHC data sets, the χ² values of the pre-LHC best-fit points become 1054.32 (all data), 9.44 (without g − 2) and 267.52 (Planck as upper limit). Thus the overall pre-LHC best-fit point becomes ruled out, while the best-fit point obtained from the scans excluding g − 2 remains viable. On the one hand, this is a consequence of the larger best-fit values of the gluino (2.83 TeV) and squark (1.55 TeV) masses for this case, which are the main quantities constrained by the ATLAS 0-lepton search. On the other hand, even though the production cross-section of the lightest chargino and the second lightest neutralino is of O(1 pb), their branching ratios to leptons are only of a few percent, leading to a signal prediction for all the signal regions of the ATLAS 3-lepton search that is compatible with the data at the 1σ level. The characteristics of the best-fit points surviving the inclusion of the LHC data sets are discussed in section 3.6 below.

Implications for direct detection
Within the MSSM the dominant contribution to the spin-independent (SI) scattering amplitude is generally the exchange of the two neutral CP-even Higgs bosons, although in some cases the contributions of squark exchange and of loop corrections are substantial. When m_H < m_h √tan β, the heavy Higgs is usually the dominant one. As we have seen in section 3.1, values of m_A ≲ 1 TeV are disfavoured, which in turn implies that values of m_H ≳ 1 TeV are preferred. Thus we expect the light Higgs exchange to dominate. The SI cross-section for H/h exchange in the SUSY decoupling limit with moderate to large tan β values is ∝ |N_12 − N_11 tan θ_w|² |N_13/14|² f_q² / m⁴_H/h, where θ_w is the electroweak mixing angle, the N_1i represent the neutralino composition and the f_q are the quark-nucleon matrix elements. Therefore, a larger SI cross-section is expected in the "well-tempered" neutralino scenario, i.e. when the neutralino is a bino-higgsino, wino-higgsino or bino-wino-higgsino mixture. In fact, a sizeable SI cross-section is obtained as long as the higgsino fraction is larger than O(0.1).
For the squark exchange, at tree level only the exchange of the u, d and s squarks contributes, though one can still account for heavy quarks in the effective field theory approach provided that m_q̃ ≫ m_χ̃₁⁰ + m_q; otherwise, a one-loop treatment has to be employed [102]. As the expression for the amplitude is lengthy we do not write it here explicitly (for details see for instance [102]). It consists of two parts, one coming from gaugino-higgsino mixing and a second one proportional to sin 2θ_q̃, in which pure binos or mixed bino-winos are involved, where θ_q̃ is the squark mixing angle. Both are proportional to the propagator 1/(m²_q̃ − (m_χ̃₁⁰ + m_q)²).

After this brief review of the anatomy of the tree-level contributions to the SI cross-section, we turn to the discussion of figure 7, which shows the two-dimensional profile likelihood functions in the planes of neutralino mass vs. the cross-section for spin-independent neutralino-proton (left panels), spin-dependent neutralino-proton (central panels) and spin-dependent neutralino-neutron (right panels) scattering. From top to bottom, the panels show the results for the analysis including all data, excluding the g − 2 constraint, and using the Planck relic density measurement as an upper limit. In each panel, the 68%, 95% and 99% confidence regions are shown. In the top and central left-hand panels we also show the current 90% exclusion limits from the XENON100 experiment (red) and the LUX collaboration (blue, not included in the analysis). As described in section 2.5, when applying the Planck constraint on the relic density as an upper limit, the local dark matter density is rescaled with the scaling Ansatz of eq. (2.5). The XENON100 and LUX exclusion limits, which were computed for a fixed local density, therefore do not apply directly in that case.

We start by discussing the 2D profile likelihood results for spin-independent neutralino-proton scattering (left-hand panels).
Multiple modes of high likelihood can be identified. For each of the three analyses we observe a narrow area at m_χ̃₁⁰ ∼ 50 GeV, spanning almost 15 orders of magnitude in σ^SI_χ̃₁⁰-p, that is favoured at the 68% C.L. A second region that is strongly favoured is found at WIMP masses of several hundred GeV, and stretches from cross-sections just below the XENON100 limit down to σ^SI_χ̃₁⁰-p ∼ 10⁻²⁰ pb. Additionally, the 95% region for the analysis excluding the g − 2 constraint also includes a sizeable region at larger neutralino masses, m_χ̃₁⁰ ≳ 1 TeV.

In the (σ^SI_χ̃₁⁰-p, m_χ̃₁⁰) plane, the 68% C.L. region corresponds to a bino-like neutralino LSP. Since the lightest Higgs mass is fixed to ∼ 126 GeV by the LHC measurement, and m_A ≫ m_Z, one would expect σ^SI_χ̃₁⁰-p to be O(10⁻⁹ pb), which is realised at the top of the narrow, vertical strip found at a neutralino mass of ∼ 50 GeV. Heavier bino-like neutralinos can acquire some mixing with higgsinos, which further enhances the SI cross-section. The degree of higgsino mixing is limited by the XENON100 constraint. On the other hand, cancellations among the different contributions might occur [103], leading to values as low as ∼ 10⁻²⁵ pb within the 95% C.L. region. Such cancellations require unexpected relationships between the parameters in the Higgs and squark sectors, the parameters determining the neutralino composition and the nuclear matrix elements, something that in constrained SUSY models is quite unlikely to happen. We also observe a region favoured at the 95% C.L. that corresponds to ∼ 1 TeV higgsino-like neutralinos. This region is disfavoured relative to the low-mass regions because a heavy higgsino-like neutralino forces the EWKinos and sfermions to be heavy, which is in tension with the constraints on several observables, namely the muon g − 2 constraint, ∆_0− and A_FB(B → K*µ+µ−) (see the discussion in section 3.1).
Note that, for higgsino-like dark matter, the neutralino mass is strongly constrained to $m_{\chi^0_1} \sim 1$ TeV by the Planck constraint on the dark matter relic abundance. However, co-annihilations with the second-lightest neutralino and the lightest chargino can further reduce the dark matter relic abundance, so that higgsino-like dark matter with $m_{\chi^0_1}$ somewhat above 1 TeV can also match the Planck value.

Excluding the muon $g-2$ data leads to a sizeable difference in the neutralino mass favoured at the 95% C.L. First, in the higgsino-like neutralino region ($m_{\chi^0_1} \sim 1$ TeV), $\sigma^{\rm SI}_{\chi^0_1-p}$ extends to significantly lower values, both because the neutralino becomes an increasingly pure higgsino state, and due to cancellations between the Higgs sector contributions to $\sigma^{\rm SI}_{\chi^0_1-p}$. Secondly, a region at large neutralino masses $m_{\chi^0_1} \sim 2-3$ TeV is now favoured at the 95% level. In this region, which was previously disfavoured by the $g-2$ constraint, the neutralino is wino-like (see also figure 9 below).
The bottom-left panel depicts the SI cross-section for multi-component DM scenarios. Larger cross-sections are now not penalised, because the scaling factor $\xi$ reduces the predicted number of recoil events, thus weakening the impact of the XENON100 constraint. As mentioned above, the lowest neutralino masses correspond to bino-like neutralinos. Heavier neutralinos can be both well-tempered and almost pure wino-like or higgsino-like. Neutralinos with an admixture of wino and/or higgsino annihilate very efficiently via co-annihilations, giving a relic abundance well below the Planck upper limit. This effect is largest for light neutralinos. Compared to the analysis assuming that neutralinos make up all of the dark matter in the universe, the contours are shifted towards smaller neutralino masses. As explained above, this is mainly a consequence of the flavour observables $\Delta_{0-}$, $A_{FB}(B \to K^* \mu^+ \mu^-)$, and the $g-2$ constraint, which play a dominant role in driving the profile likelihood results when relaxing the Planck constraint on the neutralino relic density.$^7$

$^7$ In principle, the 95% contours might also include the regions at high neutralino masses favoured at the 95% level in the analysis requiring that $\Omega_\chi \sim \Omega_{\rm DM}$. However, these points correspond to strong fine-tuning of the parameters in order to obtain an acceptable fit to observables such as $\Delta_{0-}$, $A_{FB}(B \to K^* \mu^+ \mu^-)$, and $g-2$. In the absence of the relic density constraint, which drives the scan towards these regions, the scan spends less time tuning the observables in this region. A dedicated investigation of the profile likelihood coverage in this parameter space is the subject of future work.

Similar patterns can be observed for spin-dependent neutralino-proton and neutralino-neutron scattering (central and right-hand panels). The narrow region at low $m_{\chi^0_1}$ is clearly visible in the $m_{\chi^0_1}-\sigma^{\rm SD}_{\chi^0_1-p}$ plane for all three analyses, and also shows up in the $m_{\chi^0_1}-\sigma^{\rm SD}_{\chi^0_1-n}$
Figure 7. From top to bottom: including all data, excluding the $g-2$ constraint, and applying the $\Omega_\chi h^2$ measurement as an upper limit. The encircled cross gives the location of the best fit. Recall that these analyses do not include null SUSY searches at the LHC (see section 3.6).

JHEP09(2014)081
plane for the analysis excluding the $g-2$ constraint. Likewise, the extended region at dark matter masses $m_{\chi^0_1} \sim \mathcal{O}(100)$ GeV can easily be identified in both planes, and spans a large range in $\sigma^{\rm SD}_{\chi^0_1-p}$ and $\sigma^{\rm SD}_{\chi^0_1-n}$ both for the analysis including all data and for the analysis allowing for a relic density smaller than the value measured by Planck. Instead, for the analysis excluding the $g-2$ constraint, this region only inhabits an area at large spin-dependent cross-sections, and is disfavoured at 99% C.L. even at intermediate cross-section values $\sigma^{\rm SD}_{\chi^0_1-p} \sim 10^{-11}$ pb and $\sigma^{\rm SD}_{\chi^0_1-n} \sim 10^{-9}$ pb. The regions observed at large $m_{\chi^0_1} > 1$ TeV in the $m_{\chi^0_1}-\sigma^{\rm SI}_{\chi^0_1-p}$ plane for the analyses including all data and excluding $g-2$ also appear for spin-dependent scattering, although the two distinct high-mass regions observed in the spin-independent plane for the latter analysis are difficult to identify as separate regions.
In general, the dominant contribution to the spin-dependent (SD) cross-section is Z exchange. Since neither the bino nor the neutral wino couples to the Z boson, the SD cross-section is largely determined by the higgsino content of the neutralino. The Z exchange contribution (and hence the SD cross-section) is proportional to the higgsino asymmetry $(|N_{13}|^2 - |N_{14}|^2)^2$. The squark exchange contribution has a similar structure to the SI case.
For the spin-dependent neutralino-proton interaction ($(\sigma^{\rm SD}_{\chi^0_1-p}, m_{\chi^0_1})$ plane, top central panel), the shape of the PL contours is similar to the results for the SI cross-section. This can be understood from the fact that the squark-exchange contribution follows a similar pattern, and the Z exchange contribution is non-negligible as long as the neutralino is well-tempered. In the absence of degeneracies between the parameters in the neutralino mass matrix, and if $m_Z$ can be treated as a perturbation, the asymmetry behaves as $|N_{13}|^2 - |N_{14}|^2 \propto m_Z^2 \cos 2\beta/(\mu^2 - M_i^2)$ for $|M_i|, |\mu|, |\mu| - |M_i| \gg m_Z$, with $i = 1, 2$ and the other gaugino mass taken to infinity. From this one recovers the pure gaugino/higgsino limits, in which the higgsino asymmetry vanishes. The asymmetry is maximised when either the binos and higgsinos or the winos and higgsinos are close in mass, i.e. for well-tempered neutralinos. One expects a suppression when the Z and squark exchange contributions cancel against one another, which requires fine-tuned relationships between the model parameters and the nuclear matrix elements. This typically does not happen for scattering off protons and neutrons simultaneously, which explains the differences between the results for the proton and neutron SD scattering cross-sections.
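The scaling of the higgsino asymmetry can be checked numerically by diagonalising the tree-level neutralino mass matrix. The sketch below uses illustrative benchmark values (not points from our scans) and standard tree-level MSSM conventions; for a bino-like LSP the approximate asymmetry carries an extra $\sin^2\theta_W$ coefficient:

```python
import numpy as np

# Illustrative inputs (hypothetical benchmark, not from the paper's scans)
M1, M2, mu = 200.0, 800.0, 600.0   # gaugino/higgsino mass parameters [GeV]
tan_beta = 10.0
mZ, sw2 = 91.19, 0.231             # Z mass [GeV], sin^2(theta_W)

beta = np.arctan(tan_beta)
sb, cb = np.sin(beta), np.cos(beta)
sw, cw = np.sqrt(sw2), np.sqrt(1.0 - sw2)

# Tree-level neutralino mass matrix in the (bino, wino, H_d, H_u) basis
M = np.array([
    [M1,         0.0,        -cb*sw*mZ,  sb*sw*mZ],
    [0.0,        M2,          cb*cw*mZ, -sb*cw*mZ],
    [-cb*sw*mZ,  cb*cw*mZ,    0.0,      -mu      ],
    [ sb*sw*mZ, -sb*cw*mZ,   -mu,        0.0     ],
])

# Diagonalise (real symmetric) and pick the lightest state by |mass|
vals, vecs = np.linalg.eigh(M)
lsp = np.argmin(np.abs(vals))
N = vecs[:, lsp]                    # mixing coefficients N_{1j}

asym_exact = N[2]**2 - N[3]**2      # higgsino asymmetry |N13|^2 - |N14|^2
cos2b = (1.0 - tan_beta**2) / (1.0 + tan_beta**2)
# Leading-order expression for a bino-like LSP (i = 1)
asym_approx = mZ**2 * sw2 * cos2b / (M1**2 - mu**2)

print(f"bino fraction  : {N[0]**2:.3f}")
print(f"exact asymmetry: {asym_exact:.5f}, approximation: {asym_approx:.5f}")
```

For this benchmark the LSP is strongly bino-like and the exact and approximate asymmetries agree at the level expected from the $\mathcal{O}(m_Z^2/(\mu - M_1)^2)$ corrections.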
In the middle central and right-most panels we display the "w/o g−2" case. The most remarkable difference with respect to the "All data" case (upper panels) occurs for neutralino masses of $\mathcal{O}(100)$ GeV, where the SD cross-section is on average larger. The effect is more pronounced for masses $\sim 100$ GeV, where the neutralino, although bino-like, acquires a sizeable higgsino fraction, as required to fulfil the Planck measurement of the dark matter relic density. Recall that in the "All data" case, in addition to acquiring a sizeable higgsino content, bino-like neutralinos may annihilate both through the exchange of light sleptons and through co-annihilations, while keeping their pure bino character. This suppresses the higgsino asymmetry factor and thus the SD cross-section. Additionally, at neutralino masses of a few hundred GeV, one can find the bulk region with relatively light sbottoms/stops, as outlined in section 3.1. As a result, the SD cross-section covers a large range of values. For masses $\sim 1$ TeV, the neutralino is higgsino-dominated and can exhibit either an enhancement or a suppression of the SD cross-section, depending on its degree of purity.
Finally, the bottom central and right-most panels show the multi-component DM scenario. Here, the main difference with respect to the upper panels is that the neutralino is now mostly either wino-like or a mixed wino-higgsino state. A wino-like neutralino has an enhanced cross-section with respect to the bino-like state, due to the larger SU(2) gauge coupling relative to the U(1) one. In this case the PL contours for the proton and neutron SD cross-sections are almost identical.
In figure 8 we show the 2D profile likelihood in the plane of spin-independent neutralino-proton cross-section vs. the neutralino relic density. As mentioned above, we assume that the local neutralino density scales with the cosmological abundance. As a result, the XENON100 limit is shifted towards larger cross-section values for points in parameter space that lead to a relic density smaller than the value measured by Planck. This translates into the negative correlation visible in figure 8 at large values of the scattering cross-section. For small values of $\Omega_{\chi^0_1} h^2$ the most favoured region of parameter space is a narrow band stretching along the largest currently allowed cross-section values, within reach of future direct detection searches. In this region, the neutralino is a mixed wino-higgsino state (see section 3.5 below), so that co-annihilation effects are maximised, and very small neutralino relic densities can be achieved.
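The rescaling logic can be illustrated with a short sketch. We assume here the common scaling Ansatz $\xi = \min(1, \Omega_\chi h^2 / \Omega_{\rm DM} h^2)$ for eq. (2.5) (the exact form of that equation is not reproduced in this excerpt), under which a published direct detection limit is effectively weakened by $1/\xi$ for underabundant neutralinos:

```python
# Sketch of the local-density scaling Ansatz (assumed form of eq. (2.5)):
# an underabundant neutralino contributes only a fraction xi of the local
# dark matter density, so the predicted recoil rate scales as xi * sigma_SI.

OMEGA_DM_H2 = 0.1199  # Planck-era cold dark matter abundance (illustrative)

def scaling_factor(omega_chi_h2: float) -> float:
    """Fraction of the local DM density attributed to the neutralino."""
    return min(1.0, omega_chi_h2 / OMEGA_DM_H2)

def evades_limit(sigma_si: float, omega_chi_h2: float, sigma_limit: float) -> bool:
    """A point survives a direct detection limit if the *effective*
    cross-section xi * sigma_SI lies below the published limit
    (which assumes the full local density)."""
    return scaling_factor(omega_chi_h2) * sigma_si < sigma_limit

# Example: a cross-section nominally above the limit survives once the
# neutralino makes up only ~10% of the dark matter.
sigma, limit = 5e-9, 1e-9                 # pb, illustrative numbers
print(evades_limit(sigma, 0.12, limit))   # full abundance
print(evades_limit(sigma, 0.012, limit))  # ~10% of the DM
```

This is why, in figure 8, points with small $\Omega_{\chi^0_1}h^2$ can sit at cross-sections above the nominal XENON100 curve.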
For $10^{-5} \lesssim \Omega_{\chi^0_1} h^2 \lesssim 10^{-4}$ a second region, favoured at 95% C.L., shows up at slightly lower cross-section values, $\sigma^{\rm SI}_{\chi^0_1-p} \sim 10^{-10}$ pb. This is a consequence of light pure-gaugino neutralinos with masses of $\mathcal{O}(100)$ GeV still annihilating efficiently, but leading to a suppressed SI cross-section with respect to the wino-higgsino case. For higgsino-(wino-)like neutralinos, annihilation remains efficient up to $m_{\chi^0_1} \lesssim 1\,(2)$ TeV. This leads to a low relic abundance, a scenario that corresponds to the region $10^{-4} \lesssim \Omega_{\chi^0_1} h^2 \lesssim 10^{-1}$. Finally, when the neutralinos are either bino-like (with masses from $\sim 50$ GeV to a few hundred GeV), higgsino-like (with $m_{\chi^0_1} \sim 1$ TeV), or wino-like (with $m_{\chi^0_1} \sim 2$ TeV), the relic density matches the Planck constraint. In these cases the SI cross-section reaches lower values because of the high purity of the neutralino. In figure 8 the largest values of $\Omega_{\chi^0_1} h^2$ correspond to bino-like neutralinos, due to the preference for relatively small $m_{\chi^0_1}$ in the analysis allowing for multi-component dark matter (cf. figure 7). The cutoff at large $\Omega_{\chi^0_1} h^2$ is due to the Planck upper limit on the relic density.

Dark matter composition
We now discuss the neutralino compositions favoured in different regions of the MSSM-15 parameter space. The neutralino composition in the plane of neutralino mass vs. spin-independent cross-section is shown in figure 9, for the analysis including all data (left panel) and when using the Planck relic density constraint as an upper limit (right panel). The neutralino composition for the analysis excluding the $g-2$ constraint is not shown, as it is qualitatively very similar to the "All data" case.
We define the neutralino to be bino-like if it has a bino fraction $b_f > 80\%$, wino-like for a wino fraction $w_f > 80\%$, and higgsino-like for a higgsino fraction $h_f > 80\%$. A mixed (B,W) neutralino has both a sizeable bino and wino fraction ($b_f, w_f > 20\%$), and similarly for mixed (B,H) and mixed (W,H) neutralinos. Neutralinos that do not fit into any of the above categories are considered mixed (B,W,H) states. For reference, we also show the 68%, 95% and 99% 2D PL contours in this plane (black, empty), as well as the best-fit points (circled black crosses).
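These definitions translate directly into a small classifier. The sketch below is our own illustration of the categories as stated; the treatment of points where more than one mixed-pair condition could hold is an assumption (we require the third fraction to be small before assigning a pair label):

```python
def classify_neutralino(b_f: float, w_f: float, h_f: float) -> str:
    """Classify an LSP by its bino/wino/higgsino fractions
    (b_f = |N11|^2, w_f = |N12|^2, h_f = |N13|^2 + |N14|^2)."""
    assert abs(b_f + w_f + h_f - 1.0) < 1e-6, "fractions must sum to 1"
    if b_f > 0.8:
        return "bino-like"
    if w_f > 0.8:
        return "wino-like"
    if h_f > 0.8:
        return "higgsino-like"
    # Mixed pairs: both fractions sizeable and (our assumption) the
    # remaining fraction small, so the pair label is unambiguous.
    if b_f > 0.2 and w_f > 0.2 and h_f <= 0.2:
        return "mixed (B,W)"
    if b_f > 0.2 and h_f > 0.2 and w_f <= 0.2:
        return "mixed (B,H)"
    if w_f > 0.2 and h_f > 0.2 and b_f <= 0.2:
        return "mixed (W,H)"
    return "mixed (B,W,H)"

print(classify_neutralino(0.90, 0.05, 0.05))  # bino-like
print(classify_neutralino(0.50, 0.05, 0.45))  # mixed (B,H)
print(classify_neutralino(0.34, 0.33, 0.33))  # mixed (B,W,H)
```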
For the analysis including all data (left-hand panel) three dominant dark matter compositions can be identified. At low masses, $m_{\chi^0_1} \lesssim 600$ GeV, the neutralino is bino-like. Pure bino dark matter tends to lead to relic densities that overclose the universe. However, for low and intermediate neutralino masses, pole resonances with $Z/h$ or co-annihilation effects with light sleptons reduce the relic density sufficiently to achieve $\Omega_{\chi^0_1} h^2 \sim \mathcal{O}(0.1)$. Additionally, in this mass range the neutralino can acquire a non-negligible higgsino fraction, leading to a relic density in agreement with the values measured by Planck. In the mass range $0.8~{\rm TeV} \lesssim m_{\chi^0_1} \lesssim 1.6~{\rm TeV}$ the neutralino LSP is higgsino-like. For pure higgsino dark matter, annihilation in the early universe is very efficient, so that small values of $\mu \sim \mathcal{O}(100)$ GeV lead to relic densities much smaller than the Planck constraint. However, for large values $\mu \sim 1$ TeV (and thus $m_{\chi^0_1} \sim 1$ TeV) the correct dark matter density can be achieved, so that higgsino-like dark matter is favoured at neutralino masses $\sim 1$ TeV. At very large masses $m_{\chi^0_1} \gtrsim 1.6$ TeV the neutralino becomes predominantly wino-like. Wino-like dark matter annihilates even more efficiently than higgsino-like states, so that very large wino masses $M_2 \gtrsim 2$ TeV are required to reproduce the Planck measurement of the dark matter density. As a result, wino-like dark matter is favoured at $m_{\chi^0_1} \gtrsim 2$ TeV. Finally, we observe a small region of bino-like neutralinos at $m_{\chi^0_1} \sim 3$ TeV. In this region, the correct relic density is achieved via gluino co-annihilations, a phenomenological feature appearing in models without gaugino mass unification, as pointed out in [104]. Small islands of mixed (B,H) dark matter show up in the transition region from bino-like to higgsino-like neutralinos. Additional small islands of mixed (W,H) neutralinos can be found at large masses and large spin-independent cross-sections. Mixed (B,W) and (B,W,H) states are rare.
The neutralino composition when the Planck relic density constraint is applied as an upper limit (right-hand panel) is largely driven by the SM precision observables, namely $\Delta_{0-}$, $A_{FB}(B \to K^* \mu^+ \mu^-)$, and the $g-2$. The bulk of the parameter space corresponds to wino-like neutralinos, with the exception of a narrow area at very low masses $m_{\chi^0_1} \sim 100$ GeV and a narrow diagonal area at the lowest allowed cross-sections (as a function of $m_{\chi^0_1}$) that correspond to bino-like dark matter. Higgsino-like neutralinos are now disfavoured, and only show up as isolated islands in different regions of parameter space. A second interesting feature is a pronounced region of mixed (W,H) neutralinos that is found at large spin-independent cross-sections and stretches along almost the entire allowed neutralino mass range. Other mixed states ((B,H), (B,W), (B,W,H)) are rare. Note that, for low-mass neutralinos, a large range of different neutralino compositions is possible.

Impact of LHC Higgs properties and ATLAS SUSY searches
We now turn to the discussion of the impact of ATLAS null searches for SUSY and CMS measurements of the Higgs properties on the favoured regions of the MSSM-15 parameter space.
The evaluation of the full LHC likelihood described in the appendix is numerically very demanding. We estimate that post-processing all the samples gathered for the above analysis would require approximately 400 CPU-years. This considerable task is the subject of a dedicated work [91]. For the more limited purpose of this paper, we adopt an intermediate approach, which gives an indication of the extra constraining power from LHC SUSY searches and Higgs signal strength measurements. In what we call the "mini-chains" approach, we first produce profile likelihood maps from our full chains for several 2D planes of interest. Given typical binning sizes, this leads to approximately $10^4$

profile likelihood values for each 2D plane. For each of those values, we then compute the combined $\chi^2$ contribution from the LHC constraints on the Higgs production cross-sections and the LHC SUSY searches (0-lepton and 3-lepton), according to the procedure described and validated in the appendix. We add this extra $\chi^2$ value to the pre-LHC $\chi^2$ obtained from all other experimental data sets.
We stress that this is not a fully consistent statistical approach, and that the ensuing maps cannot be interpreted probabilistically as PL maps (as the full likelihood has not been maximised out in the dimensions not shown). However, it does allow us to draw some useful conclusions regarding the impact of LHC SUSY searches and measurements of the Higgs properties: mini-chain points that remain viable after inclusion of the LHC constraints would not be ruled out even under a full PL approach. In this sense, our approach gives an indication of the maximal possible constraints (in the plane under consideration) resulting from the included LHC data sets. Furthermore, this procedure allows us to investigate whether the best-fit points found in the above global-fits analysis remain viable in the light of the LHC constraints.
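Schematically, the mini-chains post-processing amounts to adding a per-bin LHC $\chi^2$ on top of the pre-LHC $\chi^2$ and grading the shift. A minimal sketch, assuming the impact significance is quantified as $\sqrt{\Delta\chi^2}$ (the $1\sigma$/$4\sigma$ thresholds follow the colour coding used later in figures 10-11; the function names are our own):

```python
import math

def lhc_impact_category(delta_chi2: float) -> str:
    """Grade the chi^2 shift a 2D bin picks up from the LHC constraints,
    assuming significance ~ sqrt(delta chi^2) (one effective dof)."""
    sigma = math.sqrt(max(0.0, delta_chi2))
    if sigma < 1.0:
        return "cyan"   # almost unaffected (impact < 1 sigma)
    if sigma <= 4.0:
        return "pink"   # disfavoured (1-4 sigma)
    return "grey"       # ruled out (impact > 4 sigma)

def post_lhc_chi2(chi2_pre_lhc: float, chi2_lhc: float) -> float:
    """Mini-chains post-processing: add the combined LHC chi^2
    (Higgs signal strengths + 0-lepton + 3-lepton searches) to the
    pre-LHC chi^2 of the bin."""
    return chi2_pre_lhc + chi2_lhc

# Example: the delta chi^2 = 0.81 quoted later for the "All data"
# best-fit point leaves the bin essentially unaffected.
print(lhc_impact_category(0.81))   # cyan
print(lhc_impact_category(20.0))   # grey
```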
In figures 10 and 11 we show the impact of the ATLAS null searches for SUSY in the 0-lepton and 3-lepton channels, and of the CMS measurements of the Higgs boson properties. Bins that are almost unaffected by the LHC constraints (impact $< 1\sigma$) are shown in cyan, bins that are disfavoured at a significance between $1\sigma$ and $4\sigma$ are shown in pink, and bins that are ruled out by the LHC (impact $> 4\sigma$) are displayed in grey. Note that we only show bins that were included in the 99% C.L. region before post-processing the mini-chains with the LHC constraints. Figure 10 shows the LHC impact for the analysis including all data (top row) and the analysis excluding the $g-2$ constraint (bottom row). Results for the "Planck upper limit" analysis (not shown) are qualitatively very similar to the "All data" case. From left to right, the plots show the LHC impact in the planes of gluino mass vs. average squark mass, lightest chargino mass vs. lightest neutralino mass, and neutralino mass vs. SI cross-section. As can be seen in the left-hand panels, the LHC 0-lepton search has a strong impact on the favoured regions of the MSSM-15, both for the analysis including and for that excluding the $g-2$ constraint, ruling out gluino and squark masses $m_{\rm gluino}, m_{\rm squark} \lesssim 1$ TeV. In addition, the measurements of the Higgs production cross-sections have a strong effect. In particular, in the regions most strongly affected by the Higgs signal strength data we observe a suppression of the $b\bar b$ signal strength (and, to a lesser extent, the $\tau^+\tau^-$ signal strength). As a consequence of this suppression, the other signal strengths are enhanced, in conflict with the experimental measurements. In particular, the constraint on $\mu_{W^+W^-}$ leads to a significant contribution to the total $\chi^2$, as the measured central value is below the SM prediction at the $\sim 1\sigma$ level, and the experimental error on this quantity is relatively small.
At tree level, one would expect the Higgs couplings to be approximately SM-like, as $m_A \gtrsim 1$ TeV for all points considered. However, this argument breaks down when considering higher-order corrections. In fact, ref. [105] shows how SUSY QCD (SQCD) corrections to the $hb\bar b$ coupling can still be large in this limit, provided one or both of the sbottoms lie below the TeV scale (we have verified that this is the case for the regions most strongly affected by the Higgs couplings data). In this case, sizeable deviations from the SM prediction may arise.
The effect of full decoupling can be seen in the narrow vertical band of cyan bins with $m_{\rm gluino} \sim 5$ TeV, where all SUSY masses are large. For the analysis including all data, on the right-hand side of this band there is a narrow strip in which full decoupling is not attained. This particular region corresponds to very large values $\tan\beta \sim 50$, for which the onset of decoupling is delayed [105]. As a result, even though the gluino is heavy, the approach to decoupling is significantly slower. In general, the pink bins correspond to relatively larger values of $\tan\beta$ than the cyan bins, for which full decoupling is not achieved.
Note that, for the analysis including all experimental constraints, there is a region at relatively low values of $m_{\rm squark}$ and $m_{\rm gluino}$ (but above the ATLAS 0-lepton limits) that is significantly disfavoured by the LHC constraints. In this region, $\tan\beta \sim 10$, so the Higgs couplings data have a smaller impact; instead, the ATLAS 3-lepton search (see below) impacts quite strongly on this region. The above discussion applies broadly also to the central and right-hand panels (for both the analysis including all data and that excluding the $g-2$ constraint), in which the impact of the Higgs production cross-section data follows a similar pattern.
In general, the impact of the 3-lepton channel search, which imposes constraints in the lightest chargino mass vs. lightest neutralino mass plane (central panels), is relatively weak compared to the 0-lepton channel. This is true in particular for the analysis excluding the muon $g-2$ constraint, for which larger neutralino masses are favoured (cf. figure 4 above). The impact of the constraint on the Higgs production cross-sections is again clearly visible, significantly disfavouring points that lead to strong deviations from the SM prediction over a large range of values of $m_{\chi^0_1}$ and $m_{\chi^\pm_1}$.

The impact of the LHC SUSY and Higgs searches in the plane of neutralino mass vs. spin-independent scattering cross-section is shown in the right-hand panels of figure 10. The main impact of the LHC in this plane is to rule out points at low/intermediate neutralino masses that were previously strongly favoured, mainly as a consequence of the 0-lepton channel search. Therefore, for small $m_{\chi^0_1} \lesssim 300$ GeV, the LHC is extremely powerful, ruling out cross-sections orders of magnitude below the reach of present and future direct detection experiments (and indeed below the "ultimate" limit represented by the solar neutrino background). For the analysis excluding the $g-2$ constraint, a much smaller fraction of points is affected by the LHC, and several points at small $m_{\chi^0_1}$ are still allowed. This is largely a result of the 0-lepton search having less of an impact on the analysis excluding $g-2$ (as very small squark masses are disfavoured for this analysis). Note that for $m_{\chi^0_1} \gtrsim 500$ GeV the MSSM-15 parameter space is largely unaffected by constraints from LHC SUSY searches, but can be constrained by precise measurements of the Higgs production cross-sections.
In figure 11 we show the impact of the LHC in the plane of spin-independent cross-section vs. neutralino relic density for the case in which the Planck relic density measurement is taken as an upper limit (i.e., multi-component dark matter scenarios are allowed; the relic density is connected to the local density via the scaling Ansatz in eq. (2.5)). The LHC has a strong impact in this plane, ruling out a large range of relic densities and spin-independent cross-sections. Most of these points correspond to squark masses of $\mathcal{O}(100)$ GeV, and are thus ruled out at high significance by the ATLAS 0-lepton search. The pre-LHC best-fit point from the "w/o g−2" analysis remains viable in light of the LHC data, while the best-fit points for the "All data" and "Planck upper limit" analyses are strongly disfavoured (see section 3.3). The best-fit points in the mini-chains after inclusion of the LHC constraints for the "All data" and "Planck upper limit" cases are given in table 3. Pre-LHC, those points have a $\chi^2$ value within $1\sigma$ of the overall best fit, and thus are perfectly viable. After adding the contributions from the LHC SUSY null searches and the constraints on the Higgs properties, their $\chi^2$ increases by 0.81 ("All data") and by 0.76 ("Planck upper limit"), respectively. This indicates that they remain in good agreement with all experimental data sets considered in this analysis. Compared to the pre-LHC best-fit points, we observe a shift of the squark mass to the multi-TeV region (2.3 TeV and 5.9 TeV, respectively), a slight increase in the neutralino mass (134 GeV and 128 GeV, respectively) and a gluino mass in the 1-2 TeV region. A squark mass of 2.3 TeV with gluinos in the 1-2 TeV range will be accessible to LHC searches in the upcoming high-energy runs [106]. For the "All data" case, the best-fit SI cross-section shifts to $2.3 \times 10^{-10}$ pb, which is within the reach of the next generation of multi-ton scale direct detection experiments.
Notice that in some panels of figures 10-11, the post-LHC best-fit points appear to be located in bins that are excluded according to the colour coding (grey). This is a consequence of the limitations of the mini-chains approach adopted here: as explained above, the mini-chain results have been obtained by post-processing the 2D PL values in each of the 2D planes separately. The post-LHC best-fit points, on the other hand, have been selected from the joint mini-chains encompassing PL values from all four 2D planes shown in figures 10-11 (in particular, in both cases the post-LHC best-fit point was taken from the $m_{\rm squark}-m_{\rm gluino}$ mini-chain). Therefore, the post-LHC best-fit points may appear to be excluded in some 2D projections, since the latter have not been maximised over the entire parameter space with the LHC data.

The above results are suggestive of the constraints that a full analysis of our entire MSSM-15 sample with our LHC likelihood would provide. As mentioned above, results from the mini-chains approach are indicative, but should not be interpreted probabilistically as PL maps. A thorough profile likelihood analysis of the LHC impact on the MSSM-15 will be presented in a future work [91].

Conclusions
We have presented global fits of a phenomenological Minimal Supersymmetric Standard Model with 15 free parameters, including all available accelerator constraints, as well as constraints from cosmology and direct detection experiments. We have obtained high-resolution profile likelihood maps of the model parameter space, and discussed the implications for collider phenomenology and for detection prospects in astro-particle physics experiments. We have discussed and compared the results both for the case in which the neutralino LSP is the only component of the dark matter in the universe, and for the case in which it may be a subdominant dark matter component. We have also provided a detailed assessment of the impact of the $g-2$ constraint on the MSSM-15 profile likelihood maps. We summarise here the main results of our work:

• Constraints on input parameters. Most of the input parameters remain almost unconstrained by current experimental results. However, relatively stringent constraints are placed on the parameters related to the dark matter phenomenology, $M_1$, $M_2$ and $\mu$, which are significantly affected by the relic density constraint, direct detection data, and several of the flavour observables, leading to a preference for small values of these quantities.
• Constraints on the SUSY mass spectrum. In all considered cases, the profile likelihood function for the mass of the neutralino LSP peaks at very small values $m_{\chi^0_1} \lesssim 100$ GeV. For single-component dark matter scenarios, a bino-like neutralino LSP with a mass of $\sim 60$ GeV is strongly favoured, although higgsino-like dark matter with $m_{\chi^0_1} \sim 1$ TeV is allowed at lower confidence. For the case excluding the $g-2$ constraint, the profile likelihood for the neutralino mass extends to significantly larger values, pushing the maximum value from 1.5 TeV to about 3 TeV. In this case, wino-like dark matter with $m_{\chi^0_1} \sim 2$ TeV is favoured at the 95% level. The profile likelihood functions for the squark and gluino masses are almost flat within the investigated parameter ranges.
• Direct dark matter searches. Direct detection constraints are found to be complementary to accelerator searches. Whereas upcoming experiments will allow us to probe high neutralino scattering cross-sections, the very long tails in the parameter space extending to extraordinarily small cross-section values further strengthen the case for a combined analysis of astro-particle and accelerator data. Our current best-fit point, however, is within reach of the next generation of multi-ton scale direct detection experiments, exhibiting a spin-independent cross-section of $2.3 \times 10^{-10}$ pb.

• Neutralino composition. The rich phenomenology of the MSSM-15 manifests itself in a broad range of neutralino compositions. We have provided a detailed discussion of the phenomenological consequences of the different compositions, and noticed in particular that in the case where the relic density constraint is applied as an upper limit, the favoured neutralino compositions are substantially different from the other cases, with the bulk of the favoured parameter space corresponding to wino-like (instead of bino-like) states.
• Impact of LHC searches. We have demonstrated the strong impact of LHC SUSY searches, which provide stringent constraints in regions of parameter space corresponding to very low values of $\sigma^{\rm SI}_{\chi^0_1-p}$ that are not accessible to astro-particle physics experiments in the foreseeable future. Furthermore, we have highlighted the significant impact of the constraints on the Higgs signal strengths on the MSSM-15.
The full implementation of the LHC likelihood described in the appendix is numerically very demanding: post-processing of all samples gathered for the above analysis would require approximately 400 CPU-years, even with our approximate likelihood based on fast simulations. We have adopted here an intermediate approach, which gives an indication of the extra constraining power from LHC searches and Higgs properties on the 2D profile likelihood maps. We will provide profile likelihood maps including the full LHC constraints in an upcoming dedicated work [91].

A ATLAS likelihood
In this appendix we describe the construction of the ATLAS likelihood. We adopt an approximate construction that exploits the 0-lepton and 3-lepton inclusive searches from ATLAS data, as explained below.

A.1 ATLAS 0-lepton and 3-lepton signal regions
The ATLAS 0-lepton analysis [79] has 6 channels, which are used to construct between one and three signal regions with "tight", "medium" and/or "loose" $m_{\rm eff}$(incl.) selections, giving in total 11 signal regions. The different channels have been constructed for different SUSY particle production mechanisms. Signal region A is designed for squark-squark production, and signal region A' especially for models with low mass splittings. Signal region B is designed for squark-gluino production, whereas signal regions C-E are constructed for gluino-gluino production with high jet multiplicities. The selection criteria for each signal region are shown in table 4. As the name implies, there is a general veto on events containing leptons. The selection variables used are: the minimum required number of jets and their respective transverse momenta; the missing transverse energy $E_T^{\rm miss}$; the effective mass $m_{\rm eff}$, calculated as the scalar sum of all transverse jet momenta larger than 40 GeV plus the missing transverse energy; the ratio of $E_T^{\rm miss}$ to $m_{\rm eff}$ (where $m_{\rm eff}$ only includes the required number of jets); and the minimum angle between the required jets and the missing energy vector, $\Delta\phi({\rm jet}_i, E_T^{\rm miss})_{\rm min}$. For signal regions C-E an additional criterion is applied: a cut on $\Delta\phi({\rm jet}_i, E_T^{\rm miss})_{\rm min}$ for all jets with transverse momenta larger than 40 GeV.

Table 5. Requirements for the signal regions 1a, 1b and 2 for the 3-lepton ATLAS analysis with an integrated luminosity of 4.7 fb$^{-1}$. In addition, the number of reconstructed leptons has to be three (from [80]).
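The effective-mass variable described above can be sketched in a few lines. This is an illustrative implementation of the definitions as stated (jets reduced to their transverse momenta; the function name is our own):

```python
def m_eff(jet_pts, met, n_jets=None, pt_min=40.0):
    """Effective mass: scalar sum of jet pT above pt_min plus missing ET.
    If n_jets is given, only the leading n_jets jets enter (the variant
    used in the E_T^miss / m_eff ratio cut); otherwise m_eff(incl.)."""
    jets = sorted((pt for pt in jet_pts if pt > pt_min), reverse=True)
    if n_jets is not None:
        jets = jets[:n_jets]
    return sum(jets) + met

# Example event: four jets, one below the 40 GeV threshold
jets, met = [150.0, 90.0, 60.0, 30.0], 200.0
print(m_eff(jets, met))                  # inclusive: 150 + 90 + 60 + 200 = 500.0
print(met / m_eff(jets, met, n_jets=2))  # ratio with the 2-jet m_eff
```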
The ATLAS 3-lepton analysis [80] consists of 3 signal regions. Signal regions 1a and 1b include a Z-veto, while signal region 2 is designed for an on-shell Z boson. All signal regions require exactly three leptons, two of which form a same-flavour opposite-sign (SFOS) lepton pair. The selection criteria are shown in table 5. The transverse mass $m_T$ is calculated using the missing transverse energy and the third lepton.
Altogether, we have a total of 14 signal regions (11 from the 0-lepton analysis and 3 from the 3-lepton analysis).

A.2 The likelihood function
The likelihood for each bin in a signal region $i$ ($i = 1, \ldots, 14$) is given by
$$\mathcal{L}_i(n \,|\, s, b, \theta) = P(n \,|\, \lambda_s(\theta))\, \mathcal{L}_C(\theta)\,,$$
where the first factor reflects the Poisson probability of observing a number of events $n$ in the signal region given the signal (background) expected value $s$ ($b$). The Poisson expectation value $\lambda_s$ also depends on the nuisance parameters $\theta$ that parameterize systematic uncertainties, such as the luminosity or the jet energy scale. Those uncertainties are constrained via the likelihood term $\mathcal{L}_C(\theta)$, which is taken to be a multivariate Gaussian distribution around the nominal value $\theta = 0$, with diagonal covariance matrix entries given by the quoted nominal uncertainties on each of the systematic factors. We then write the Poisson expectation value as
$$\lambda_s(\theta) = s(1 + \Delta_s \theta_s) + b(1 + \Delta_b \theta_b)\,,$$
where $s$ and $b$ are the nominal values of the signal and background, $\Delta_s$ and $\Delta_b$ are their relative uncertainties, and $\theta_s$ and $\theta_b$ are nuisance parameters, so that $\theta = \{\theta_s, \theta_b\}$. Experimental analyses provide the overall uncertainty on the background expectation in the signal region, $\Delta_b$. For the systematic uncertainty on $s$ we can use the fact that
$$s = L\, \sigma\, \epsilon\,,$$

JHEP09(2014)081
$$s = L \, \sigma \, \epsilon \,, \qquad {\rm (A.3)}$$
where $L$ is the integrated luminosity, $\sigma$ is the SUSY cross-section and $\epsilon$ is the acceptance times the detector efficiency. The errors on each of the terms above can then be propagated linearly to obtain
$$\Delta_s = \sqrt{ \left(\frac{\Delta L}{L}\right)^2 + \left(\frac{\Delta \sigma}{\sigma}\right)^2 + \left(\frac{\Delta \epsilon}{\epsilon}\right)^2 } \,. \qquad {\rm (A.4)}$$
The theoretical uncertainties involved in the determination of the SUSY cross-sections, $\Delta\sigma$, are computed at each point of the parameter space of the model under consideration for the 0-lepton analysis via the NLL-fast 1.2 package [84-87], while they are neglected for the 3-lepton analysis. The relative error in the efficiency can be determined by comparing the official efficiency maps from the ATLAS collaboration with ours (see below). The value of $\Delta L$ is subdominant compared with the other uncertainties, and hence can be neglected. We further neglect uncertainties that are subdominant compared to the ones affecting the efficiencies, such as the jet energy scale.
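For relative errors on a product, the linear propagation above reduces to a sum in quadrature; a one-line sketch (the numerical values below are illustrative, not taken from the analyses):

```python
import math

def delta_s(rel_lumi, rel_xsec, rel_eff):
    """Relative signal uncertainty from the luminosity, cross-section
    and efficiency uncertainties, combined in quadrature (linear error
    propagation on s = L * sigma * epsilon)."""
    return math.sqrt(rel_lumi**2 + rel_xsec**2 + rel_eff**2)

# With Delta L / L neglected, as in the text:
ds = delta_s(0.0, 0.15, 0.10)
```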
We then obtain an effective likelihood, $\mathcal{L}_{{\rm eff},i}$, by eliminating the above nuisance parameters $\theta$ via marginalisation, as follows:
$$\mathcal{L}_{{\rm eff},i}(n_i \,|\, s_i, b_i) = \int {\rm d}\theta \, p(\theta) \, \mathcal{L}_i(n_i \,|\, s_i, b_i, \theta) \,, \qquad {\rm (A.5)}$$
where the prior $p(\theta)$ over $\theta$ is uniform around $\theta = 0$ and of length 6 standard deviations on either side.
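In code, the per-region likelihood and its marginalised effective version described above amount to a Poisson term with a rescaled expectation, a Gaussian constraint, and an average over the nuisance parameters. The following is a minimal Python sketch under those definitions (a simple grid average; the paper's actual numerical scheme is not shown and may differ):

```python
import math

import numpy as np

def log_poisson(n, lam):
    """Log of the Poisson probability P(n | lam)."""
    return n * math.log(lam) - lam - math.lgamma(n + 1)

def log_likelihood(n, s, b, theta_s, theta_b, delta_s, delta_b):
    """Per-signal-region log-likelihood: Poisson term with the
    rescaled expectation lambda_s, plus the Gaussian constraint on the
    nuisance parameters (nominal value 0, unit variance after
    rescaling; constant normalisations are dropped)."""
    lam = s * (1.0 + delta_s * theta_s) + b * (1.0 + delta_b * theta_b)
    constraint = -0.5 * (theta_s**2 + theta_b**2)
    return log_poisson(n, lam) + constraint

def effective_log_likelihood(n, s, b, delta_s, delta_b, n_grid=121):
    """Effective log-likelihood: marginalise the nuisance parameters
    over a uniform prior spanning 6 standard deviations on either side
    of 0, here by a plain grid average."""
    theta = np.linspace(-6.0, 6.0, n_grid)
    ts, tb = np.meshgrid(theta, theta, indexing="ij")
    lam = np.clip(s * (1 + delta_s * ts) + b * (1 + delta_b * tb), 1e-12, None)
    logL = n * np.log(lam) - lam - math.lgamma(n + 1) - 0.5 * (ts**2 + tb**2)
    # Uniform prior: the marginal is just the average over the grid
    return math.log(np.mean(np.exp(logL)))
```

With the nuisance parameters at their nominal values the likelihood reduces to the plain Poisson term, as expected.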

A.3 Approximate joint likelihood for inclusive searches
The method for combining different SUSY analyses depends on whether the analyses are exclusive (i.e., without overlapping data samples), or inclusive (i.e., with overlapping data samples). For exclusive analyses the corresponding data samples are statistically independent, whether they are signal regions or control samples to constrain the background prediction. However, the combined likelihood of two exclusive analyses cannot be constructed as the simple product of the two individual likelihoods, as the systematics term L C is in general correlated between the two searches. Every (fully) correlated systematic uncertainty must use the same nuisance parameter in both analyses, and only one constraint on this single parameter should be used in L C .
For analyses with statistically overlapping data samples or signal regions that are not exclusive (i.e., "inclusive" analyses), the likelihoods for different signal regions are not statistically independent, hence a joint likelihood is difficult to construct. In this case, for each value of $s$ we want to test, we select the best signal region based on the median (expected) value of the likelihood $P(q_s | s + b)$, where $q_s$ denotes the test statistic (as appropriate for setting upper limits)
$$q_s = \begin{cases} -2 \ln \lambda(s) & \hat{s} \leq s \\ 0 & \hat{s} > s \end{cases} \,. \qquad {\rm (A.6)}$$
In the above test statistic, we have defined the profile likelihood ratio
$$\lambda(s) = \frac{\mathcal{L}(s, \hat{\hat{\theta}})}{\mathcal{L}(\hat{s}, \hat{\theta})} \,, \qquad {\rm (A.7)}$$
where $\hat{\hat{\theta}}$ denotes the conditional ML estimator for the nuisance parameters $\theta$ at the given value of $s$, while $\hat{s}$ is the unconditional MLE for $s$ and $\hat{\theta}$ the unconditional MLE for $\theta$. The distribution of the likelihood is obtained from Monte Carlo simulations (assuming the alternative hypothesis $s + b$). The best signal region is the one leading to a median likelihood with the smallest $p$-value. This procedure thus selects the signal region that is expected to give the strongest upper limits for each value of $s$. We then evaluate the likelihood using the observed number of events in that optimal signal region, $P(n_{\rm obs} | s + b_{\rm fit})$, where $n_{\rm obs}$ is the observed number of events and 'fit' refers to the data-constrained background value.
This approach, however, would in general lead to a discontinuity in the value of the likelihood whenever one crosses regions in parameter space where the best signal region changes. This is because there is no reason why the absolute values of the likelihood functions of different signal regions should match across the boundaries between optimal signal regions. We solve this problem by defining the full likelihood as
$$\mathcal{L} = \mathcal{L}^{\rm obs}_i \prod_{j \neq i} E[\mathcal{L}_j] \,, \qquad {\rm (A.8)}$$
where $\mathcal{L}^{\rm obs}_i$ is the observed likelihood for the signal region selected by the above procedure, while $E[\mathcal{L}_j]$ is the expected likelihood in signal region $j \neq i$ (i.e., in all the other signal regions that are less optimal for the given $s$ being tested).
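The combination just described can be sketched as follows. Two simplifications relative to the text are flagged here as assumptions: the best region is chosen with the crude expected-sensitivity proxy $s/\sqrt{b}$ rather than the full $p$-value ranking of the median likelihood, and $E[\mathcal{L}_j]$ is estimated from toy experiments under $s + b$:

```python
import math

import numpy as np

def log_poisson(n, lam):
    """Log of the Poisson probability P(n | lam)."""
    return n * math.log(lam) - lam - math.lgamma(n + 1)

def combined_log_likelihood(s_list, b_list, n_obs, n_toys=5000, seed=1):
    """Sketch of the combination for inclusive (overlapping) signal
    regions: use the observed likelihood in the 'best' region and the
    expected likelihoods E[L_j] (toy average under s+b) everywhere
    else. Nuisance parameters are omitted for brevity."""
    rng = np.random.default_rng(seed)
    # Assumption: expected-sensitivity proxy instead of the p-value ranking
    best = max(range(len(s_list)), key=lambda i: s_list[i] / math.sqrt(b_list[i]))
    total = log_poisson(n_obs[best], s_list[best] + b_list[best])
    for j, (s, b) in enumerate(zip(s_list, b_list)):
        if j == best:
            continue
        logLs = np.array([log_poisson(int(n), s + b)
                          for n in rng.poisson(s + b, n_toys)])
        # log of E[L_j], via a numerically stable log-sum-exp
        m = logLs.max()
        total += m + math.log(np.mean(np.exp(logLs - m)))
    return best, total
```

Because the non-optimal regions always contribute their expected (rather than observed) likelihoods, the total varies smoothly when the identity of the best region changes between neighbouring parameter-space points.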

B.1 Likelihood validation
We validate the likelihood (A.1) and the approach of eq. (A.8) in the case of non-overlapping signal regions as follows. For every signal region $i$ ($i = 1, \dots, 14$) in the analyses used, the number of expected events in the signal region under the null hypothesis ($s = 0$) is given by the number of expected background events, $b_i \pm \sigma_{b_i}$. In order not to bias ourselves towards any particular SUSY model or any particular number of expected signal events $s_i$, we generate 10,000 toy experiments around the background expectation only.
To take into account systematic and statistical fluctuations, the number of toy observed events is generated according to
$$n^{\rm toy}_i \sim {\rm Poisson}(\tilde{b}_i) \,, \qquad \tilde{b}_i \sim \mathcal{N}(b_i, \sigma^2_{b_i}) \,,$$
where the extra Gaussian smearing approximately accounts for systematic effects. For $\sigma_{b_i}$ we adopt the uncertainty on the background prediction as given by ATLAS. We then compute the joint likelihood of eq. (A.8) for the simulated events for each non-exclusive signal region $i$.

Figure 12. Log-likelihood results for toy data in each signal region in the ATLAS 0-lepton inclusive analysis. All curves can be seen to follow the same distribution, which validates our approach.
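The toy generation just described (Gaussian smearing of the background expectation, followed by a Poisson draw) can be sketched as follows; the values of $b$ and $\sigma_b$ below are illustrative, not taken from the analyses:

```python
import numpy as np

def generate_toys(b, sigma_b, n_toys=10_000, seed=42):
    """Toy observed event counts under the background-only hypothesis:
    smear the background expectation with a Gaussian of width sigma_b
    (systematics), clip at zero, then draw a Poisson count
    (statistics)."""
    rng = np.random.default_rng(seed)
    b_smeared = np.clip(rng.normal(b, sigma_b, n_toys), 0.0, None)
    return rng.poisson(b_smeared)

toys = generate_toys(b=30.0, sigma_b=5.0)
```

The resulting counts are over-dispersed relative to a pure Poisson distribution, with variance approximately $b + \sigma_b^2$.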

The results of the validation studies are shown in figure 12 for the 11 signal regions of the 0-lepton analysis (the case of the 3-lepton analysis is similar). We can see that the shapes of the distributions are identical, up to an irrelevant normalisation factor. The spikes are a normalisation issue, due to the fact that for regions with a small number of events the number of possible values for $\mathcal{L}^{\rm obs}_i(n = n^{\rm toy}_i)$ is smaller than it is for regions with a large background expectation $b_j$.
We conclude from this validation study that our procedure of using the most powerful signal region for each sampled point of the MSSM-15 parameter space, while normalising the likelihood via the expected values in the other signal regions, does not introduce any significant bias.

B.2 Signal simulation validation
For the validation of the event and detector simulations we adopted two different ATLAS analyses. For both analyses we cross-checked the event selection efficiencies resulting from our simulation, performed with PYTHIA 6.4 [81] and DELPHES 3.1 [90], against the corresponding ATLAS acceptance times efficiency values.
To cover a broad spectrum of signals the ATLAS SUSY searches with zero leptons [79] and with 3 leptons [80] were chosen. Both analyses use a total integrated luminosity of 4.7 fb −1 of data taken at √ s = 7 TeV.

B.3 Comparison of efficiencies
To validate our simulation setup, the relative efficiency difference between our setup and the official ATLAS analyses was determined,
$$\frac{\Delta\epsilon}{\epsilon} = \frac{(A\epsilon)_{\rm sim} - (A\epsilon)_{\rm ATLAS}}{(A\epsilon)_{\rm ATLAS}} \,,$$
where $(A\epsilon)_{\rm ATLAS}$ is the acceptance times efficiency of the ATLAS analyses and $(A\epsilon)_{\rm sim}$ that of our simulation. A negative value of $\Delta\epsilon/\epsilon$ corresponds to an overestimation of the efficiency by the simulation, a positive value to an underestimation.
For the validation, the default ATLAS detector card supplied with DELPHES 3.1 was modified. For both analyses the value of the jet cone parameter $R$ of the anti-$k_t$ jet algorithm was set to 0.4. For the 3-lepton analysis the lepton efficiencies were increased and the lepton isolation value was set to 0.7.
For the ATLAS 0-lepton analysis the validation was done on a cMSSM grid with $\tan\beta = 10$, with $m_0$ running from 100 GeV to 4180 GeV and $m_{1/2}$ from 60 GeV to 750 GeV. The comparison was done for each signal region individually. The results are shown in figures 13-15. The minimum value of the colour scale was fixed for all plots, because of some large deviations in regions with efficiencies close to zero; grid points with values below this minimum are shown in white. For the large zero-valued areas in the upper and lower right corners, no data points were provided by the ATLAS analyses.
For the 3-lepton analysis the validation was done for a simplified model in which only the masses of the neutralinos, charginos and sleptons are free parameters, and the $\tilde{\chi}^\pm_1$ and $\tilde{\chi}^0_2$ decay to $W$ and $Z$ bosons. The employed grid has values of 70 GeV to 350 GeV for $m_{\tilde{\chi}^\pm_1}$ [...] for $\Delta\epsilon/\epsilon$ in the likelihood, eq. (A.4), assuming that $\Delta\epsilon/\epsilon$ can be parameterized as a function of the efficiency.

B.4 Comparison with official ATLAS result for the cMSSM case
For completeness, we also validated our likelihood construction in the cMSSM framework and compared it with the results of the ATLAS Collaboration.
We computed the C.L.$_s$ to estimate the observed exclusion limits using the prescription outlined in ref. [107], which uses the concept of an Asimov data set and Wilks' theorem for its efficient evaluation. The results are shown in the left panel of figure 21. The continuous red line represents our estimated exclusion limit at 95% C.L., whereas the region between the dash-dotted gray lines gives the ATLAS Collaboration exclusion limit, accounting for uncertainties in the determination of the SUSY production cross-section. The agreement is very satisfactory, indicating that both the signal prediction procedure and the likelihood construction we adopted work remarkably well.

[Figure caption: Mean efficiency value with standard deviation for signal regions D and E loose/medium/tight of the 0-lepton analysis.]

Figure 21. In the left panel, we show the 95% C.L. observed exclusion limit for the 0-lepton analysis in the cMSSM from our likelihood construction (red line) and the C.L.$_s$ method, which is in remarkably good agreement with the ATLAS result [79]. The band limited by the gray dash-dotted lines is the exclusion limit by the ATLAS collaboration, accounting for uncertainties in the SUSY production cross-section. On the right, we show the normalized full log-likelihood.
In the right panel of figure 21 we display the shape of the full log-likelihood function in our setup.
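As a rough illustration of a C.L.$_s$ exclusion test for a single counting channel, the following toy-MC sketch uses the observed count itself as the test statistic. This is a deliberately simplified stand-in for the asymptotic profile-likelihood prescription of ref. [107] used in the text, and the event numbers are illustrative:

```python
import numpy as np

def cls_counting(n_obs, s, b, n_toys=100_000, seed=0):
    """Toy-MC CL_s for one counting channel, with the observed count n
    as the test statistic: CL_s = p_{s+b} / (1 - p_b)."""
    rng = np.random.default_rng(seed)
    toys_sb = rng.poisson(s + b, n_toys)
    toys_b = rng.poisson(b, n_toys)
    p_sb = np.mean(toys_sb <= n_obs)          # p-value under s+b
    one_minus_pb = np.mean(toys_b <= n_obs)   # CDF of n_obs under b-only
    return p_sb / max(one_minus_pb, 1e-12)

# A parameter point is excluded at 95% C.L. when CL_s < 0.05.
```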

C Statistical convergence
By describing our profile likelihood maps as "statistically convergent" we mean that our scans can be considered to have converged to a stable, robust mapping of the PL confidence levels. We do not intend to attach any coverage property to our PL intervals, as this would require a dedicated study well beyond the scope of the present paper.
In order to demonstrate the stability of our CL, and in particular the stability of the best-fit point on which the CL are predicated, we have split our combined sample for the "All data" case into 5 sub-samples of equal size. We have performed a separate PL analysis of each subsample and compared it with the PL obtained from the total sample. The purpose is to investigate whether the PL obtained from the different subsamples is comparable (within numerical noise) with the one from the total sample. If this is the case, then we can conclude that increasing the sample size by a factor of 5 (from each subsample to the total sample) does not appreciably alter our statistical results, and hence we can deem our scans to be "converged" (in this heuristic sense).

Figure 22. Comparison of 1D PL derived from all samples (black, thick line) and subsets of 20% of the samples (coloured, thin lines) for a few selected quantities of interest for the global fits analysis including all data except LHC SUSY searches and Higgs couplings. The black encircled cross is the global best fit (from all the samples combined). Top row: "All data" case; middle row: "without $g-2$"; bottom row: "Planck upper limit". To highlight the comparison in the tails, we plot minus twice the log profile likelihood (i.e., the effective chi-squared).

For quantities that are directly constrained by the data (i.e., on which a Gaussian likelihood is imposed), we can see that there is hardly any difference between the global-sample PL and the PL obtained from each subsample. For other variables, which are only indirectly constrained (such as the neutralino mass), the subsamples and the global sample remain very close, albeit with slightly larger numerical noise in the tails (roughly, beyond the 3σ level).
One might naively expect that the PL from all the samples (black, thick line), having been profiled over all of the points included in each of the subsamples, ought to always be the absolute minimum with respect to the PL obtained from the subsamples. This is generally the case, although occasionally the effective chi-squared from a subsample happens to dip below the global-sample effective chi-squared (e.g., $M_2$ for the "without $g-2$" case). This can be understood by recalling that the PL for each of the subsamples is determined with respect to the best-fit chi-squared value of that subsample alone. If the best-fit chi-squared of a subsample is slightly worse (i.e., higher in value) than the global best fit, the effective chi-squared for that subsample can dip below the global value. This, however, only happens in cases where the PL is very flat, which makes it more prone to numerical fluctuations in the subsamples.
Finally, we have checked that the best-fit coordinates obtained from each of the 5 subsamples are always within the 1σ CL obtained from the global PL (one dimension at a time). This indicates that the best-fit point is stable with respect to the sample size.
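The subsample comparison of this appendix can be mimicked with a simple binned profile-likelihood check; the binning, tolerance and synthetic scan data below are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

def profile_1d(x, chi2, edges):
    """1D profile likelihood: in each bin of the parameter x, keep the
    minimum effective chi-squared over the scan points in that bin,
    measured relative to that sample's own best fit."""
    idx = np.clip(np.digitize(x, edges) - 1, 0, len(edges) - 2)
    prof = np.full(len(edges) - 1, np.inf)
    np.minimum.at(prof, idx, chi2)
    return prof - chi2.min()

def is_converged(x, chi2, n_sub=5, bins=40, tol=1.0, seed=0):
    """Heuristic convergence test in the spirit of appendix C: deem
    the scan converged if the profile from each of n_sub equal-sized
    subsamples agrees with the full-sample profile within tol."""
    edges = np.linspace(x.min(), x.max(), bins + 1)
    full = profile_1d(x, chi2, edges)
    order = np.random.default_rng(seed).permutation(len(x))
    for part in np.array_split(order, n_sub):
        sub = profile_1d(x[part], chi2[part], edges)
        ok = np.isfinite(full) & np.isfinite(sub)  # bins populated in both
        if np.max(np.abs(full[ok] - sub[ok])) > tol:
            return False
    return True
```

Because each subsample profile is referenced to its own best fit, a subsample curve can dip slightly below the full-sample one in flat regions, exactly the behaviour discussed above.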
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.