Testing Higgs boson scenarios in the phenomenological NMSSM

There may be more than one Higgs boson present near 125 GeV which cannot be resolved with the current energy resolution at the Large Hadron Collider (LHC). This is the case for beyond-the-Standard-Model (BSM) theories such as the Next-to-Minimal Supersymmetric Standard Model (NMSSM). In order to assess the extent to which current collider limits, the cold dark matter relic density and direct detection limits are capable of discriminating NMSSM scenarios with two mass-degenerate Higgs bosons from the case with a single Higgs boson around 125 GeV, we perform global fits of the weak-scale phenomenological NMSSM (pNMSSM) with 26 parameters. This is done using the nested sampling implementation in {\sc PolyChord}, a next-generation tool for Bayesian inference. The Bayesian comparison analyses indicate that the combined collider and cosmological data favour the pNMSSM scenario in which the second-lightest CP-even Higgs boson is identified as the discovered scalar around 125 GeV. The lightest CP-even Higgs boson is non-SM-like, with a mass posterior distribution centred around 50 GeV. More stringent constraints, perhaps from LHC searches for BSM Higgs bosons, are needed for decisive conclusions regarding the various possibilities for 125 GeV BSM Higgs bosons.

See also the work in Ref. [50] for the interpretation of the Bayes factors. The Bayes factors, as explained below, also quantify the relative posterior probability masses for the compared scenarios when these are a priori equally likely.

A. Bayes theorem and Bayes factors
For a given hypothesis, H, based on a model with a set of N parameters, θ, the a priori assumed values which the parameters can take are encoded in the prior probability density distribution, p(θ|H). The support a given hypothesis draws from a data set, d, is quantified by the probability density of observing the data set given the hypothesis, the Bayesian evidence,
\[ Z \equiv p(d|H) = \int p(d|\theta, H)\, p(\theta|H)\, d\theta . \]
This can be obtained directly from Bayes' theorem,
\[ p(\theta|d, H) = \frac{p(d|\theta, H)\, p(\theta|H)}{p(d|H)} . \]
Here p(d|θ, H) is the likelihood, a measure of the probability of obtaining the data set d from a given set of the model parameters. To compare between, say, H0 and H1, the Bayes factor, K = Z1/Z0, should be computed. This can be done via the posterior odds ratio,
\[ \frac{p(H_1|d)}{p(H_0|d)} = \frac{Z_1}{Z_0}\, \frac{p(H_1)}{p(H_0)} . \]
For the case where the two hypotheses are a priori equally likely, p(H1)/p(H0) = 1, the logarithm of the Bayes factor can be obtained as the logarithm of the posterior odds:
\[ \Delta \log_e Z = \log_e \frac{p(H_1|d)}{p(H_0|d)} = \log_e \frac{Z_1}{Z_0} . \]
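As a minimal numerical illustration of the relations above (the log-evidence values below are invented placeholders, not results from the paper), the Bayes factor and posterior odds can be computed as:

```python
import math

# Hypothetical log-evidences for two scenarios (placeholder numbers only):
lnZ0, lnZ1 = -110.2, -108.0

# Bayes factor K = Z1/Z0, computed in log space for numerical safety:
delta_lnZ = lnZ1 - lnZ0
K = math.exp(delta_lnZ)

# With equal a priori probabilities, p(H1)/p(H0) = 1, the posterior odds
# equal the Bayes factor:
posterior_odds = K * 1.0

print(delta_lnZ, K)
```

Working in log evidences, as PolyChord does, avoids under/overflow when the evidences themselves are astronomically small.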
Obtaining Z1/Z0 > 1 infers that the data supports H1 more than H0, and vice versa if the ratio is less than one. The Jeffreys' scale shown in Table I is used for interpreting the Bayes factors. Assuming that the scenarios represent mutually exclusive volumes, Ω0, Ω1 and Ω2 respectively, in the "full" θ-space, the corresponding posterior probability masses can be computed as, for instance,
\[ p(H_i|d) = \int_{\Omega_i} p(\theta|d)\, d\theta . \]
The global evidence, Z, cancels out when computing ratios such as p(H1|d)/p(H0|d). So the evidence and posterior-mass ratios are equivalent. As such, the priors for scenarios H0,1,2 are not free to be chosen arbitrarily. They are instead set by the prior distribution p(θ), with relative priors such as p(H0)/p(H1) constrained to match the corresponding integrations of p(θ|H_i) over the domains Ω_i. The prior ratios can be estimated by scanning over the pNMSSM parameters without imposing the likelihoods or scenario requirements, and then counting the number of surviving points after imposing the Higgs boson(s) mass restrictions. Thus p(H_i), i = 0, 1, 2, can be taken to be the fraction of surviving points.
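The prior-fraction estimate just described can be sketched as follows. The mass distributions here are a toy stand-in for a real spectrum generator, and the 122–128 GeV window mirrors the one used later in the text; all numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for the spectrum calculator: draw the two lightest CP-even
# Higgs masses from broad distributions (hypothetical, for illustration).
n = 100_000
m_h1 = rng.uniform(1.0, 200.0, n)
m_h2 = np.maximum(m_h1, rng.uniform(1.0, 500.0, n))  # enforce m_h2 >= m_h1

window = (122.0, 128.0)  # mass window around 125 GeV

def in_window(m):
    return (m > window[0]) & (m < window[1])

# Scenario selections (mutually exclusive by construction here):
sel_H1 = in_window(m_h1) & in_window(m_h2)   # both h1 and h2 near 125 GeV
sel_H0 = in_window(m_h1) & ~in_window(m_h2)  # only h1 near 125 GeV
sel_H2 = in_window(m_h2) & ~in_window(m_h1)  # only h2 near 125 GeV

# p(H_i) estimated as the fraction of surviving points:
p_H = {name: sel.mean() for name, sel in
       [("H0", sel_H0), ("H1", sel_H1), ("H2", sel_H2)]}
print(p_H)
```

The prior ratios, e.g. p(H0)/p(H1), then follow directly from these fractions.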
For the nested sampling implementation in PolyChord, parameter points were sampled from a flat prior distribution, p(θ), which integrates to 1 over the "full" pNMSSM θ-space. This "global" prior is used for each of the three scenarios considered, so PolyChord is run once per scenario. From the sampled parameter points, and since ∫ p(θ) dθ = 1 with p(θ) expandable in principle as
\[ p(\theta) = \sum_i p(\theta|H_i)\, p(H_i) , \]
ratios such as p(H1|d)/p(H0|d) represent the full posterior-mass ratios as in Eq. (3). Thus the Bayes factor, K, can be obtained from the evidences (the Z's) that PolyChord returns as K = Z1/Z0.

B. PolyChord versus MultiNest
Here we briefly describe the similarities and contrasts between the relatively new PolyChord [41,42] and the MultiNest algorithm [40,43], which we have used in the past for similar analyses. Both MultiNest and PolyChord are effective Bayesian evidence calculators that also perform as excellent multi-modal posterior samplers. At their core, the "nested sampling" algorithm [38] is implemented. They differ in how a new set of model parameters is generated over the sampling iterations. MultiNest is based on rejection sampling or, alternatively, importance sampling. PolyChord, on the other hand, is based on the slice sampling method.
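To make the nested-sampling idea concrete, here is a deliberately naive one-dimensional sketch: a unit Gaussian likelihood with a flat prior on [−10, 10], so the analytic evidence is ≈ 1/20 = 0.05. New live points are drawn by rejection from a box around the current live points, a crude stand-in for MultiNest's ellipsoidal rejection sampling; PolyChord would instead use slice sampling at this step:

```python
import numpy as np

rng = np.random.default_rng(1)

def loglike(x):
    # Unit Gaussian likelihood, normalised over the real line
    return -0.5 * x**2 - 0.5 * np.log(2.0 * np.pi)

lo, hi = -10.0, 10.0  # flat prior; analytic evidence Z ~ 1/(hi - lo) = 0.05
nlive = 200

live = rng.uniform(lo, hi, nlive)
live_logl = loglike(live)

logZ, logX = -np.inf, 0.0
for _ in range(1500):
    i = int(np.argmin(live_logl))
    logX_new = logX - 1.0 / nlive                   # expected shrinkage per step
    logw = np.log(np.exp(logX) - np.exp(logX_new))  # prior-volume shell weight
    logZ = np.logaddexp(logZ, logw + live_logl[i])
    # Replace the worst point: rejection-sample above the likelihood contour
    # from a box slightly larger than the current live-point span.
    span = live.max() - live.min()
    box_lo, box_hi = live.min() - 0.2 * span, live.max() + 0.2 * span
    while True:
        x = rng.uniform(box_lo, box_hi)
        if loglike(x) > live_logl[i]:
            live[i], live_logl[i] = x, loglike(x)
            break
    logX = logX_new

# Contribution of the remaining live points at termination
for l in live_logl:
    logZ = np.logaddexp(logZ, logX - np.log(nlive) + l)

print(np.exp(logZ))  # statistically, should land near 0.05
```

In one dimension the rejection step stays cheap, but in high dimensions its acceptance rate collapses, which is precisely the scaling problem that motivates PolyChord's slice-sampling approach.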
We used PolyChord because of its improved scaling with dimensionality, D, as illustrated in [42]. For the multidimensional Gaussian problem analysed in [42], the run-time comparisons between PolyChord and MultiNest are summarised in Table II. Given the experience in using MultiNest for large parameter models (of order 20 to 30 parameters), especially the difficulties in getting runs to finish for tightly constrained models or models with a high number of parameters, we decided to use PolyChord [174].

III. THE PHENOMENOLOGICAL NMSSM

The NMSSM, for reviews see e.g. [3][4][5], has phenomenological advantages over the MSSM. These include the solution of the µ-problem [52]: the vacuum expectation value of an additional gauge singlet (S) can generate the superpotential µ-term dynamically. It also has a richer Higgs sector. There are three CP-even Higgs bosons, h1,2,3, and two CP-odd Higgs bosons, a1,2, which are mixtures of the MSSM-like Higgs doublet fields and, respectively, the real or imaginary part of S. For our analyses, we shall consider an R-parity conserving NMSSM with minimal CP- and flavour-violating free parameters and superpotential
\[ W = W_{\rm MSSM} + \lambda \hat{S} \hat{H}_u \cdot \hat{H}_d + \frac{\kappa}{3} \hat{S}^3 , \]
where W_MSSM is the MSSM-like superpotential without the µ-term. Here, the chiral superfields carry the usual Standard Model gauge quantum numbers. The corresponding soft SUSY-breaking terms comprise the scalar and gaugino mass terms, with the singlet-sector trilinear contributions given by
\[ -\mathcal{L}_{\rm soft} \supset \lambda A_\lambda S H_u \cdot H_d + \frac{\kappa}{3} A_\kappa S^3 + {\rm h.c.} \]
A tilde-sign over a superfield symbol represents its scalar component, while an asterisk, as in for example ũ*_R, represents the scalar component of Ū. The SU(2)_L fundamental representation indices are denoted by a, b = 1, 2 while the generation indices are denoted by i, j = 1, 2, 3. ε^{ab}, with ε^{12} = −ε^{21} = 1, is a totally antisymmetric tensor.
In an approach similar to the pMSSM [22,[24][25][26]] construction, the pNMSSM parameters are defined at the weak scale. To suppress sources of unobserved CP violation and flavour-changing neutral currents, the sfermion mass and trilinear scalar coupling parameters were chosen to be real and diagonal. For the same reason, the first- and second-generation sfermion mass parameters were set to be degenerate. The gaugino mass parameters were made real by neglecting CP-violating phases. This leads to the non-Higgs-sector set of parameters: the gaugino mass parameters M_{1,2,3}, the sfermion mass parameters m_f̃, and the trilinear scalar couplings A_{t,b,τ}, the latter related, via the Yukawa matrices Y, to the diagonalised trilinear matrices T_U, T_D and T_L respectively. After electroweak symmetry breaking, the vacuum expectation value (vev) of S, v_s, develops an effective µ-term, µ_eff = λ v_s. This and the ratio of the MSSM-like Higgs doublet vevs, tan β = ⟨H_2⟩/⟨H_1⟩, are free parameters which, together with the mass of the Z-boson, m_Z, can be used for computing m²_{H1,2,S} via minimisation of the scalar potential. With these, the tree-level Higgs-sector parameters are λ, κ, A_λ, A_κ, tan β and µ_eff. Adding to the lists of parameters in Eq. (15) and Eq. (16) four SM nuisance parameters, namely the top and bottom quark masses m_{t,b}, m_Z and the strong coupling constant, α_s, makes up the 26 free parameters of the pNMSSM. M_{1,2} strongly affect the electroweak gaugino masses, for which a wide range of values, GeV to TeV, is possible.
We let M1 ∈ [−4, 4] TeV, and similarly for M2 except that it is fixed to be positive without loss of generality (see e.g. [167]). A strong sensitivity of the pNMSSM Higgs sector to the gluino and the 1st/2nd-generation squark mass parameters is not anticipated. However, we choose to let them vary since the limits from SUSY searches will be part of the experimental data used. As such, following the work in [24,26], we let the gluino and squark mass parameters vary within [100 GeV, 4 TeV] and the trilinear scalar couplings within [−8 TeV, 8 TeV]. tan β is allowed to vary between 2 and 60. With the aim of minimising fine-tuning, we subjectively choose to vary the effective µ-parameter, µ_eff = λ v_s, within 100 to 400 GeV, i.e. not (orders of magnitude) far away from m_Z. The remaining Higgs-sector parameters were allowed to vary in the ranges summarised in Table III. For all the parameters except the SM ones, flat prior probability density distributions were assumed. For the experimentally measured SM nuisance parameters, Gaussian distributions around the measured values were used.

Checking the prior-dependence of results is useful for assessing, in an unambiguous manner, the strength of the data in constraining the model. The priors can be chosen to be flat or logarithmic, with the latter favouring the lower regions of the parameter ranges. There are two bottlenecks in our attempts to sample the pNMSSM parameters with logarithmic priors. On one hand, the absence of signatures for SUSY at the LHC pushes the sparticle mass lower bounds towards or well into the multi-TeV region. Therefore, sampling the phenomenologically viable parameters according to a logarithmic prior is difficult and computationally expensive. For the attempted log-prior fits, only parameters that do not cross zero were sampled logarithmically; those that can be zero were sampled uniformly.
On the other hand, for the nested sampling algorithm in PolyChord to get started, a 200-point sample of the 26-parameter pNMSSM is required. Using logarithmic priors, it was not possible to generate the 200 model points within the maximum of 3072 CPU core-hours per run at our disposal [175]. Thus we restrict our analyses to flat priors only. The conclusions presented in this article are valid only within this context.
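The flat-prior setup described above can be sketched as follows. The SM central values and uncertainties shown are illustrative placeholders rather than the exact fit inputs, and only a representative subset of the 26 parameters is drawn:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_pnmssm_point(rng):
    """Draw one parameter point: flat priors for the SUSY parameters
    (ranges as quoted in the text, in GeV), Gaussian priors for the SM
    nuisance parameters (central values/errors here are illustrative
    placeholders, not the exact inputs used in the paper)."""
    return {
        "M1":        rng.uniform(-4000.0, 4000.0),  # bino mass parameter
        "M2":        rng.uniform(0.0, 4000.0),      # fixed positive w.l.o.g.
        "M3":        rng.uniform(100.0, 4000.0),    # gluino mass parameter
        "m_squark12": rng.uniform(100.0, 4000.0),   # 1st/2nd gen. squarks
        "A_t":       rng.uniform(-8000.0, 8000.0),  # trilinear coupling
        "tan_beta":  rng.uniform(2.0, 60.0),
        "mu_eff":    rng.uniform(100.0, 400.0),     # lambda * v_s
        # SM nuisance parameters, Gaussian around the measurements:
        "m_Z":       rng.normal(91.1876, 0.0021),
        "m_t":       rng.normal(172.5, 0.7),        # illustrative values
        "alpha_s":   rng.normal(0.1181, 0.0011),    # illustrative values
    }

point = draw_pnmssm_point(rng)
print(point)
```

A log-prior variant would replace the `rng.uniform` calls with draws uniform in the logarithm of strictly positive parameters, which is exactly where sign-changing parameters such as M1 or A_t cause the complication noted above.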
Another issue concerning our pNMSSM parametrisation is related to "naturalness", or the avoidance of excessive fine-tuning associated with obtaining the correct weak scale (m_Z). With the naturalness-prior parametrisation [64], fine-tuning can be avoided by directly scanning the parameters m_Z and tan β rather than m_H1 and m_H2. Moreover, it was shown in [37] that the naturalness-prior parametrisation can significantly affect the Bayesian evidence values. For the analyses presented in this article, such a parametrisation was not considered. Rather, a Gaussian prior for m_Z centred on the measured value was used. Doing this injects information about the weak scale into the prior.
In addition, µ_eff was chosen to be near the electroweak symmetry breaking scale, between 100 and 400 GeV, since one of our aims is to show that there are still vast regions of parameter space with low-mass gauginos and BSM Higgs bosons which are not ruled out by current data. It is accepted that fine-tuning penalisation manifests implicitly and automatically within Bayesian global fits, as presented in [33,58,59]. Likewise, fine-tuning limits could be imposed during the fits by using the various fine-tuning measures [60][61][62][63][64][65][66]. Using any of these is not within the scope of our present analyses, given the deliberate targeting of regions with low-mass electroweak gauginos. Now, coming back to the Higgs-sector potential, the details concerning the NMSSM Higgs mass matrices and couplings can be found in the literature; see for example [4,[53][54][55][56][57]]. The Z3-invariant NMSSM Higgs potential can be obtained from the SUSY gauge interactions, the soft-breaking terms and the F-terms. Here g1 and g2 denote the U(1)_Y and SU(2)_L gauge couplings. Diagonalising the resulting mass matrices leads to the CP-even Higgs boson mixing matrix, where the physical Higgs fields carry indices R for the CP-even and indices I for the CP-odd states.
For the first case, a pNMSSM point with corresponding signal strengths µ_i is associated with the likelihood
\[ -2 \log L = \sum_i \frac{(\mu_i - \hat{\mu}_i)^2}{\Delta\hat{\mu}_i^2} , \]
where i runs over the various combinations of Higgs boson production and decay modes, and μ̂_i ± Δμ̂_i represent the experimentally determined signal strengths. Theoretically, the signal strength associated with a model point can be computed, for a given production mode X and decay mode Y, as
\[ \mu_{X,Y} = \frac{\sum_{X,Y} \epsilon_{X,Y}\, \sigma_X\, {\rm BR}(h \to Y)}{\sum_{X,Y} \epsilon_{X,Y}\, \sigma_X^{\rm SM}\, {\rm BR}^{\rm SM}(h \to Y)} . \]
Here ε_{X,Y} are the experimental efficiencies, X ∈ {ggH, VH, VBF, ttH} and Y ∈ {γγ, VV(*), bb, ττ}. For a proton-proton collider, the elements of X represent gluon-gluon fusion (ggH), associated production with a vector boson (VH), vector boson fusion (VBF) and associated production with top quarks (ttH). The elements of Y represent the Higgs decays to diphotons (γγ), W or Z bosons (VV), bottom quarks (bb) and tau leptons (ττ). Now, computing µ as in Eq. (24) can be impractical since, for a meaningful theory-versus-experiment comparison, the non-SM predictions in the numerator should be computed using the same prescriptions (order in perturbation theory, implementation of parton distribution functions, etc.) as the denominator. The second approach, whereby the inputs to Lilith are the reduced couplings, does not suffer from this problem. BSM physics effects can be parametrised in terms of the reduced couplings: the cross section for each production process X (or the partial decay width for each decay mode Y) is scaled [155] by a factor of C_X² (respectively C_Y²), such that
\[ \sigma_X = C_X^2\, \sigma_X^{\rm SM} , \qquad \Gamma_Y = C_Y^2\, \Gamma_Y^{\rm SM} . \]
The reduced couplings computed with NMSSMTools, together with the invisible and undetectable decay branching ratios, can then be passed to Lilith for computing the likelihood. Another set of limits was also used for confronting the pNMSSM; these were not included during the global-fit samplings.
Instead, the limits implemented in SModelS [84][85][86][87][88][89][90][91][92][93][94] and HiggsBounds [106,107] were applied to the posterior samples from the pNMSSM fits to the data in the upper section of Table IV. Including the SModelS constraints during the fits would slow the exploration of the pNMSSM space beyond tolerance. For this reason, a "post-processing" procedure was used: the posterior sample points are passed, in SLHA [108] format, through these tools.
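The signal-strength likelihood described above can be sketched with a simplified stand-in (Gaussian, uncorrelated categories; the actual Lilith likelihood is more elaborate), together with the reduced-coupling scaling:

```python
import numpy as np

def signal_strength_loglike(mu_model, mu_hat, dmu_hat):
    """Gaussian log-likelihood over signal-strength categories i:
    log L = -0.5 * sum_i ((mu_i - mu_hat_i) / dmu_hat_i)^2.
    A simplified, uncorrelated stand-in for the Lilith likelihood."""
    mu_model, mu_hat, dmu_hat = map(np.asarray, (mu_model, mu_hat, dmu_hat))
    return -0.5 * float(np.sum(((mu_model - mu_hat) / dmu_hat) ** 2))

def mu_from_reduced_couplings(C_X, C_Y):
    """Signal strength from reduced couplings: sigma and BR scale as
    C_X^2 and C_Y^2. (Any change in the Higgs total width, which would
    also rescale the branching ratios, is neglected in this sketch.)"""
    return C_X**2 * C_Y**2

# Usage: a SM-like point (all reduced couplings = 1) gives mu = 1 in every
# category, and zero penalty against mu_hat = 1 measurements.
mu_sm = mu_from_reduced_couplings(1.0, 1.0)
print(mu_sm, signal_strength_loglike([mu_sm], [1.0], [0.2]))
```

A real implementation would also fold in the experimental efficiencies ε_{X,Y} and the correlations between categories when they are published.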

A. Bayesian evidences
The results for the Bayesian comparisons between the considered hypotheses are shown in Table V. For the third set of results, we discuss the comparisons between the scenarios as follows.
• The data show an inconclusive result for the comparison of H 0 against H 1. This is an indication that the CDM DD limits on the one hand, and the Br(B s → µ + µ − ), Br(B u → τ ν) and δa µ set of limits on the other, are sensitive to these scenarios but in opposite directions. This can be seen by noting that the support for H 1 against H 0 changed from Strong to Moderate and then to Inconclusive in going from the first to the second and then to the third set of results shown in Table V.
• Similarly, the data demonstrate Moderate evidence in support of H 2 against H 1. For the H 2 versus H 1 comparison, the evidence changes from Moderate to Strong upon the inclusion of the CDM direct detection limits.
This can be understood as being due to the presence of a much lighter h 1 in the H 2 scenario (see Fig. 1, first-row left plot). A lighter h 1 leads to larger LSP-nucleon cross sections and is thus more likely to be ruled out by the direct detection limits.
• The data have Moderate support for H 2 relative to H 0. Over the first two sets of results, the support for H 2 against H 0 is Strong; the inclusion of the Br(B s → µ + µ − ), Br(B u → τ ν) and δa µ limits dilutes the conclusion to a Moderate one.
Altogether, there is Moderate support for H 2 against H 0, and H 2 may be considered as ranked first, with H 0 and H 1 jointly ranked second. The hypothesis with a single Higgs boson around 125 GeV is not decisively supported. The data tend towards preferring the H 2 scenario, which permits the possibility of an additional, much lighter (than 125 GeV) scalar.
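The qualitative labels used in this discussion can be encoded in a small helper. The thresholds (1.0, 2.5, 5.0 in |∆ ln Z|) follow a common Jeffreys-scale convention and are assumptions here; the exact boundaries of the paper's Table I may differ:

```python
def jeffreys_label(delta_lnZ):
    """Map |Delta ln Z| to a qualitative Jeffreys-scale label.
    Thresholds (1.0, 2.5, 5.0) follow a common convention and are
    assumptions; the paper's Table I may use different boundaries."""
    x = abs(delta_lnZ)
    if x < 1.0:
        return "Inconclusive"
    if x < 2.5:
        return "Moderate"
    if x < 5.0:
        return "Strong"
    return "Decisive"

# Usage: the sign of Delta ln Z says which hypothesis is favoured; the
# magnitude sets the strength of the evidence.
print(jeffreys_label(0.4), jeffreys_label(3.1))
```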

B. Posterior distributions related to CP-even Higgs bosons
To complement the Bayes factors reported in Table V, in this subsection we describe the posterior distributions of the masses and couplings of the two lightest Higgs bosons. The plots [176] in Fig. 1 show respectively the one- and two-dimensional mass distributions within each of the scenarios considered. For H 2, the m h1 distribution is peaked at a value much less than 125 GeV. This is because of the interplay between the Large Electron-Positron collider (LEP) and LHC constraints. With respect to the position of the peak, the lower-mass region is suppressed by LEP constraints such as the upper limits on the cross section for e + e − → h 1 Z [168,169]. The heavier-mass region beyond the peak gets ruled out by the upper limits on the reduced production cross sections at the LHC, such as for the p p → h 1 → a 1 a 1 [173], ggF → h 1 → γ γ [170] and ggF → h 1 → τ τ [171,172] processes.
The nature of the two lightest CP-even Higgs bosons in each of the three scenarios can be determined by considering their reduced couplings to fermions (up-type, u, and down-type, d) and to gauge bosons (W, Z and γ). In terms of the neutralino mixing matrix elements N 1j, the neutralino LSP is considered to be gaugino-, Higgsino- or singlino-like when p 1 = N 2 11 + N 2 12, p 2 = N 2 13 + N 2 14, or p 3 = N 2 15 dominates, respectively. The posterior distributions for these are shown in Fig. 3. We find that for H 0 (dashed/green line), the LSP is a mixed gaugino-Higgsino state with approximately zero singlino content. The case is different for H 2, for which the LSP is a mixed Higgsino-singlino state, dominantly singlino, with zero gaugino content. In the case of H 1, the LSP is instead dominantly Higgsino. The neutralino LSP composition determines which annihilation and co-annihilation processes (see, e.g., Ref. [4]) play the leading role in bringing the relic density near the experimental value, Ω CDM h 2 = 0.12 [134]. It also determines the processes that could be involved in the direct detection of the dark matter candidate. Concerning the latter, the dominant processes are t-channel Z or Higgs boson exchange for spin-dependent or spin-independent interactions, respectively. For instance, a highly singlino-like LSP leads to a small spin-dependent cross section. The application of the dark matter direct detection limits therefore leads to a posterior distribution with a dominantly singlino LSP, as is the case within H 2.
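The composition bookkeeping above can be sketched directly. The 5×5 neutralino mixing in the NMSSM supplies the LSP row N_1j, j = 1…5; the example row below is invented for illustration:

```python
import numpy as np

def lsp_composition(N_row):
    """Classify the neutralino LSP from the first row N_1j of the
    neutralino mixing matrix, following the text:
      gaugino  fraction p1 = N11^2 + N12^2,
      higgsino fraction p2 = N13^2 + N14^2,
      singlino fraction p3 = N15^2.
    Returns the dominant label and the three fractions."""
    N = np.asarray(N_row, dtype=float)
    p1 = N[0]**2 + N[1]**2
    p2 = N[2]**2 + N[3]**2
    p3 = N[4]**2
    label = max([("gaugino", p1), ("higgsino", p2), ("singlino", p3)],
                key=lambda t: t[1])[0]
    return label, (p1, p2, p3)

# Usage with an invented, roughly unitary row (dominantly singlino):
print(lsp_composition([0.05, 0.05, 0.3, 0.3, 0.9]))
```

Applied to the posterior samples, a histogram of the dominant label per point reproduces the kind of composition summary shown in Fig. 3.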
In Fig. 4 (first row), the posterior distributions of the spin-independent and spin-dependent LSP-proton scattering cross sections compatible with all the considered collider, astrophysical (including the CDM direct detection bounds reported in [157][158][159][160][161][162][163]) and flavour physics constraints are shown. The sudden suppressions in the direct detection cross sections occur at points for which there are cancellations among the neutralino-neutralino-Higgs interaction terms [164].
The second row of plots in Fig. 4 shows the DM direct detection limits used for the global fits. Regions above the contours are constrained. For instance, the regions above the blue dotted line in Fig. 4 (first-row left) would have been excluded. They were not excluded because the relic densities at those points are much less than the experimentally measured central value around 0.1, as shown in Fig. 4 (third/last plot).

VI. CONCLUSIONS
The nested sampling technique [38], as implemented in PolyChord [41,42], has been applied for computing Bayesian evidences within a 26-parameter pNMSSM. The evidence values are based on limits from colliders, astrophysical bounds on the dark matter relic density and direct detection cross sections, and low-energy observables such as the muon anomalous magnetic moment and flavour physics observables. These were used for comparing three pNMSSM hypotheses:

• H 0 : The scenario for which the observed scalar around 125 GeV is identified as the lightest CP-even Higgs boson, h 1. m h1 was allowed according to a Gaussian distribution with a 3 GeV standard deviation.
• H 1 : This is the same as H 0 but with the restriction that m h2 be within 122 to 128 GeV.
• H 2 : The scenario for which the observed scalar around 125 GeV is identified as the second-lightest CP-even Higgs boson, h 2.

Using the Jeffreys' scale for interpreting the evidence values (see Table I), the analyses indicate that H 2 can be considered as ranked first, with Moderate support, amongst the three hypotheses. H 0 and H 1 can be considered as jointly ranked second. That is, the current data used for the Bayesian comparisons favour the hypothesis with the possibility of an additional, much lighter (than 125 GeV) scalar. The lightest scalar within H 2 turns out to have a mass distribution centred below the W-boson mass, as shown in Fig. 1. Due to the "sharing of couplings" effect, h 2 is SM-like while h 1 is dominantly singlet-like.
From the posterior distributions presented, pNMSSM benchmark points could be constructed for further analyses with regard to non-SM-like Higgs bosons. For instance, consider the two-dimensional posterior distribution in the (λ, κ) plane shown in Fig. 5. For the H 2 hypothesis, the plane is well constrained and approximately reduces to a line. The model points along this line could serve as excellent benchmarks for testing non-SM Higgs scenarios.
Other possible directions for further investigation can be described as follows. In this article, the composition of the LSP was not fixed; the only requirement was that the relic density and the elastic cross section with nucleons be within the experimentally allowed ranges. One can go beyond this by demanding, a priori, a particular LSP composition. That is, one could require, in addition to constraints on the masses of the light CP-even Higgs bosons, that only a specific LSP composition be allowed during the pNMSSM parameter space explorations. Next, there are experimental measurements which could better probe the pseudo-degenerate Higgs scenario. These include the precise determination of the Higgs boson's total decay width. An update of the analyses presented here to include these ideas could shed more light on the pseudo-degenerate Higgs scenario.
There are caveats to our analyses. One concerns the uncertainty of the Higgs mass prediction: here we have used the traditional, and possibly too optimistic, uncertainty of 3 GeV. A more careful and systematic treatment of the uncertainties, see e.g. [165,166], could significantly impact the Bayesian evidence values. This is also the case should naturalness priors be used for the analyses; in [37], it was shown that imposing naturalness requirements significantly affects the evidence. Further, the correlations among the various observables were not included.
Whenever available, the inclusion of correlations could alter the pNMSSM posteriors. For instance, the measurements of the Higgs boson mass and couplings may come from a single experiment and are therefore likely to be correlated. Finally, the inclusion of SUSY limits, such as those in SModelS, during the fits, together with the use of logarithmic priors, should lead to more robust conclusions about the pNMSSM.