Testing Higgs boson scenarios in the phenomenological NMSSM

There could be another scalar in nature quasi-degenerate with the observed one ($h_{125}$). This is possible in models such as the Next-to-Minimal Supersymmetric Standard Model (NMSSM). The scenario with a single Higgs boson near $125\,\text{GeV}$ can be compared to those with multiple ones. In order to assess the extent to which the current set of collider, cold dark matter relic density and direct detection limits is capable of discriminating these scenarios, we perform, for the first time, global fits of a weak-scale phenomenological NMSSM with 26 free parameters using the nested sampling implementation in PolyChord, a next-generation tool for Bayesian inference. The analyses indicate that the data used show a moderate tendency to support the scenario with an additional scalar much lighter than $h_{125}$, with a mass distribution centred below the W-boson mass.
More stringent constraints are, however, needed for decisive inference regarding an additional Higgs boson with mass much less than or near $125\,\text{GeV}$.


Introduction
The Standard Model (SM) of particle physics predicted the existence of a neutral scalar particle, the Higgs boson, with an unknown mass. After decades of technological and experimental developments, such a particle with mass near 125 GeV was eventually discovered at the Large Hadron Collider (LHC) [1,2]. This discovery completed the SM as a successful quantum field theory description of the electroweak and strong interactions. However, to date, there remain observations and theoretical indications of physics beyond the SM (BSM). Supersymmetry (SUSY)-based BSM models, such as the Next-to-Minimal Supersymmetric Standard Model (NMSSM) [3-5], have spectra with multiple neutral scalar particles. A great deal of theoretical and phenomenological work within the NMSSM framework has been done by various groups. To mention a non-exhaustive selection, this includes global fits of the model's sub-spaces against experimental constraints [6-10], studies on its relation to baryogenesis [11-14], phenomenological comparisons [15-18] and vacuum stability analyses [19-21]. In this article we analyse, for the first time, a weak-scale NMSSM with many free parameters, in contrast to the analogue defined at the grand unification scale which has typically been studied the most. We call this scenario the "phenomenological NMSSM" (pNMSSM), as it is constructed with motivations similar to those of the well-studied pMSSM [22-26].
Within the pNMSSM, various Higgs sector scenarios can be considered depending on how the observed 125 GeV scalar at the LHC, which we shall label as $h_{125}$, is identified and on how the other Higgs boson masses are restricted. Within the literature, the scenario studied the most is the ordinary case for which the lightest CP-even scalar, $h_1$, is identified as $h_{125}$. Here we consider three scenarios, $H_0$, $H_1$, and $H_2$, which are defined as follows: • $H_0$: $h_1 \equiv h_{125}$. • $H_1$: $h_1 \equiv h_{125}$ with the restriction that $m_{h_2}$ lies within 122-128 GeV. • $H_2$: $h_2 \equiv h_{125}$, so that an additional CP-even scalar lighter than $h_{125}$ is present. A further possibility is a fully quasi-degenerate spectrum with $m_{h_2} \sim m_{h_3} \sim m_{h_1}$. This and other potentially interesting further possibilities will not be addressed in this article.
Here $h_i$, with $i = 1, 2, 3$, represent the three mass-ordered CP-even Higgs bosons. Our proposal is that by using the Bayesian model comparison technique, one can find out which of the alternative hypotheses is supported most and, from another perspective, one can assess the status of the pNMSSM in light of the experimental data used. This method has been successfully applied in various particle physics phenomenology studies [26-37] and to a greater extent in cosmology and astroparticle research; for instance, see [38-41] and their citations. For our analyses, we use PolyChord [41,42], the "next-generation" (relative to MultiNest [40]) nested sampling implementation, for making the comparisons.
The aim of this article is to determine which of the scenarios $H_0$, $H_1$ and $H_2$ is supported most by current data. Performing such comparison analyses will add new directions to previous studies concerning quasi-degenerate Higgs boson scenarios such as [9,16,43-46]. Likewise, the strength of the data in constraining the pNMSSM can be quantified. New benchmarks, and perhaps experimentally unexplored pNMSSM regions, can be extracted for future investigations in the search for BSM physics. In the sections that follow, we first describe the Bayesian model comparison technique which forms the basis of our analyses. This is followed by a description of the weak-scale parametrisation, the procedure for fitting the parameters to data, and the results of the comparisons made. The article ends with a Conclusions section.

Bayesian model selection
Using the nested sampling algorithm, the Bayesian evidence based on a given set of data can be readily computed and thereafter used for comparing alternative physics scenarios. With this algorithm, parameter estimation is a byproduct of the evidence computation. This is a unique advantage of nested sampling in contrast to traditional Monte Carlo techniques.
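As a concrete illustration of how nested sampling yields the evidence, here is a minimal toy implementation for a one-dimensional problem with a flat prior on [0, 1] and a normalised Gaussian likelihood, so that the true evidence is $\ln Z \approx 0$. This is a pedagogical sketch only, not the PolyChord algorithm; the new-point step uses naive rejection sampling, and all settings are illustrative.

```python
import math
import random

random.seed(1)

def logaddexp(a, b):
    if a == -math.inf:
        return b
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def loglike(theta):
    # normalised Gaussian likelihood centred inside a unit flat prior,
    # so the evidence Z = integral of L over [0,1] is ~1, i.e. ln Z ~ 0
    s = 0.1
    return -0.5 * ((theta - 0.5) / s) ** 2 - math.log(s * math.sqrt(2.0 * math.pi))

def nested_sampling(nlive=100, niter=600):
    live = [random.random() for _ in range(nlive)]
    logls = [loglike(t) for t in live]
    logz = -math.inf
    for i in range(1, niter + 1):
        worst = min(range(nlive), key=lambda j: logls[j])
        logl_min = logls[worst]
        # the prior volume shrinks geometrically: X_i ~ exp(-i / nlive)
        log_w = math.log(math.exp(-(i - 1) / nlive) - math.exp(-i / nlive))
        logz = logaddexp(logz, logl_min + log_w)
        # replace the worst live point by a fresh prior draw with L > L_min
        # (simple rejection sampling; PolyChord would use slice sampling here)
        while True:
            t = random.random()
            lt = loglike(t)
            if lt > logl_min:
                live[worst], logls[worst] = t, lt
                break
    # add the contribution of the remaining live points
    log_x_final = -niter / nlive
    for ll in logls:
        logz = logaddexp(logz, ll + log_x_final - math.log(nlive))
    return logz

log_z = nested_sampling()
print(log_z)  # should be close to 0 (= ln 1), up to sampling noise
```

The same run also produces the discarded points with weights, which is why posterior samples come "for free" with the evidence.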
There are various possibilities for performing Bayesian model comparison. One can compare two completely different physics models based on a common set of data; for example, in [28] various SUSY-breaking mediation mechanisms were compared. Another possibility is the comparison of alternative physics scenarios within a single model; examples of this can be found in [27,29]. In [29], the comparison was between the MSSM scenario whereby the neutralino lightest supersymmetric particle (LSP) makes up all of the observed cold dark matter relic density and the alternative for which the LSP accounts for only part, not all, of the observed relic density. Along this line of thought, the pNMSSM scenarios $H_0$, $H_1$ and $H_2$ can be compared among one another based on a common set of experimental data. In the subsections that follow we briefly describe Bayes' theorem, the Bayes factors which were used for the comparisons, and the similarities/differences between PolyChord and MultiNest.

Bayes' theorem and Bayes factors
For a given hypothesis, $H$, based on a model with a set of $N$ parameters, $\theta$, the a priori assumed values which the parameters can take are encoded in the prior probability density distribution, $p(\theta|H)$. The support a given hypothesis draws from a data set, $d$, is quantified by the probability density of observing the data set given the hypothesis, the Bayesian evidence,
$$Z \equiv p(d|H) = \int p(d|\theta, H)\, p(\theta|H)\, d\theta .$$
This can be obtained directly from Bayes' theorem,
$$p(\theta|d, H) = \frac{p(d|\theta, H)\, p(\theta|H)}{p(d|H)} .$$
Here $p(d|\theta, H)$ is the likelihood, a measure of the probability of obtaining the data set $d$ from a given set of the model parameters. To compare, say, $H_0$ and $H_1$, the Bayes factor, $K = Z_1/Z_0$, should be computed. This can be done via the posterior odds ratio,
$$\frac{p(H_1|d)}{p(H_0|d)} = \frac{Z_1}{Z_0}\, \frac{p(H_1)}{p(H_0)} .$$
For the case where the two hypotheses are a priori equally likely, $p(H_1)/p(H_0) = 1$, the logarithm of the Bayes factor equals the logarithm of the posterior odds,
$$\ln K = \ln \frac{Z_1}{Z_0} = \ln \frac{p(H_1|d)}{p(H_0|d)} .$$
$Z_1/Z_0 > 1$ indicates that the data support $H_1$ over $H_0$, and vice versa if the ratio is less than one. The Jeffreys scale shown in Table 1 calibrates the significance of the Bayes factors. Next we explain what the Bayes factors describe within the context of the pNMSSM global fits to data. From one point of view, the Bayes factor encodes information about the scenarios' posterior masses as measured by the chosen priors: it tells which scenario is more plausible based on a given set of data. To see this, let $p(\theta|d)$ be the pNMSSM posterior without restricting to any of $H_0$, $H_1$, or $H_2$.
Table 1 Jeffreys' scale [47] (see also the work in Ref. [48]) for the interpretation of the Bayes factors. The Bayes factors, as explained below, also quantify the relative posterior probability masses for the compared scenarios which are a priori equally likely.
Assuming that the scenarios represent mutually exclusive volumes, $\Omega_0$, $\Omega_1$ and $\Omega_2$ respectively, in the "full" $\theta$-space, the corresponding posterior probability masses can be computed.
For instance,
$$P(H_i|d) = \int_{\Omega_i} p(\theta|d)\, d\theta = \frac{1}{Z} \int_{\Omega_i} p(d|\theta)\, p(\theta)\, d\theta .$$
The global evidence, $Z$, cancels out when computing ratios such as
$$\frac{P(H_1|d)}{P(H_0|d)} = \frac{\int_{\Omega_1} p(d|\theta)\, p(\theta)\, d\theta}{\int_{\Omega_0} p(d|\theta)\, p(\theta)\, d\theta} .$$
So the evidence and posterior mass ratios are equivalent. As such, the priors for the scenarios $H_{0,1,2}$ are not free to be chosen arbitrarily. They are instead set by the prior distribution $p(\theta)$, with relative priors such as $p(H_0)/p(H_1)$ constrained to match the corresponding integrations of $p(\theta)$ over the domains $\Omega_{0,1}$. The prior ratios can be estimated by scanning over the pNMSSM parameters without imposing the likelihoods or scenario requirements and then counting the number of surviving points after imposing the Higgs boson mass restrictions. Thus $p(H_i)$, $i = 0, 1, 2$, can be taken to be the fraction of surviving points.
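The prior fractions $p(H_i)$ described above can be estimated by pure prior sampling and counting surviving points. A minimal sketch follows, using a hypothetical stand-in for the spectrum calculator (the real analysis would map pNMSSM parameters to Higgs masses with a dedicated tool); all numbers are illustrative.

```python
import random

random.seed(0)

def toy_higgs_masses(theta):
    # hypothetical stand-in for a spectrum calculator: maps a parameter
    # point to the two lightest ordered CP-even Higgs masses (GeV)
    m1 = 1.0 + 199.0 * theta[0]      # m_h1 uniform in [1, 200] GeV
    m2 = m1 + 300.0 * theta[1]       # m_h2 >= m_h1
    return m1, m2

def prior_fractions(n=100_000):
    counts = {"H0": 0, "H1": 0, "H2": 0}
    for _ in range(n):
        theta = [random.random(), random.random()]
        m1, m2 = toy_higgs_masses(theta)
        in1 = 122.0 <= m1 <= 128.0
        in2 = 122.0 <= m2 <= 128.0
        if in1:
            counts["H0"] += 1              # h1 near 125 GeV
        if in1 and in2:
            counts["H1"] += 1              # both h1 and h2 near 125 GeV
        if in2 and not in1:
            counts["H2"] += 1              # h2 near 125 GeV, h1 lighter
    return {k: v / n for k, v in counts.items()}

fracs = prior_fractions()
print(fracs)
```

The returned fractions play the role of $p(H_i)$ in the prior-ratio corrections discussed in the text.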
For the nested sampling implementation in PolyChord, parameter points were sampled from a flat prior distribution, $p(\theta)$, which integrates to 1 over the "full" pNMSSM $\theta$-space. This "global" prior is used for each of the three scenarios considered, and PolyChord is run once for each scenario. From the sampled parameter points, the $H_0$, $H_1$, or $H_2$ cuts on the Higgs boson masses were applied. For instance, in the PolyChord run for $H_0$ a sampled pNMSSM point was discarded if $m_{h_1} \notin [122, 128]$ GeV, while for the $H_{1,2}$ runs it was discarded if $m_{h_2} \notin [122, 128]$ GeV. As such, the evidence value returned by PolyChord is
$$\tilde{Z}_i = \int_{\Omega_i} p(d|\theta)\, p(\theta)\, d\theta ,$$
with $p(\theta)$ the global prior. This way, ratios such as $\tilde{Z}_1/\tilde{Z}_0$ represent the full posterior mass ratio as in Eq. (3). Thus the Bayes factor, $K$, can be obtained from what PolyChord returns (the $\tilde{Z}$s) as
$$K = \frac{\tilde{Z}_1}{\tilde{Z}_0}\, \frac{p(H_0)}{p(H_1)} .$$
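Putting the pieces together, a sketch of turning per-scenario evidences $\ln\tilde{Z}_i$ and prior fractions $p(H_i)$ into a Bayes factor and a verdict. The threshold values are the commonly used ones ($|\ln K| = 1, 2.5, 5$) and are an assumption here; the paper's Table 1 is the authoritative calibration, and all input numbers are made up.

```python
import math

def log_bayes_factor(logz_tilde_1, logz_tilde_0, p1, p0):
    # K = (Z~1 / Z~0) * (p(H0) / p(H1)); inputs are ln Z~_i from the runs
    # and the prior fractions p(H_i) estimated from pure prior sampling
    return (logz_tilde_1 - logz_tilde_0) + math.log(p0) - math.log(p1)

def jeffreys_verdict(lnk):
    # illustrative thresholds (|ln K| = 1, 2.5, 5); Table 1 is authoritative
    a = abs(lnk)
    if a < 1.0:
        return "Inconclusive"
    if a < 2.5:
        return "Moderate"
    if a < 5.0:
        return "Strong"
    return "Decisive"

lnk = log_bayes_factor(-100.0, -103.0, 0.02, 0.05)  # made-up numbers
print(lnk, jeffreys_verdict(lnk))
```

Note how a smaller prior fraction for the favoured scenario strengthens the Bayes factor beyond the raw evidence difference.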

PolyChord versus MultiNest
Here we briefly describe the similarities and contrasts between the relatively new PolyChord [41,42] and the MultiNest algorithm [40,49], which we have used in the past for similar analyses. Both MultiNest and PolyChord are effective Bayesian evidence calculators which also perform as excellent multi-modal posterior samplers. At their core, the nested sampling algorithm [38] is implemented. They differ in how a new set of model parameters is generated over the sampling iterations. MultiNest is based on rejection sampling or, alternatively, importance sampling. PolyChord, on the other hand, is based on the slice sampling method. We used PolyChord because of its improved scaling with dimensionality, $D$, as illustrated in [42]. For the multidimensional Gaussian problem analysed in [42], the number of likelihood calculations needed for a run to converge scales as $O(D^3)$ at worst with PolyChord, instead of the approximately exponential scaling that emerges at higher dimensions (greater than around 20) for the rejection sampling method (see Sect. 29.3 of [50]). The 26-dimensional pNMSSM considered for the analysis presented in this article is more complicated than the toy Gaussian problem.
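For illustration, here is a single slice-sampling update (stepping-out plus shrinkage, after Neal 2003) on a one-dimensional Gaussian target. PolyChord's actual scheme slice-samples along random directions of a whitened parameter space, inside the nested sampling hard likelihood constraint, so this is only a sketch of the core move; the step width and target are illustrative.

```python
import math
import random

random.seed(2)

def logpdf(x):
    # target: standard normal, up to an additive constant
    return -0.5 * x * x

def slice_sample_step(x0, w=1.0, max_steps=50):
    # one slice-sampling update: draw a slice level, step out, then shrink
    logy = logpdf(x0) + math.log(random.random())   # slice level under the pdf
    L = x0 - w * random.random()
    R = L + w
    while logpdf(L) > logy and max_steps:
        L -= w; max_steps -= 1
    while logpdf(R) > logy and max_steps:
        R += w; max_steps -= 1
    while True:                                     # shrinkage loop
        x1 = L + (R - L) * random.random()
        if logpdf(x1) >= logy:
            return x1
        if x1 < x0:
            L = x1
        else:
            R = x1

x = 0.0
samples = []
for _ in range(20000):
    x = slice_sample_step(x)
    samples.append(x)
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean ** 2
print(mean, var)  # should approximate 0 and 1 for a standard normal
```

Unlike rejection sampling, each update always produces an accepted point, which is why the cost scales so much more gently with dimension.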
There are two tuning parameters for running PolyChord, namely the number of live points maintained throughout the nested sampling run, n_live, and the length of the slice sampling chain, n_repeats, used for generating new live points. With n_live = 200 and n_repeats = 26, running PolyChord on the pNMSSM parameter space finished with 700208 likelihood calculations using 96 core-hours of computing time. This compares to 11344428 likelihood calculations for the same pNMSSM model with MultiNest tuning parameters n_live = 5000 and efr = 0.1, using 4480 core-hours. With n_live = 1000 instead of n_live = 200, the amount of time and the number of likelihood calculations needed to finish the PolyChord run increased drastically, to 6272 core-hours and 3492331 likelihood calculations, but ended up with similar results for the Bayesian evidence. Setting n_repeats = 2 × 26 instead of n_repeats = 26, PolyChord finished with 1632936 likelihood calculations and 512 core-hours. This basic comparison between PolyChord and MultiNest is summarised in Table 2. Given our experience in using MultiNest for large parameter models (of order 20 to 30 parameters), especially the difficulty of getting runs to finish for tightly constrained models or models with a high number of parameters, we decided to use PolyChord. 1
1 Note: for the comparison performed here, the aim was to obtain correct evidence values. No special attention was paid to comparing the uncertainties on the evidence coming from the codes. PolyChord's "precision criterion" was set to 10^-3 while MultiNest's "tolerance" parameter was set to 0.5. These code parameters affect, in different ways (see Sect. 5.4 of [40]), the stopping criterion for the nested sampling and the error on the evidence. For the basic comparison made here between the codes, no attempt was made to achieve similar deviations on the evidence values.

The phenomenological NMSSM
The NMSSM (for reviews see e.g. [3-5]) has phenomenological advantages over the MSSM. These include the solution of the $\mu$-problem [51]: the vacuum expectation value of an additional gauge singlet ($S$) can generate the superpotential $\mu$-term dynamically. It also has a richer Higgs sector. There are three CP-even Higgs bosons, $h_{1,2,3}$, and two CP-odd Higgs bosons, $a_{1,2}$, which are mixtures of the MSSM-like Higgs doublet fields and, respectively, the real or imaginary part of $S$. For our analyses, we consider an R-parity conserving NMSSM with minimal CP- and flavour-violating free parameters and the superpotential
$$W = W_{MSSM} + \lambda\, \hat{S}\, \hat{H}_u \cdot \hat{H}_d + \frac{\kappa}{3}\, \hat{S}^3 ,$$
where $W_{MSSM}$ is the MSSM-like superpotential without the $\mu$-term,
$$W_{MSSM} = h_u\, \hat{Q} \cdot \hat{H}_u\, \hat{\bar{U}} - h_d\, \hat{Q} \cdot \hat{H}_d\, \hat{\bar{D}} - h_e\, \hat{L} \cdot \hat{H}_d\, \hat{\bar{E}} .$$
Here the chiral superfields have their usual MSSM quantum number assignments, with Yukawa couplings $h_{u,d,e}$ and generation indices suppressed.
The corresponding soft SUSY-breaking terms consist of the gaugino mass terms and the sfermion and Higgs mass-squared terms, with the trilinear contributions given by
$$V_{soft} \supset \left( \lambda A_\lambda\, S\, H_u \cdot H_d + \frac{\kappa}{3} A_\kappa\, S^3 + \mathrm{h.c.} \right) .$$
A tilde over a superfield symbol represents its scalar component, while an asterisk, as in $\tilde{u}^*_R$, represents the scalar component of $\hat{\bar{U}}$. The $SU(2)_L$ fundamental representation indices are denoted by $a, b = 1, 2$ and the generation indices by $i, j = 1, 2, 3$; $\epsilon_{ab}$, with $\epsilon_{12} = 1$, is the totally antisymmetric tensor. In an approach similar to the pMSSM construction [22,24-26], the pNMSSM parameters are defined at the weak scale. To suppress sources of unobserved CP violation and flavour-changing neutral currents, the sfermion mass and trilinear scalar coupling parameters were chosen to be real and diagonal. For the same reason, the first and second generation sfermion mass parameters were set to be degenerate, and the gaugino mass parameters were taken to be real by neglecting CP-violating phases. This leads to the non-Higgs sector set of parameters
$$\{ M_{1,2,3},\ m_{\tilde{f}},\ A_{t,b,\tau} \} ,$$
where $M_{1,2,3}$ and $m_{\tilde{f}}$ are respectively the gaugino and the sfermion mass parameters, and $A_{t,b,\tau}$ represent the trilinear scalar couplings. Adding to these the Higgs-sector parameters and four SM nuisance parameters, namely the top and bottom quark masses $m_{t,b}$, $m_Z$ and the strong coupling constant $\alpha_s$, makes up the 26 free parameters of the pNMSSM. $M_{1,2}$ strongly affect the electroweak gaugino masses, for which a wide range of values, GeV to TeV, is possible. We let $M_1 \in [-4, 4]$ TeV and similarly for $M_2$, but with $M_2$ fixed to be positive without loss of generality (see e.g. [52]). A strong sensitivity of the pNMSSM Higgs sector to the gluino and first/second generation squark mass parameters is not anticipated. However, we choose to let them vary since limits from SUSY searches are part of the experimental data used.
As such, following the work in [24,26] we let the gluino and squark mass parameters vary within [100 GeV, 4 TeV] and the trilinear scalar couplings within [-8 TeV, 8 TeV]. $\tan\beta$ is allowed to vary between 2 and 60. With the aim of minimising fine-tuning, we subjectively choose to vary the effective $\mu$-parameter, $\mu_{eff} = \lambda v_s$, within 100 to 400 GeV, i.e. not orders of magnitude away from $m_Z$. The remaining Higgs-sector parameters were allowed to vary in the ranges summarised in Table 3. For all the parameters except the SM ones, flat prior probability density distributions were assumed. For the experimentally measured SM nuisance parameters, Gaussian distributions around the measured values were used.
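The prior setup just described (flat priors for the model parameters, Gaussian priors for the SM nuisance parameters) can be sketched as follows; the ranges and central values below are illustrative placeholders, not the exact entries of Table 3.

```python
import random

random.seed(3)

# illustrative ranges/values (placeholders, not the paper's exact Table 3 entries)
FLAT_RANGES = {
    "M1": (-4000.0, 4000.0),     # GeV
    "M2": (0.0, 4000.0),         # positive without loss of generality
    "mu_eff": (100.0, 400.0),    # GeV, kept near the EWSB scale
    "tan_beta": (2.0, 60.0),
}
GAUSSIAN_NUISANCE = {
    "m_top": (172.7, 0.3),       # (central value, uncertainty), GeV
    "alpha_s": (0.1181, 0.0011),
}

def draw_point():
    # flat priors for model parameters, Gaussian priors for SM nuisances
    theta = {k: random.uniform(a, b) for k, (a, b) in FLAT_RANGES.items()}
    theta.update({k: random.gauss(m, s) for k, (m, s) in GAUSSIAN_NUISANCE.items()})
    return theta

point = draw_point()
print(point)
```

In the real fits the sampler works in a unit hypercube and maps draws through these prior transforms.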
Checking the prior-dependence of results is useful for assessing, in an unambiguous manner, the strength of the data in constraining the model. The priors can be chosen to be flat or logarithmic, with the latter favouring the lower regions of the parameter ranges. There are two bottlenecks in our attempts to sample the pNMSSM parameters with logarithmic priors. On one hand, the absence of signatures for SUSY at the LHC pushes sparticle mass lower bounds towards or well into the multi-TeV region. Sampling the phenomenologically viable parameters according to a logarithmic prior is therefore difficult and computationally expensive. For the attempted log-prior fits, only parameters that do not cross zero were sampled logarithmically; those that can be zero were sampled uniformly. On the other hand, for the nested sampling algorithm in PolyChord to get started, a 200-point sample of the 26-parameter pNMSSM is required. Using logarithmic priors, it was not possible to generate the 200 model points within the maximum of 3072 CPU core-hours per run at our disposal. Thus we restrict our analyses to flat priors only. The conclusions presented in this article are valid only within this context.
Another issue concerning our pNMSSM parametrisation is related to "naturalness", or the avoidance of excessive fine-tuning associated with obtaining the correct weak scale ($m_Z$). With the naturalness prior parametrisation [53], fine-tuning can be avoided by directly scanning the parameters $m_Z$ and $\tan\beta$ rather than the soft Higgs mass parameters. Moreover, it was shown in [37] that the naturalness prior parametrisation can significantly affect the Bayesian evidence values. For the analyses presented in this article, such a parametrisation was not considered. Rather, a Gaussian prior for $m_Z$ centred on the measured value was used; doing this injects information about what the weak scale is into the prior. In addition, $\mu_{eff}$ was chosen to be near the electroweak symmetry breaking scale, between 100 and 400 GeV, since one of our aims is to show that there are still vast regions of parameter space with low mass gauginos and BSM Higgs bosons which are not ruled out by current data. It is accepted that fine-tuning penalisation manifests implicitly and automatically within Bayesian global fits, as presented in [33,54,55]. Likewise, fine-tuning limits could be imposed during fits by using the various fine-tuning measures [53,56-61]; using any of these is not within the scope of our present analyses, given the deliberate targeting of regions with low mass electroweak gauginos. Coming back to the Higgs sector, the details concerning the NMSSM Higgs mass matrices and couplings can be found in the literature, for example [4,62-66]. The $Z_3$-invariant NMSSM Higgs potential can be obtained from the SUSY gauge interactions, the soft-breaking terms and the F-terms as
$$\begin{aligned} V &= |\lambda S|^2 \left( |H_u|^2 + |H_d|^2 \right) + \left| \lambda\, H_u \cdot H_d + \kappa S^2 \right|^2 \\ &\quad + \frac{g_1^2 + g_2^2}{8} \left( |H_u|^2 - |H_d|^2 \right)^2 + \frac{g_2^2}{2} \left| H_u^\dagger H_d \right|^2 \\ &\quad + m_{H_u}^2 |H_u|^2 + m_{H_d}^2 |H_d|^2 + m_S^2 |S|^2 \\ &\quad + \left( \lambda A_\lambda\, S\, H_u \cdot H_d + \frac{\kappa}{3} A_\kappa\, S^3 + \mathrm{h.c.} \right) . \end{aligned}$$
Here $g_1$ and $g_2$ denote the $U(1)_Y$ and $SU(2)_L$ gauge couplings respectively. Expanding the neutral fields about their vacuum expectation values and diagonalising the resulting mass-squared matrix leads to the CP-even Higgs boson mixing matrix $S_{ij}$, with
$$h_i^{mass} = S_{ij}\, h_j^{weak} .$$
Here the physical Higgs fields carry indices $R$ for the CP-even and $I$ for the CP-odd states.
$h_i^{weak} = (H_{1R}, H_{2R}, S_R)$ represents the interaction eigenstates and $h_i^{mass}$ the mass-ordered eigenstates. The mixing of the $SU(2)$ doublets with the singlet state affects the phenomenology of the Higgs bosons. For instance, the reduced couplings $\xi_i$ (see, e.g., [66]) of the three CP-even mass eigenstates $h_i$ to the electroweak gauge bosons can be very small in some regions of parameter space. The sum rule $\sum_{i=1}^{3} \xi_i^2 = 1$ is always satisfied. The reduced couplings are inputs to the Lilith [67] program for comparing the pNMSSM signal strengths to the experimentally measured values.
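The sum rule follows purely from the orthogonality of the CP-even mixing matrix together with the fact that the singlet component does not couple to W/Z. A quick numerical check, using a random orthogonal 3×3 mixing matrix and writing the gauge-coupled doublet direction as $(\cos\beta, \sin\beta, 0)$ (the precise basis convention is an assumption here):

```python
import math
import random

random.seed(4)

def gram_schmidt(rows):
    # orthonormalise three random vectors -> a random orthogonal mixing matrix S
    basis = []
    for v in rows:
        for b in basis:
            d = sum(x * y for x, y in zip(v, b))
            v = [x - d * y for x, y in zip(v, b)]
        norm = math.sqrt(sum(x * x for x in v))
        basis.append([x / norm for x in v])
    return basis

S = gram_schmidt([[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(3)])
beta = 0.4                                  # arbitrary mixing angle
a = [math.cos(beta), math.sin(beta), 0.0]   # the singlet S_R does not couple to W/Z
xi = [sum(S[i][j] * a[j] for j in range(3)) for i in range(3)]
total = sum(x * x for x in xi)
print(total)  # -> 1.0 up to rounding
```

Since $\sum_i \xi_i^2 = |S a|^2 = |a|^2 = 1$ for any orthogonal $S$, a suppressed coupling for one state must be compensated by the others.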
In modelling the likelihood, the set of constraints used during the global fits is divided into three groups: • Constraints on the Higgs boson mass $m_h$, the neutralino cold dark matter (CDM) relic density $\Omega_{CDM} h^2$, the anomalous magnetic moment of the muon $\delta a_\mu$ and the B-physics related limits summarised in the upper part of Table 4 form the first part of the data set, $d$. The likelihood is computed from the pNMSSM predictions, $O_i$, corresponding to the constraints $i$, with experimental central values $\mu_i$ and uncertainties $\sigma_i$, as
$$\ln \mathcal{L} = -\frac{1}{2} \sum_i \frac{(O_i - \mu_i)^2}{\sigma_i^2} ,$$
where the index $i$ runs over the relevant experimental constraints in Table 4.
• The second group consists of the Higgs boson signal strength measurements. Here $i$ runs over the various combinations of Higgs boson production and decay modes, and $\hat{\mu}_i \pm \Delta\hat{\mu}_i$ represent the experimentally determined signal strengths. Theoretically, the signal strength associated with a model point can be computed, for a given production mode $X$ and decay mode $Y$, as
$$\mu_{X,Y} = \epsilon_{X,Y}\, \frac{\sigma(X)\, BR(Y)}{\sigma(X)^{SM}\, BR(Y)^{SM}} ,$$
where $\epsilon_{X,Y}$ are the experimental efficiencies, $X \in \{ggH, VH, VBF, ttH\}$ and $Y \in \{\gamma\gamma, VV^{(*)}, b\bar{b}, \tau\tau\}$. For a proton-proton collider, the elements in $X$ represent gluon-gluon fusion (ggH), associated production with a gauge boson (VH), vector boson fusion (VBF) and associated production with top quarks (ttH). The elements in $Y$ represent the Higgs decay modes into diphotons ($\gamma\gamma$), W or Z bosons ($VV$), bottom quarks ($b\bar{b}$) and tau leptons ($\tau\tau$). Computing $\mu$ this way could be impractical, since for a meaningful theory versus experiment comparison the non-SM predictions in the numerator should be computed using the same prescriptions (order in perturbation theory, implementation of parton distribution functions, etc.) as the denominator. The second approach, whereby the inputs to Lilith are the reduced couplings, does not suffer from this problem. BSM physics effects can be parametrised in terms of the reduced couplings: the cross section for each production process $X$ (and the partial decay width for each mode $Y$) is scaled [158] by a factor of $C_X^2$ (respectively $C_Y^2$), such that
$$\mu_{X,Y} = \frac{C_X^2\, C_Y^2}{\Gamma_{tot}/\Gamma_{tot}^{SM}} .$$
The reduced couplings computed with NMSSMTools, together with the invisible and undetectable decay branching ratios, can then be passed to Lilith for computing the likelihood based on the tabulated likelihood values as a function of $\mu$ within the Lilith database of experimental results. This procedure is valid only for Higgs bosons with mass between 123 and 128 GeV. For the multi-Higgs case with masses within this range, such as for $H_1$, the combined [159] signal strengths were used. • The third group of constraints in $d$ is the CDM direct detection limits.
These are from searches for the elastic scattering of CDM off nucleons. The recoil energy deposited on nuclei in a detector can be measured and, in the absence of a discovery, upper limits on the scattering cross section can be determined. The cross sections can be either spin-independent (SI) or spin-dependent (SD), depending on whether the LSP-nucleon coupling is via a scalar or an axial-vector interaction. For the fits with the direct detection limits imposed, only parameter points that pass the SI [120-122] and SD [123-126] limits were accepted.
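The Gaussian likelihood of the first group can be sketched directly; the central values and uncertainties below are illustrative stand-ins, not the exact Table 4 entries.

```python
def log_likelihood(predictions, data):
    # ln L = -1/2 * sum_i (O_i - mu_i)^2 / sigma_i^2  (constant terms dropped)
    lnl = 0.0
    for name, (mu, sigma) in data.items():
        o = predictions[name]
        lnl += -0.5 * ((o - mu) / sigma) ** 2
    return lnl

# illustrative central values and uncertainties (not the exact Table 4 entries)
data = {"omega_cdm_h2": (0.12, 0.012), "m_h": (125.1, 3.0)}
pred = {"omega_cdm_h2": 0.118, "m_h": 124.0}
lnl = log_likelihood(pred, data)
print(lnl)  # -> about -0.0811
```

Because only differences of $\ln\mathcal{L}$ matter for the sampler, the dropped normalisation constants are irrelevant to the fits.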
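The reduced-coupling rescaling of the signal strength in the second group can be sketched as below, under the assumption that every partial width scales with the square of its reduced coupling; the SM branching ratios used are approximate and purely illustrative.

```python
def mu_signal_strength(CX, CY, C_width, br_sm):
    # width_ratio = Gamma_tot / Gamma_tot^SM, assuming each partial width
    # scales with the square of its reduced coupling (invisible modes ignored)
    width_ratio = sum(C_width[y] ** 2 * br for y, br in br_sm.items())
    return CX ** 2 * CY ** 2 / width_ratio

# approximate SM branching ratios for a ~125 GeV Higgs (illustrative numbers)
br_sm = {"bb": 0.58, "WW": 0.21, "gg": 0.08, "tautau": 0.06, "cc": 0.03,
         "ZZ": 0.026, "gamgam": 0.0023, "other": 0.0117}
C_sm = {y: 1.0 for y in br_sm}   # SM limit: all reduced couplings equal 1
mu_sm = mu_signal_strength(1.0, 1.0, C_sm, br_sm)
print(mu_sm)  # -> 1.0 in the SM limit
```

Suppressing, say, the $b\bar{b}$ coupling shrinks the total width and therefore enhances every other signal strength above one, which is the kind of correlation the Lilith likelihood picks up.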
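The third group enters the fits as a hard cut: a point is kept only if its predicted cross section lies below the published upper limit at its LSP mass. A sketch with log-log interpolation over a hypothetical limit curve (the numbers are NOT real experimental limits):

```python
import bisect
import math

def below_limit(m_lsp, sigma, limit_curve):
    # limit_curve: sorted (mass [GeV], upper limit on sigma) points;
    # interpolate the published limit in log-log space
    masses = [m for m, _ in limit_curve]
    i = bisect.bisect_left(masses, m_lsp)
    if i == 0 or i == len(masses):
        return True  # outside the tabulated range: no constraint applied here
    (m0, s0), (m1, s1) = limit_curve[i - 1], limit_curve[i]
    t = (math.log(m_lsp) - math.log(m0)) / (math.log(m1) - math.log(m0))
    lim = math.exp(math.log(s0) + t * (math.log(s1) - math.log(s0)))
    return sigma < lim

# hypothetical, illustrative limit curve -- NOT real experimental numbers
curve = [(10.0, 1e-44), (100.0, 1e-46), (1000.0, 1e-45)]
print(below_limit(50.0, 3e-46, curve))  # -> True (below the interpolated limit)
```

In the actual fits both the SI and SD curves are applied, and the cut can be relaxed when the relic density is under-produced.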

Bayesian evidences
The results of the Bayesian comparisons between the hypotheses considered are shown in Table 5.
Table 5 Here Z represents the evidence returned by PolyChord. There are three sets of results demarcated by the double horizontal lines. The first set is for the pNMSSM global fits to the observables shown in Table 4 but without the CDM direct detection (DD), Br(B_s → μ⁺μ⁻), Br(B_u → τν) and δa_μ limits. For the second set, the limits from CDM DD searches were added. The third set is done with all the observables included.
The inclusion of all the limits significantly changed the discriminating power of the data, with $p(H_2) = 3.5459 \times 10^{-3}$. As expected, the stronger the data set, the better the discrimination between the scenarios becomes. But this depends on whether all the elements in the set pull the posterior mass towards a common region: adding two sets of data with tendencies to pull the posterior in opposite directions will dilute the discrimination strength of the combined set. This characteristic behaviour can be seen in going from the second to the third set of results shown in Table 5. The Br(B_s → μ⁺μ⁻), Br(B_u → τν) and δa_μ set tends to prefer lighter SUSY states, and its inclusion diluted the discrimination power of the combined data set. Without the Br(B_s → μ⁺μ⁻), Br(B_u → τν) and δa_μ limits, the conclusions drawn are Moderate or Strong; these change to Inconclusive or Moderate when that set of data is included, as can be seen in the third set of results in Table 5. For the third set of results, we discuss the comparisons between the scenarios as follows.
• The data show an inconclusive result for the comparison of H_0 against H_1. This is an indication that the CDM DD limits on one hand, and the Br(B_s → μ⁺μ⁻), Br(B_u → τν) and δa_μ limits on the other, are sensitive to these scenarios but in opposite directions. This can be seen by noting that the support for H_1 against H_0 changed from Strong to Moderate and then to Inconclusive in going from the first to the second and then the third set of results shown in Table 5. • Similarly, the data demonstrate Moderate evidence in support of H_2 against H_1. For the H_2 versus H_1 comparison, the evidence changes from Moderate to Strong upon the inclusion of the CDM direct detection limits. This can be understood as being due to the presence of a relatively much lighter h_1 in the H_2 scenario (see Fig. 1, first-row left plot). A lighter h_1 leads to bigger LSP-nucleon cross sections, which are thus more likely to be ruled out by the direct detection limits. • The data show Moderate support for H_2 relative to H_0.
Over the first two sets of results, the support for H_2 against H_0 is Strong. The inclusion of the Br(B_s → μ⁺μ⁻), Br(B_u → τν) and δa_μ limits diluted this conclusion to a Moderate one.
Altogether, there is Moderate support for H_2 against H_0, and H_2 may be considered as ranked first, with H_0 and H_1 jointly ranked second. The hypothesis with a single Higgs boson around 125 GeV is not decisively supported. The data have a tendency to prefer the H_2 scenario, which permits the possibility of an additional scalar much lighter than 125 GeV.

Posterior distributions related to CP-even Higgs bosons
To complement the Bayes factors reported in Table 5, in this subsection we describe the posterior distributions of the masses and couplings of the two lightest Higgs bosons. The plots in Fig. 1 show respectively the one- and two-dimensional mass distributions within each of the scenarios considered.
For H_2, the m_h1 distribution peaks at a value much less than 125 GeV. This is because of the interplay between the Large Electron-Positron collider (LEP) and LHC constraints. With respect to the position of the peak, the lower mass region is suppressed by LEP constraints such as the upper limits on the cross section of e⁺e⁻ → h_1 Z [161,162]. The heavier mass region beyond the peak is ruled out by the upper limits on the reduced production cross sections at the LHC, such as for the pp → h_1 → a_1 a_1 [163], ggF → h_1 → γγ [164], and ggF → h_1 → ττ [165,166] processes. The nature of the two lightest CP-even Higgs bosons in each of the three scenarios can be determined by considering their reduced couplings to fermions (up-type, u, and down-type, d) and gauge bosons (W, Z, and γ). As shown in Fig. 2, for H_0 (dashed/green line), h_1 is completely SM-like. For H_1 (dash-dotted/red line), h_2 is mostly SM-like when h_1 is not, and vice versa; this is due to the so-called "sharing of couplings" effect. Within H_2 (solid/black line), for which h_2 is identified as h_125, h_1 is almost completely non-SM-like. These features stem from the combined effects of the various limits imposed on the pNMSSM parameter space. The most pronounced of these for h_1 are the LHC limits on the signal strengths, which are directly proportional to the reduced couplings and hence to the elements of the Higgs mixing matrix.

Posterior distributions related to neutralino CDM candidate
The LSP is identified as a candidate for explaining the observed CDM relic density. Here we present the posterior distributions for the neutralino composition and the direct detection cross sections. The nature of the LSP is one of the most important factors affecting its relic density and scattering cross sections. In the basis $(\tilde{B}, \tilde{W}^3, \tilde{H}_d^0, \tilde{H}_u^0, \tilde{S})$, the lightest neutralino can be decomposed as
$$\tilde{\chi}_1^0 = N_{11} \tilde{B} + N_{12} \tilde{W}^3 + N_{13} \tilde{H}_d^0 + N_{14} \tilde{H}_u^0 + N_{15} \tilde{S} ,$$
where $N$ is the neutralino mixing matrix. The neutralino LSP is considered to be gaugino-, Higgsino- or singlino-like when $p_1 = N_{11}^2 + N_{12}^2$, $p_2 = N_{13}^2 + N_{14}^2$, or $p_3 = N_{15}^2$ dominates, respectively. The posterior distributions for these are shown in Fig. 3. For H_0 (dashed/green line), the LSP is a mixed gaugino-Higgsino state with approximately zero singlino content. The case is different for H_2, for which the LSP is mixed Higgsino-singlino, dominantly singlino, with zero gaugino content. In the case of H_1, the LSP is dominantly Higgsino.
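The composition bookkeeping above can be sketched as follows; the mixing-matrix row used is a made-up, approximately unit-normalised example.

```python
def classify_lsp(N_row):
    # N_row = (N11, ..., N15): LSP row of the neutralino mixing matrix
    # in the (bino, wino, Higgsino_d, Higgsino_u, singlino) basis
    fractions = {
        "gaugino":  N_row[0] ** 2 + N_row[1] ** 2,
        "higgsino": N_row[2] ** 2 + N_row[3] ** 2,
        "singlino": N_row[4] ** 2,
    }
    return max(fractions, key=fractions.get), fractions

# made-up, approximately unit-normalised mixing row (illustrative only)
label, fractions = classify_lsp((0.1, 0.0, 0.3, 0.2, 0.9274))
print(label)  # -> singlino
```

Applied point by point over the posterior sample, this classification produces exactly the composition histograms of Fig. 3.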
The neutralino LSP composition determines the leading annihilation and co-annihilation processes (see, e.g., Ref. [4]) responsible for bringing the relic density to around the experimental value, $\Omega_{CDM} h^2 = 0.12$ [97]. It also determines the processes that can be involved in the direct detection of the dark matter candidate. For the latter, the dominant processes are t-channel Z or Higgs boson exchange for spin-dependent or spin-independent interactions, respectively. For instance, a highly singlino-like LSP leads to a small spin-dependent cross section. The application of the dark matter direct detection limits therefore leads to a posterior distribution with a dominantly singlino LSP, as is the case within H_2.
In Fig. 4 (first row) we show the posterior distributions of the spin-independent and spin-dependent LSP-proton scattering cross sections compatible with all the considered collider, astrophysical (including the CDM direct detection bounds reported in [120-126]) and flavour physics constraints. The sudden suppressions in the direct detection cross sections occur at points where there are cancellations among the neutralino-neutralino-Higgs interaction terms [167].
The second row of plots in Fig. 4 shows the DM direct detection limits used for the global fits. Regions above the contour lines are excluded at 95% C.L. The possible sensitivities of the experiments' future upgrades (XENONnT and PICO-500) are also shown. These should better probe the pNMSSM parameter space, although the reach depends very much on the neutralino CDM relic density: under-production of the relic density makes the direct detection limits less constraining. For instance, the regions above the blue dotted line in Fig. 4 (first-row left) would otherwise have been excluded; they are not because the relic densities at those points are much less than the experimentally measured central value of around 0.1, as shown in Fig. 4 (third/last plot).

Conclusions
The nested sampling technique [38] implemented in PolyChord [41,42] has been applied to compute the Bayesian evidence within a 26-parameter pNMSSM. The evidence values are based on limits from colliders, astrophysical bounds on the dark matter relic density and direct-detection cross sections, and low-energy observables such as the muon anomalous magnetic moment and flavour-physics observables. These were used to compare three pNMSSM hypotheses:
• $H_0$: the scenario in which the observed scalar around 125 GeV is identified as the lightest CP-even Higgs boson, $h_1$. $m_{h_1}$ was allowed to vary according to a Gaussian distribution with a 3 GeV standard deviation.
• $H_1$: the same as $H_0$, but with the additional restriction that $m_{h_2}$ lie within 122-128 GeV.
• $H_2$: the scenario in which the observed scalar around 125 GeV is identified as the second-lightest Higgs boson, $h_2$.
Using the Jeffreys scale for interpreting the evidence values (see Table 1), the analyses indicate that $H_2$ ranks first among the three hypotheses, with moderate support, while $H_0$ and $H_1$ are jointly ranked second. That is, the current data used for the Bayesian comparisons favours the hypothesis that admits an additional scalar much lighter than 125 GeV. The lightest scalar within $H_2$ turns out to have a mass distribution centred below the W-boson mass, as shown in Fig. 1. Due to the "sharing of couplings" effect, $h_2$ is SM-like while $h_1$ is dominantly singlet-like. From the posterior distributions presented, pNMSSM benchmark points could be constructed for further analyses of non-SM-like Higgs bosons. For instance, consider the two-dimensional posterior distribution in the $(\lambda, \kappa)$ plane shown in Fig. 5. For the $H_2$ hypothesis, the plane is well constrained and approximately reduces to a line. The model points along this line could serve as excellent benchmarks for testing non-SM Higgs scenarios.
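As a rough illustration of how log-evidence differences map onto such a scale, the helper below uses commonly quoted thresholds in ln-units (1, 2.5, 5); these are assumptions for the sketch, and the bins in the paper's Table 1 may differ.

```python
def jeffreys_interpretation(delta_ln_z):
    """Interpret Delta ln Z = ln(Z1/Z0) on a Jeffreys-type scale.
    Thresholds (1, 2.5, 5 in ln-units) are illustrative assumptions."""
    a = abs(delta_ln_z)
    if a < 1.0:
        strength = "inconclusive"
    elif a < 2.5:
        strength = "moderate"
    elif a < 5.0:
        strength = "strong"
    else:
        strength = "decisive"
    favoured = "H1" if delta_ln_z > 0 else "H0"
    return strength, favoured

# A log-evidence difference of 1.8 in favour of H1 lands in the
# "moderate" band, matching the level of support reported in the text.
print(jeffreys_interpretation(1.8))
```

Note that "moderate" support, while a genuine preference, is well short of the "decisive" band, which is why the conclusions call for more stringent constraints.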
Other possible directions for further investigation are as follows. In this article the composition of the LSP was not fixed; the only requirement was that the relic density and the elastic scattering cross section with nucleons lie within the experimentally allowed ranges. One can go beyond this by demanding, a priori, a particular LSP composition: that is, in addition to specifying the allowed masses of the light CP-even Higgs bosons, one could require that only a specific LSP composition be permitted during the pNMSSM parameter-space exploration. Next, there are experimental measurements that could probe the pseudo-degenerate Higgs scenario more sensitively, such as a precise determination of the Higgs boson's total decay width. Updating the analyses presented here to include these ideas could shed more light on the pseudo-degenerate Higgs scenario.
There are caveats to our analyses. One concerns the uncertainty of the Higgs-mass prediction: here we used the traditional, and possibly too optimistic, uncertainty of 3 GeV. A more careful and systematic treatment of the uncertainties (see, e.g., [168,169]) could significantly impact the Bayesian evidence values. The same holds should naturalness priors be used: in [37] it was shown that imposing naturalness requirements significantly affects the evidence. Correlations among the various observables were also not included; whenever available, their inclusion could alter the pNMSSM posteriors. For instance, measurements of the Higgs boson mass and couplings may come from a single experiment and are therefore likely correlated. Finally, the inclusion of SUSY limits, such as those in SModelS, during the fits, together with the use of logarithmic priors, should lead to more robust conclusions about the pNMSSM.
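The sensitivity of the fit to the assumed theory uncertainty can be illustrated with a Gaussian Higgs-mass log-likelihood in which the experimental and theory uncertainties are added in quadrature. The central value and experimental error below are illustrative, not the exact inputs of the fit.

```python
import math

MH_OBS = 125.09    # measured Higgs mass in GeV (illustrative central value)
SIGMA_EXP = 0.24   # experimental uncertainty in GeV (illustrative)

def ln_like_mh(mh_pred, sigma_th=3.0):
    """Gaussian log-likelihood for a predicted Higgs mass, with the
    experimental and theory uncertainties added in quadrature.
    Shrinking sigma_th sharpens the constraint, which is how a more
    careful uncertainty treatment can shift the evidence values."""
    s2 = SIGMA_EXP**2 + sigma_th**2
    return -0.5 * (mh_pred - MH_OBS)**2 / s2 - 0.5 * math.log(2 * math.pi * s2)

# A prediction ~2 GeV off is only mildly penalised with sigma_th = 3 GeV,
# but much more strongly with a 1 GeV theory uncertainty.
print(ln_like_mh(127.0, 3.0), ln_like_mh(127.0, 1.0))
```

Since the evidence is an integral of the likelihood over the prior, tightening $\sigma_{\text{th}}$ shrinks the parameter-space volume compatible with the data, and can do so differently for the three hypotheses.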
Acknowledgements Thanks to W. Handley for support in using PolyChord; U. Ellwanger, C. Hugonie, and J. Bernon for NMSSMTools related issues; L. Aparicio, M.E. Cabrera and F. Quevedo for discussions about the NMSSM; and to A. Gatti for reading and suggesting improvements to the manuscript. With support from F. Quevedo, this work was performed using ICTP's ARGO and the University of Cambridge's Darwin HPC facilities. The latter is provided by Dell Inc. using Strategic Research Infrastructure Funding from the Higher Education Funding Council for England and funding from the Science and Technology Facilities Council.

Data Availability Statement
This manuscript has no associated data or the data will not be deposited. [Authors' comment: The datasets generated and/or analysed during the current study are not publicly available due to their size but are available from the corresponding author on request.]
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecomm ons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Funded by SCOAP 3 .