A novel scenario in the semi-constrained NMSSM

In this work, we develop a novel efficient scan method combining the Heuristically Search (HS) and the Generative Adversarial Network (GAN), where the HS shifts marginal samples to perfect samples, and the GAN generates a huge number of recommended samples from noise in a short time. With this efficient method, we find a new scenario in the semi-constrained Next-to-Minimal Supersymmetric Standard Model (scNMSSM), or NMSSM with non-universal Higgs masses. In this scenario: (i) both the muon g-2 and the right relic density can be satisfied, along with the high mass bound on the gluino, etc.; as far as we know, this had not been realized in the scNMSSM before this work. (ii) With the right relic density, the lightest neutralinos are singlino-dominated and can be as light as 0-12 GeV. (iii) The future direct detections XENONnT and LUX-ZEPLIN (LZ-7 2T) can place strong constraints on this scenario. (iv) The current indirect constraints on the Higgs invisible decay h_2 → χ̃_1^0 χ̃_1^0 are weak, but direct detection of the Higgs invisible decay at the future HL-LHC may cover half of the samples, and that at the CEPC may cover most. (v) The branching ratio of the Higgs exotic decay h_2 → h_1 h_1, a_1 a_1 can be over 20 percent, while its contribution (h_2 → 4χ̃_1^0) to the invisible decay is very small.


Introduction
The Higgs boson was discovered in 2012 [1,2], and its production rates in most channels coincide with the Standard Model (SM) predictions within uncertainties [3][4][5]. However, there is still room for physics beyond the SM. For example, the current exclusion limits on the branching ratio of the Higgs boson invisible decay are only 26% by ATLAS [6] and 19% by CMS [7], using all Run I data and about 36 fb^-1 of Run II data.
Supersymmetry is a popular theory beyond the SM, which introduces a new internal symmetry between fermions and bosons. Thus the large hierarchy problem can be solved, the gauge couplings can be unified, and dark matter (DM) candidates can be provided, etc. In the Minimal Supersymmetric Standard Model (MSSM) with 7 free parameters at the electroweak scale, an SM-like 125 GeV Higgs can be accommodated, but at the cost of large fine-tuning, and the branching ratio of the Higgs boson invisible decay can be about 10% at most [8][9][10]. The Next-to-Minimal Supersymmetric Standard Model (NMSSM) with Z_3 symmetry extends the MSSM by a complex singlet superfield Ŝ, but introduces four more parameters. In the simple and elegant fully-constrained NMSSM (cNMSSM), all Higgs and sfermion masses are assumed to be unified at the Grand Unification (GUT) scale, so that only four parameters at the GUT scale are left free [11][12][13][14][15][16][17][18][19]. These GUT-scale parameters run according to the Renormalization Group Equations (RGEs), generating the NMSSM spectrum at the low energy scale. However, it was found that, when all constraints including muon g-2 are considered, the SM-like Higgs mass cannot reach 125 GeV in the cNMSSM 1 [11,12], as in the CMSSM, NUHM1 and NUHM2 [21][22][23][24].
JHEP06(2020)078

where the hats denote superfields, y_u,d,e stand for the corresponding Yukawa couplings, and λ, κ are dimensionless coupling constants. When the singlet superfield Ŝ gets a vacuum expectation value (VEV), ⟨S⟩ = v_s, an effective µ-term is generated dynamically from the term λŜĤ_u · Ĥ_d, with

    µ_eff = λ v_s .    (2.2)

For convenience, in the following we refer to µ_eff as µ. The VEVs of the two doublet Higgs superfields Ĥ_u and Ĥ_d are v_u and v_d respectively, where v_u^2 + v_d^2 = v^2 = (174 GeV)^2. The soft SUSY breaking terms in the NMSSM differ from those in the MSSM only in a few terms, where S, H_u and H_d are the scalar components of the corresponding superfields, m_S^2 is the soft SUSY breaking mass for the singlet field S, and the trilinear coupling constants A_λ and A_κ have mass dimension. Unlike in the cNMSSM or CMSSM, in the scNMSSM the Higgs sector is assumed to be non-universal at the GUT scale: the Higgs soft masses m_{H_u}^2, m_{H_d}^2 and m_S^2 are allowed to differ from M_0^2 + µ^2, and the trilinear couplings A_λ, A_κ can differ from A_0. Hence, in the scNMSSM, the complete set of nine free parameters is usually chosen at the GUT scale, and the parameters at the low energy scale are obtained by RGE running from these GUT-scale parameters.

The Higgs and electroweakinos sector of the scNMSSM
When the electroweak symmetry is broken, the scalar components of the superfields Ĥ_u, Ĥ_d and Ŝ can be written as
In the basis (S_1, S_2, S_3), the CP-even Higgs boson mass matrix M_S^2 is given by [45,46], where M_A is the mass scale of the new doublet, M_S is the geometric average of the two stop masses, and A_t is the trilinear parameter associated with the top-quark Yukawa coupling y_t = m_t/v. To obtain the SM-like Higgs at about 125 GeV with tan β ≫ 1 and λ ≪ 1, the loop correction ∆M_{S,22}^2 needs to be about (86 GeV)^2, which requires heavy stops (M_S ∼ 10 TeV) or a large stop mixing parameter A_t.

The Heuristically Search (HS)
Usually, we divide the samples into two categories according to whether or not they pass all constraints. A sample that violates several constraints may not be good enough, but there is a chance that we can lead it to become a good sample. In our case, we first leave aside the dark matter and muon g-2 constraints, imposing only the other constraints in NMSSMTools. A sample that passes these other constraints gets a score evaluating how much it violates the dark matter and muon g-2 constraints, and we call it a 'marginal sample'.
In table 1, we classify the samples into three types: bad, marginal and perfect samples. Marginal and perfect samples get a score evaluating how much they violate the constraints, and we try to shift the marginal samples to satisfy the dark matter and muon g-2 constraints, turning them into perfect samples. The score function is given as: when the score is large, the marginal sample violates the experiments strongly; when the score is zero, the marginal sample has become a perfect sample and satisfies all constraints very well, including the dark matter and muon g-2 constraints.
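The score function itself is not reproduced here; purely as an illustration of the idea (a hypothetical form, not the paper's actual eq. (2.26)), the sketch below scores a sample by how far its relic density and δa_µ fall outside the windows of table 2, so that a perfect sample scores exactly zero.

```python
# Bounds taken from table 2: upper bound on the relic density,
# and the allowed window for the muon g-2 contribution delta a_mu.
OMEGA_H2_MAX = 0.131
DAMU_MIN, DAMU_MAX = 8.8e-10, 46e-10

def violation(value, lower=None, upper=None):
    # Normalized distance outside the allowed window (0 if inside).
    if lower is not None and value < lower:
        return (lower - value) / lower
    if upper is not None and value > upper:
        return (value - upper) / upper
    return 0.0

def score(omega_h2, delta_amu):
    # Zero for a perfect sample; grows with the size of the violations.
    # (Hypothetical combination -- the paper's exact form may differ.)
    return (violation(omega_h2, upper=OMEGA_H2_MAX)
            + violation(delta_amu, lower=DAMU_MIN, upper=DAMU_MAX))
```

A sample inside both windows gets score 0 and counts as perfect; any excess over a bound raises the score in proportion to the violation.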
In algorithm 1, we give the Heuristically Search algorithm, which shifts a marginal sample to a perfect sample satisfying all constraints. Starting with a marginal sample X, we search around it and try to find another marginal sample with a smaller score. We then repeat the process until we meet a perfect sample whose score is zero, or the search fails.
The search can succeed or fail. In our case, the Heuristically Search typically leads about 80% (sometimes over 94%) of marginal samples to perfect samples. Meanwhile, to avoid the program being trapped in a local minimum, we give it a chance to give up: during the search, if the step count exceeds the maximum number of steps N_max (we set it to 20), or the number of tries in one step exceeds the maximum number T_max (we set it to 50), we stop the program and the search fails.
To get a new marginal sample X′ around X, we treat each component x_i (i = 1 . . . 9) independently. The simplest way is to choose samples around x_i within a radius r_i with a uniform distribution. To improve the efficiency, a Gaussian distribution is adopted instead, since its tails give some chance to search samples far away and jump out of local minima. The Gaussian distribution function of x_i is given as: where r_i (we set it to 1/50) is an important parameter that determines the search efficiency. In fact, r_i can change with the score: when the score is nearly zero, a perfect sample is nearby, so r_i can shrink, and vice versa.
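The search loop of algorithm 1 can be sketched as follows; the parameter point is abstracted into a list of normalized coordinates, and `score_fn` stands in for the NMSSMTools-based score f(X). Only N_max = 20, T_max = 50 and r_i = 1/50 come from the text; the function names and the toy usage are illustrative.

```python
import random

N_MAX, T_MAX = 20, 50   # maximum steps / maximum tries per step (from the text)
R = 1.0 / 50.0          # Gaussian search radius r_i (from the text)

def heuristic_search(x, score_fn, n_max=N_MAX, t_max=T_MAX, r=R):
    # Shift a marginal sample toward a perfect one (score == 0).
    # x: list of normalized parameter coordinates;
    # score_fn: returns 0 for a perfect sample, > 0 for a marginal one.
    score = score_fn(x)
    for _step in range(n_max):
        if score == 0:
            return x, True
        for _try in range(t_max):
            # Gaussian proposal around each component x_i with width r;
            # the tails give a chance to jump out of local minima.
            x_new = [random.gauss(xi, r) for xi in x]
            s_new = score_fn(x_new)
            if s_new < score:
                x, score = x_new, s_new
                break
        else:
            return x, False   # t_max tries without improvement: give up
    return x, score == 0
```

A toy score function (e.g. the distance from a target point, zero inside a small tolerance) is enough to exercise both the success and the give-up paths.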

The Generative Adversarial Network (GAN)
The Generative Adversarial Network (GAN) is a generative model: it can generate samples with a similar distribution to the real data. There are two neural networks in a GAN. One is the Generator G, which generates fake samples; the other is the Discriminator D, which classifies samples into real and fake ones, so it is actually a binary classifier.

Algorithm 1: the Heuristically Search.
Input : a marginal sample X;
Output: a perfect sample X′ that passes all constraints, or failure;
1: initialize step = 0 and try = 0;
2: score ← f(X);
3: while step < N_max and try < T_max and score ≠ 0 do
4:    get a new marginal sample X′ around X within radius r;
5:    score′ ← f(X′);
6:    if score′ < score then
7:       X ← X′;
8:       score ← score′;
When the GAN is being trained, the Discriminator D tries to classify samples into real and fake ones, while the Generator G tries to fool the Discriminator and generate almost 'real' samples. After training, the Generator and Discriminator arrive at a Nash equilibrium. Then we can use the Generator G to generate as many 'real' samples as we need; these 'real' samples have a similar distribution to the real samples in the training dataset.
In this work, we use artificial neural networks to build the Generator G and the Discriminator D. We adopt a simple network with 3 hidden layers of 50 neurons each, with Leaky ReLU as the activation function. We then train our GAN with algorithm 2, choosing k = 3, n = 1, m = 20000 and 2000 training iterations, and using Adadelta [49] for the gradient descent.
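As a structural illustration of the adversarial training loop (algorithm 2 itself is not reproduced here), the toy below trains a deliberately tiny 1-D GAN with hand-written gradients: a linear generator against a logistic discriminator, with k discriminator updates per generator update. The paper's actual setup (3 hidden layers of 50 Leaky-ReLU neurons, Adadelta, m = 20000) is much larger; everything in this sketch is a simplified stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy models: generator G(z) = a*z + c, discriminator D(x) = sigmoid(w*x + b).
a, c = 1.0, 0.0          # generator parameters
w, b = 0.1, 0.0          # discriminator parameters
lr, k, m = 0.05, 3, 64   # learning rate, D-steps per G-step, batch size

for _ in range(500):
    for _ in range(k):                        # k discriminator updates
        real = rng.normal(4.0, 1.0, m)        # "real" data ~ N(4, 1)
        fake = a * rng.normal(0.0, 1.0, m) + c
        dr, df = sigmoid(w * real + b), sigmoid(w * fake + b)
        # ascend log D(real) + log(1 - D(fake))
        gw = np.mean((1 - dr) * real) - np.mean(df * fake)
        gb = np.mean(1 - dr) - np.mean(df)
        w, b = w + lr * gw, b + lr * gb
    z = rng.normal(0.0, 1.0, m)               # one generator update
    g = a * z + c
    dg = sigmoid(w * g + b)
    # ascend log D(G(z)) (non-saturating generator loss)
    dl_dg = (1 - dg) * w
    a, c = a + lr * np.mean(dl_dg * z), c + lr * np.mean(dl_dg)

samples = a * rng.normal(0.0, 1.0, 1000) + c  # generated "real-looking" data
```

After training, the generated samples drift toward the real-data region, which is all the pipeline needs: recommended samples close enough to the good region for the HS to finish the job.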
During the training, we require the Generator to learn the general distribution of the real data, but we do not try hard to find perfect hyperparameters, since we need the Generator to have some creativity. As a complement, we combine the GAN with the HS: the Generator generates lots of samples, some of which may be marginal samples, and the HS program then tries to lead these marginal samples to perfect samples.

Results and discussions
To satisfy all the constraints, including muon g-2, dark matter, Higgs data, and gluino and other SUSY search results, and to obtain the right dark matter relic density and a large Higgs invisible decay, we consider the following parameter space in the scNMSSM:

Scan with HS and GAN
We developed the Heuristically Search program based on NMSSMTools-5.5.2 [50][51][52][53]. During the scan, we first require the samples to satisfy the following basic constraints: • Theoretical constraints of vacuum stability and the absence of a Landau pole below M_GUT [50][51][52].
• The lower mass bounds on charginos and sleptons from the LEP.

                                                   lower limit    upper limit
  The DM relic density Ωh^2                        None           0.131
  The spin-independent DM-nucleon cross section    None           XENON1T
  The spin-dependent DM-neutron cross section      None           LUX and XENON1T
  The spin-dependent DM-proton cross section       None           LUX, XENON1T and PICO-60
  Muon g-2 δa_µ                                    8.8 × 10^-10   46 × 10^-10

Table 2. The upper and lower bounds of the dark matter and muon g-2 observables.
• To study the Higgs invisible decay, we require the mass of χ̃_1^0 to be lighter than half of the SM-like Higgs mass. Then, for the marginal samples, we consider the constraints of dark matter and muon g-2, calculating the score in eq. (2.26) for each sample. The upper and lower bounds of these observables are given in table 2. The detailed experimental constraints we consider in this work are listed as follows: • The DM relic density Ωh^2 from WMAP/Planck [54,60,61]; we take only the upper bound Ωh^2 ≤ 0.131, considering that there may be other sources of DM contributing to Ωh^2. The dark matter observables are calculated by micrOMEGAs 5.0 [62][63][64][65] inside NMSSMTools.
If a sample satisfies the basic constraints (not including DM and muon g-2), it gets a score as a marginal or perfect sample; otherwise, it is discarded. Then, with the HS program, we did our first scan: we randomly searched for marginal samples in the parameter space and used the HS program to change them into perfect samples. In this first search, we got about 10k perfect samples in 24 hours.2 In fact, if we changed the random scan into a multi-path Markov Chain Monte Carlo (MCMC) scan, the scan would be even more efficient.
In figure 1, we show the score of marginal samples in the M_0 versus M_1/2 plane. Note that if the score equals zero, the marginal sample is also a perfect sample. We can see that the region of marginal samples (colored range) is much larger than that of the perfect samples (black range), which get a zero score (satisfying all the above constraints, including DM and muon g-2). We also show five tries in which the HS program attempts to shift marginal samples to perfect samples: four succeed (solid lines) and one fails (dashed line). As the successful tries show, the Heuristically Search usually needs fewer than 10 steps to shift a marginal sample to a perfect sample. In fact, many marginal samples need only a few steps to become perfect samples, while a direct search for perfect samples would waste much more time. That is why we developed the HS program.
After the first search, all of the 10k perfect samples are used as the training set for the GAN, which we train according to algorithm 2. With the well-trained GAN,3 we can transform random noise into recommended samples that have a similar distribution to the training data. We can then easily get millions of recommended samples from the GAN in a few seconds.
In figure 2, we show the training set in the upper panels and the recommended samples from the GAN in the lower panels. We can see that the GAN has learned the general distribution of the perfect samples in the training set, while the recommended samples (lower panels) show some creativity and are not totally identical to the training set (upper panels). The well-trained GAN can explore the parameter space and recommend samples around the training samples, which is exactly what we need.
We used the trained GAN to generate 2000k recommended samples,4 and passed them to the HS program. We then got 280k perfect samples within 30 hours,5 which is much faster than a traditional parameter scan. We then impose the following additional constraints:

• The upper limit on the Higgs invisible decay branching ratio, 19%, given by the CMS collaboration [7].

• The low- and high-mass resonance search results at the LEP, Tevatron and LHC, implemented inside HiggsBounds-5.5.0 [83][84][85][86][87].

4 Less than 1 minute on a computer with CPU: i5 6600K, GPU: GTX 1660 Super.
5 We used 40 threads running in parallel on an Intel(R) Xeon(R) CPU E7-4830 v3 @ 2.10GHz.

Finally, after all the scans and constraints, we get about 88k surviving samples. In figure 3, we show the nine free parameters of these surviving samples; the coordinates are the same as those in figure 2. We can see that all values of M_1/2 are larger than 1200 GeV. The reason is the additional constraints we imposed, especially the high mass bounds on the gluino and the first-two-generation squarks at the LHC in eq. (3.8).
Comparing figure 3 with the lower panels of figure 2, we can see that the recommended samples from the GAN are changed into perfect samples by the HS program. Comparing figure 3 with the upper panels of figure 2, we can see that the GAN has recommended many of the marginal samples we need, and it does have some creativity to recommend samples around the training samples. The combination of HS and GAN is therefore crucial.
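The combined GAN + HS pipeline described above can be sketched end-to-end. In this sketch the GAN is replaced by a uniform random generator and the HS program by a minimal hill-descent loop, purely to show how the pieces fit together (discard bad samples, refine marginal ones, keep perfect ones); all function bodies are hypothetical stand-ins for the real tools.

```python
import random

random.seed(0)

def generate():
    # GAN stand-in (hypothetical): noise -> a recommended sample in [0, 1].
    return random.uniform(0.0, 1.0)

def score_fn(x):
    # NMSSMTools stand-in (hypothetical): None = bad sample (fails the
    # basic constraints), 0.0 = perfect, > 0 = marginal.
    if x < 0.1:
        return None
    d = abs(x - 0.6)
    return 0.0 if d < 0.05 else d

def refine(x):
    # Minimal hill-descent stand-in for the HS program.
    s = score_fn(x)
    for _ in range(200):
        if s == 0.0:
            return x, True
        x_new = random.gauss(x, 0.02)
        s_new = score_fn(x_new)
        if s_new is not None and s_new < s:
            x, s = x_new, s_new
    return x, s == 0.0

def run_pipeline(n):
    # generate -> discard bad -> refine marginal -> collect perfect.
    perfect = []
    for _ in range(n):
        x = generate()
        if score_fn(x) is None:
            continue
        x, ok = refine(x)
        if ok:
            perfect.append(x)
    return perfect

perfect_samples = run_pipeline(200)
```

Every sample the pipeline keeps has, by construction, a zero score, mirroring how the HS turns the GAN's recommendations into perfect samples.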

Light dark matter (DM) and Higgs invisible decay
In figure 4, we show the final surviving samples in the plane of κ versus λ, with colors indicating the masses of the lightest neutralino χ̃_1^0, the lightest CP-even Higgs h_1 and the lightest CP-odd Higgs a_1, respectively. For the surviving samples, we checked that the lightest CP-even Higgs h_1 is always highly singlet-dominated, and the next-to-lightest CP-even Higgs h_2 is the SM-like Higgs at 125 GeV. Since we need the SM-like Higgs to have a chance of decaying to a pair of χ̃_1^0, and since we set the parameter µ from 100 to 200 GeV, we checked that the χ̃_1^0 are singlino-dominated for the samples between the two dashed lines. We can also see that, for these samples, h_1 and a_1 can also be lighter than m_{h_2}/2.
In figure 5, we show the properties of dark matter in the scNMSSM. In the lower panels, the spin-independent dark-matter-nucleon scattering cross section σ_SI has been rescaled by the ratio Ω/Ω_0, where Ω_0 is the right dark matter relic density with Ω_0 h^2 = 0.1187. As seen from these panels, the samples with the right relic density can be divided into three cases:

• From the upper right panel, there is a special relationship between the masses of h_1, a_1 and χ̃_1^0. For the samples with the right DM relic density in Case I and Case II, the LSP χ̃_1^0 is highly singlino-dominated, with small λ, κ and a sizable tan β. Combining with eq. (2.25), we can see the two ellipse arcs corresponding to Case I and Case II. • From the lower left panel, most samples predict a spin-independent DM-nucleon cross section σ_SI not far below the bound from XENON1T 2018, which can be covered by the future LZ and XENONnT experiments. These two future direct detections are thus crucial for checking the parameter space of the scNMSSM. There are still some samples, however, that can escape these future detections while predicting the right relic density. Besides, there are also some samples below the neutrino floor, although most of them do not predict a sufficient DM relic density.
• From the lower right panel, samples with a large Higgs invisible decay branching ratio, Br(h_2 → χ̃_1^0 χ̃_1^0) > 10%, have a sizable LSP mass, m_{χ̃_1^0} > 30 GeV. This is because a small LSP mass, m_{χ̃_1^0} < 30 GeV, is always accompanied by small h_1 and a_1 masses, as seen in the upper right panel of figure 4. Then the exotic decay channels h_2 → h_1 h_1 and h_2 → a_1 a_1 open, as seen in figure 6, and the Higgs invisible decay branching ratio Br(h_2 → χ̃_1^0 χ̃_1^0) becomes smaller.
• From the lower right panel, most samples with a large Higgs invisible decay branching ratio, Br(h_2 → χ̃_1^0 χ̃_1^0) > 10%, can be covered by the future LZ and XENONnT detections. But some samples can escape these future experiments while still having a large Higgs invisible decay branching ratio, and some samples below the neutrino floor can also have Br(h_2 → χ̃_1^0 χ̃_1^0) > 10%.

In figure 6, we show the decay information of the SM-like Higgs h_2. From this figure, we can see that the branching ratios of h_2 → χ̃_1^0 χ̃_1^0, h_1 h_1, a_1 a_1 can each be at most about 20%. We also checked that, when additionally considering h_2 decaying to 4χ̃_1^0 through a_1/h_1 → χ̃_1^0 χ̃_1^0 (which requires m_{χ̃_1^0} < m_{h_2}/4 ≈ 31 GeV), the branching ratio of the Higgs invisible decay increases very little compared with h_2 → χ̃_1^0 χ̃_1^0 alone. The upper limit on the Higgs invisible decay branching ratio is about 19% at Run II of the LHC, while future detections can reach 5.6%, 0.24%, 0.5% and 0.26% at the HL-LHC [89], CEPC [90], FCC [91] and ILC [92], respectively.
• For samples with h_2/Z-funnel dark matter, m_{χ̃_1^0} ≈ m_{Z,h_2}/2, the branching ratio of the Higgs boson invisible decay can be large or small depending on the parameter λ.
• For most samples with a low-mass LSP, m_{χ̃_1^0} < 20 GeV, the branching ratio of the Higgs boson invisible decay is small and beyond the reach of the HL-LHC, while Br(h_2 → h_1 h_1) can be larger than Br(h_2 → a_1 a_1).
• Through the detection of the Higgs invisible decay, about half of the surviving samples can be covered at the future HL-LHC, while the future CEPC can cover most.
In addition, we list some discussions on other related topics in this scenario: • We previously performed a study of the annihilation mechanisms of light dark matter in this scenario [93], where we found that all the samples have the LSP annihilating via funnel mechanisms. When the LSP is lighter than 20 GeV, it is in the h_1- or a_1-funnel mechanism, that is, 2m_{χ̃_1^0} ≈ m_{h_1} or m_{a_1}. • HiggsBounds has been used to constrain the heavy Higgs bosons. We also checked that the heavy bosons h_3 and a_2 lie at 2.4-4.8 TeV, and their branching ratios to τ pairs are at most 8%. These masses are not covered in ref. [94], and the production rates are much smaller than the upper limits in ref. [95]. Furthermore, we have ongoing work on the heavy Higgs bosons, especially on how to probe them at a future 100 TeV hadron collider.
• We again checked the spin-dependent cross sections, shown in figure 7. As can be seen, both the DM-proton and DM-neutron cross sections satisfy the current constraints. When the LSP density Ωh^2 is sufficient, the upper limit is satisfied directly; when the LSP density Ωh^2 is insufficient, considering that there may be other sources of dark matter, the upper limit is satisfied after rescaling the cross section by the factor Ω/Ω_0, the fraction of the LSP χ̃_1^0 in the current dark matter.
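The rescaling just described can be written as a one-liner; the cap at Ω/Ω_0 = 1 for samples with at least the full relic density is our reading of "the upper limit is satisfied directly", not an explicit formula from the text.

```python
OMEGA0_H2 = 0.1187   # the "right" relic density Omega_0 h^2 used in the text

def rescaled_sigma_si(sigma_si, omega_h2):
    # Rescale a direct-detection cross section by the LSP's share of the
    # dark matter, Omega/Omega_0; samples with at least the full relic
    # density are compared with the limit directly (assumed cap at 1).
    return sigma_si * min(omega_h2 / OMEGA0_H2, 1.0)
```

The rescaled value, not the raw cross section, is what gets compared with the LUX/XENON1T/PICO upper limits.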
• We also checked the muon g-2, and show δa_µ, the central value of the SUSY contribution (including Higgses), in figure 8. When imposing this constraint, we also consider the error in the SUSY-contribution calculation, which is about 1.5 × 10^-10; thus all the samples can satisfy the experimental result at the 2σ level. We also note that the large M_1/2 values are caused by the high mass bounds on the gluino and the first-two-generation squarks, and this in turn causes heavy wino-like charginos and bino-like neutralinos, so the SUSY contribution δa_µ cannot increase further.

Conclusions
In this work, we develop a novel scan method combining the Heuristically Search (HS) and the Generative Adversarial Network (GAN). The HS shifts marginal samples to perfect samples, and the GAN generates as many recommended samples from noise as we need. In our specific procedure, we first scan the parameter space randomly with NMSSMTools under the basic constraints, generating marginal samples; the HS then tries to shift the marginal samples to perfect samples that additionally satisfy the dark matter and muon g-2 constraints; with these randomly-generated perfect samples, the GAN is trained and then generates a huge number of recommended samples in a short time; the HS again tries to shift the recommended samples to perfect samples; finally, we check the final perfect samples against the additional constraints, including those from sparticle searches, Higgs searches and the Higgs invisible decay, obtaining the final surviving samples.
With this efficient method, we find a new scenario in the semi-constrained Next-to-Minimal Supersymmetric Standard Model (scNMSSM), or NMSSM with non-universal Higgs masses. In this scenario: • Both the muon g-2 and the right relic density can be satisfied, along with the high mass bound on the gluino, etc. As far as we know, this had not been realized in the scNMSSM before this work.
• With the right relic density, the lightest neutralinos are singlino-dominated, and can be as light as 0-12 GeV.
• The future direct detections XENONnT and LUX-ZEPLIN (LZ-7 2T) can place strong constraints on this scenario.
• The current indirect constraints on the Higgs invisible decay h_2 → χ̃_1^0 χ̃_1^0 are weak, but direct detection of the Higgs invisible decay at the future HL-LHC may cover half of the samples, and that at the CEPC may cover most.
• The branching ratio of the Higgs exotic decay h_2 → h_1 h_1, a_1 a_1 can be over 20 percent, while its contribution (h_2 → 4χ̃_1^0) to the invisible decay is very small.