SCYNet: testing supersymmetric models at the LHC with neural networks

SCYNet (SUSY Calculating Yield Net) is a tool for testing supersymmetric models against LHC data. It uses neural network regression for a fast evaluation of the profile likelihood ratio. Two neural network approaches have been developed: one network has been trained using the parameters of the 11-dimensional phenomenological Minimal Supersymmetric Standard Model (pMSSM-11) as an input and evaluates the corresponding profile likelihood ratio within milliseconds. It can thus be used in global pMSSM-11 fits without time penalty. In the second approach, the neural network has been trained using model-independent signature-related objects, such as energies and particle multiplicities, which were estimated from the parameters of a given new physics model.


Introduction
Direct searches for new particles at the LHC are among the most sensitive probes of beyond the Standard Model (BSM) physics and play a crucial role in global BSM fits. Calculating the profile likelihood ratio (referred to as χ² in the following) for a new physics model from LHC searches is straightforward in principle: for each point in the model parameter space, signal events are generated using a Monte-Carlo simulation. The χ² is then calculated from the number of expected signal events, the Standard Model background estimate and the number of observed events for a given experimental signal region. The computation time for such simulations can be overwhelming, however, especially when testing BSM scenarios with many model parameters. Global fits of supersymmetric (SUSY) models, for example, are typically based on the evaluation of O(10⁹) parameter points, see e.g. [1][2][3], and the Monte-Carlo statistics required to estimate the number of signal events can take up to several hours of CPU time per parameter point. In this study we provide a fast evaluation of the LHC χ² for generic SUSY models by means of neural network regression.
Global SUSY analyses which combine low-energy precision observables, such as the anomalous magnetic moment of the muon, with LHC searches for new particles strongly disfavour minimal SUSY models like the constrained Minimal Supersymmetric Standard Model (cMSSM) [1]. More general supersymmetric models therefore have to be explored, including for example the phenomenological MSSM (pMSSM-11) [4], specified by eleven SUSY parameters defined at the electroweak scale. The pMSSM-11 can accommodate the anomalous magnetic moment of the muon, the dark matter relic density and the LHC limits from direct searches. However, the large number of model parameters poses a significant challenge for global pMSSM-11 fits.
In this paper we introduce the SCYNet tool for the accurate and fast statistical evaluation of LHC search limits (and potential signals) within the pMSSM-11 and other SUSY models. SCYNet is based on a simulation of new physics models with CheckMATE 2.0 [5][6][7] and the χ² subsequently calculated from event counts in the signal regions of a large number of LHC searches. This estimate has been used as input to train a neural-network-based regression; the resulting neural network then provides a very fast and reliable evaluation of the LHC search results, which can be used as an input to global fits.
Neural networks allow predictions of complex data by efficiently superposing non-linear functions, and are straightforward to implement using open-source libraries like Tensorflow [8], Theano [9] and Keras [10]. Previous studies [11][12][13] have used neural networks (and other machine learning techniques) mainly as classifiers between allowed and excluded models. Going beyond this simple classification, Gaussian processes [14] were recently used to predict the number of signal events for a simple SUSY model [15]. Here, we apply neural networks as a regression tool to predict χ² values for an ensemble of many signal regions. Compared to other regression methods, neural networks excel when dealing with sparse sample sets due to their non-linear nature. They can thus be valuable tools for predicting LHC χ² values of complex BSM models.
In order to train and validate the neural network regression we have considered the pMSSM-11 as an example of a complex and phenomenologically highly relevant SUSY model. Two different approaches have been explored: in the so-called direct approach we simply use the eleven parameters of the pMSSM-11 as input to the regression, cf. Figure 1a. The resulting neural network evaluates the χ² of the pMSSM-11 within milliseconds and can thus be used in global pMSSM-11 fits without time penalty. A second, so-called reparametrized, approach uses SUSY particle masses, cross sections and branching ratios to first estimate signature-based objects, such as particle energies and particle multiplicities, which should relate more closely to the LHC χ² values (Figure 1b). While the calculation of the particle energies and multiplicities requires extra computation time, the corresponding neural network should be more general and able to predict the χ² for a wider class of SUSY models. To explore this feature, we have examined how well a reparametrized neural network trained on the pMSSM-11 can predict the χ² for two other popular SUSY models, namely the cMSSM and a model with anomaly mediated SUSY breaking (AMSB) [16,17].
This paper is structured as follows: in section 2 we provide details of the LHC event generation within the pMSSM-11 and the method we have used to calculate a global LHC χ². The training and validation of the direct and reparametrized neural net approaches are presented in sections 3 and 4, respectively. The results are summarized and our conclusions presented in section 5. More details on the statistical approach are given in the appendices.

Event generation and LHC χ² calculation
For the training and testing of the neural networks we have calculated the LHC χ² for a set of pMSSM-11 parameter points using CheckMATE. This χ²_CM calculation (the subscript "CM" denotes the CheckMATE result) required the generation of SUSY signal events, a detector simulation and the evaluation of the LHC analyses. In this section we describe the simulation chain, list the LHC analyses we include, and explain our calculation of χ²_CM from the event counts in the various experimental signal regions. A graphical depiction of this process can be found in Figure 2.
The pMSSM-11 is based on eleven SUSY parameters: the bino, wino and gluino mass parameters M_1, M_2, M_3; the scalar quark and lepton masses of the 1st/2nd and 3rd generation, M(Q_12), M(Q_3), M(L_12), M(L_3); the mass of the pseudoscalar Higgs, M(A); the trilinear coupling of the 3rd generation, A; the Higgsino mass parameter, µ; and the ratio of the two vacuum expectation values, tan β [4]. The ranges of the pMSSM-11 parameters we have considered are specified in Table 1.
We have sampled pMSSM-11 points using both a uniform and a Gaussian random probability distribution. The position of the maximum and the standard deviation of the Gaussian distribution have been chosen to be one fourth and one half of the parameter range, respectively. Consequently we have generated fewer points near the edges of the parameter range and in the decoupling regime of large pMSSM-11 parameters, where the SUSY spectra are beyond the LHC reach.
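The mixed sampling strategy can be sketched as follows. The parameter range is purely illustrative, and the redrawing of out-of-range Gaussian values is an assumption, since the text does not specify how such draws were treated:

```python
import numpy as np

def sample_parameter(low, high, rng, gaussian=False):
    """Draw one pMSSM-11 parameter either uniformly over [low, high] or
    from a Gaussian whose maximum sits at one fourth of the range and
    whose standard deviation is one half of the range.  Out-of-range
    Gaussian draws are redrawn (an assumption; the text does not say
    how edge values are handled)."""
    if not gaussian:
        return rng.uniform(low, high)
    span = high - low
    mean = low + 0.25 * span    # position of the Gaussian maximum
    sigma = 0.5 * span          # standard deviation
    while True:
        x = rng.normal(mean, sigma)
        if low <= x <= high:
            return x

rng = np.random.default_rng(42)
# hypothetical range for one mass parameter, in GeV
points = [sample_parameter(-4000.0, 4000.0, rng, gaussian=True)
          for _ in range(10000)]
```

With the maximum at one fourth of the range, the sample density is naturally depleted near the upper edge, i.e. in the decoupling regime.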
After generating a pMSSM-11 parameter point we have used SPheno-3.3.8 [19] to calculate the spectrum. Following the SPA convention [18], all 11 parameters were defined at a scale of 1 TeV. We have then only proceeded in the simulation chain if the following preselection criteria were fulfilled:
- There were no tachyons in the spectrum.
- The χ⁰₁ was the LSP, such that it was the dark matter candidate.
The preselection criteria were applied in order to restrict the SUSY parameter space to phenomenologically viable regions. Consequently, the neural networks that were trained on that parameter space will only be valid if the preselection criteria are fulfilled. Note that the anomalous magnetic moment of the muon, (g − 2) µ , has not been included in the preselection criteria, because in a global fit one might not wish to include this observable.

Event generation
If all preselection criteria were fulfilled we then generated LHC signal events for 8 and 13 TeV. At 8 TeV we simulated the production of both electroweak and strongly interacting SUSY particles, including all possible 2 → 2 scattering processes. The 8 TeV analyses were included since no 13 TeV electroweak searches were available when the event generation for this work was started. For the simulation of the electroweak events, which include all combinations of slepton, chargino and neutralino final states, we used MadGraph5_aMC@NLO 5.2.4 [45].

Table 2: Precision observables used to constrain the parameter space. Note that a newer and slightly more precise measurement BR(B_s → µ⁺µ⁻) = (2.8 +0.7 −0.6) · 10⁻⁹ is now available but was not included in the preselection [25].
At 13 TeV, only processes with strongly interacting SUSY particles were simulated, since all analyses available within CheckMATE target these final states. We have again used MadGraph5_aMC@NLO 5.2.4 [45] to generate events, and Pythia 6.248 [47] was used to decay and shower the final state. The cross sections were normalized to NLO accuracy as obtained from NLL-Fast 2.1 [51][52][53][54][55][56][57][58]. One additional parton was generated at the matrix element level and then matched to the parton shower if the mass gap between either the lightest squark or the gluino and the LSP was below 300 GeV. This procedure allowed for the accurate determination of the signal acceptance in compressed spectra, where initial state radiation is crucial to pass the experimental cuts [59,60].
For 8 TeV we calculated χ²_CM for a total of 210000 pMSSM-11 parameter points, and at 13 TeV we simulated 140000 parameter points. To guarantee sufficient statistical accuracy, we have generated between 1000 and 45000 events per particle production process, depending on the expected number of signal events. For each pMSSM-11 parameter point we have simulated on average 20000 events, which required approximately 380000 CPU hours in total.
Once the event generation was completed, the event files were passed through CheckMATE 2 [5][6][7] which contains a tuned Delphes-3 [66] detector simulation with separate setups for 8 and 13 TeV. The analyses which have been used to train the SCYNet neural network regression are listed in Table 3.

LHC χ² calculation with CheckMATE
We have implemented a calculation of χ²_CM that approximates a likelihood, based on the CheckMATE output of the event count in each signal region (SR). In the following we use χ²_CM,jk to denote the χ² of analysis j and SR k from CheckMATE. The exact statistical prescription we have used to calculate χ²_CM,jk for each single SR is described in Appendix A.
The individual signal regions were combined into a global χ²_CM with a procedure that combines the most sensitive (expected) orthogonal SRs. Our algorithm first divides the SRs of each analysis into disjoint groups: each group can contain several SRs, and the SRs of one group are disjoint from all SRs in all other groups, see Figure 3. If a group contains more than one SR, the SR with the largest expected signal/S⁹⁵_exp ratio in this group was selected. For each analysis we then summed the χ² values of all selected SRs, χ²_CM,j := ∑_selected k χ²_CM,jk. In the last step the χ²_CM,j of the individual analyses were added to give the overall χ²_CM := ∑_j χ²_CM,j. All the analyses we have combined at the same centre-of-mass energy target different final state topologies, and we have therefore assumed that the SRs of different analyses are to a very good approximation disjoint. In the case of the 8 (13) TeV analyses we found 47 (65) disjoint SRs for the group with the largest sensitivity.
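The combination procedure can be sketched as follows. The data layout and the numbers are hypothetical; only the selection rule (keep the most sensitive SR per disjoint group, sum per analysis, then sum over analyses) follows the text:

```python
def combine_signal_regions(analyses):
    """chi^2 combination sketch: `analyses` maps an analysis name to its
    disjoint groups; each group is a list of signal regions given as
    (chi2_CM, expected signal/S95_exp) tuples (a hypothetical layout).
    Per group only the SR with the largest expected signal/S95_exp ratio
    is kept; the selected chi^2 values are summed per analysis and the
    analysis sums are added to the global chi^2."""
    chi2_total = 0.0
    for groups in analyses.values():
        chi2_analysis = 0.0
        for group in groups:
            best = max(group, key=lambda sr: sr[1])  # most sensitive SR
            chi2_analysis += best[0]
        chi2_total += chi2_analysis
    return chi2_total

# toy example: one analysis with two disjoint groups, one with a single SR
toy = {
    "analysis_A": [[(3.0, 0.8), (1.0, 0.2)], [(0.5, 0.1)]],
    "analysis_B": [[(2.0, 0.4)]],
}
chi2 = combine_signal_regions(toy)  # 3.0 + 0.5 + 2.0
```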
Since the experimental collaborations do not provide information on the correlation of the systematic errors, we had to assume that the uncertainties are Gaussian distributed and uncorrelated. Figures 4a and 4b show the distributions of χ²_CM for all sampled points. Two peaks can be observed in both plots: the first peak, called the zero-signal peak, is located at around χ²_CM ≈ 40 (8 TeV, left plot) and χ²_CM ≈ 54 (13 TeV, right plot), respectively. All χ²_CM values in the zero-signal peaks belong to pMSSM-11 parameter points with very heavy SUSY spectra and thus very little or no expected signal in any of the SRs. A second peak occurs at χ²_CM = 100 because we set all χ²_CM > 100 to this value. This was done to avoid the neural networks learning structures at χ²_CM > 100, where the model is already clearly excluded by the LHC data.
The three regions labeled I, II and III, outside of the zero-signal peak and the peak at χ²_CM = 100 in Figures 4a and 4b, are called rare target ranges. We differentiate between regions below (I) and above (II and III) the zero-signal peak, and between regions with χ²_CM closer to the minimum (I and II), since these are much more important for global fits than those far away and thus need better modelling. We have evaluated the performance of the SCYNet neural network separately for these different regions.

Neural network regression: direct approach
In this section the direct approach for neural network regression used in SCYNet is described. The inputs to the network are the 11 parameters of the pMSSM-11, and the output, χ²_SN, is an estimate of χ²_CM for the LHC searches described in section 2. We have trained separate networks for the 8 and 13 TeV analyses, with outputs χ²_SN,8TeV and χ²_SN,13TeV, respectively. In the following the setup and performance of both networks is discussed, focusing on the results for χ²_SN,8TeV for illustration.

Parameters of the neural networks
All neural networks used in this work were of the simple feed-forward type and have been designed using the Tensorflow [8] library. We performed scans of the neural network parameters (so-called hyperparameters) to find the best configuration. The hyperparameters we optimized were the number of hidden layers, the number of neurons in the hidden layers, the dropout probability of neurons in the hidden layers, the batch size, the learning rate of the Adam minimization algorithm [67], the activation function of the output neuron, the regularization parameter and the type of cost function. The networks discussed in this section always refer to the hyperparameter configuration that was found to perform best, i.e. the one producing the smallest average error between χ²_SN and χ²_CM on the validation set. The result of the hyperparameter optimization is independent of the χ²_CM range considered; to prevent overfitting of the hyperparameters, a random validation set was chosen for each scanned hyperparameter. Our 8 TeV network had four hidden layers l = 1, ..., 4, each with N_l = 300 neurons, all with hyperbolic tangent activation functions. The weights in layer l were initialized from a Gaussian distribution with mean zero and standard deviation 1/√N_{l−1}, while the biases were initialized from a Gaussian distribution with mean zero and standard deviation one. In order to train a network with a hyperbolic tangent output activation function, the targets were transformed to the range between −1 and 1 with a modified Z-score normalization (see details in Appendix B).
During the training we used a quadratic cost function and trained with a batch size of 750. We used the Adam minimization algorithm with a learning rate of 0.001; all other parameters of the Adam optimizer were set to the default values from [67]. The complete cost function C consists of the quadratic cost function and a quadratic regularization,

C = (1/N_train) ∑_i (χ²_SN,i − χ²_CM,i)² + λ ∑_w w² ,   (1)

where N_train is the number of points in the training set and the second sum runs over all weights w of the network. We used 10000 validation points (N_val = 10000), while the rest of the sampled points were used for training; no separate test set was used, because in the hyperparameter scan the validation set was chosen randomly for each scanned hyperparameter. In the hyperparameter scan we found that λ = 10⁻⁵ gave the best network performance. The 13 TeV network had the same structure as the 8 TeV network, with the batch size slightly adjusted to 500 and all other hyperparameters unchanged.
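As a minimal illustration of this setup (in plain NumPy rather than the Tensorflow implementation actually used, and with invented toy inputs and targets), the initialization scheme, the tanh feed-forward pass and the quadratic cost with L2 regularization can be written as:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_network(layer_sizes):
    """Weights ~ Gaussian(0, 1/sqrt(N_{l-1})), biases ~ Gaussian(0, 1),
    as described for the direct 8 TeV network."""
    params = []
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))
        b = rng.normal(0.0, 1.0, size=n_out)
        params.append((W, b))
    return params

def forward(params, x):
    """Feed-forward pass with tanh activations in every layer, including
    the output (whose targets are normalized to [-1, 1])."""
    for W, b in params:
        x = np.tanh(x @ W + b)
    return x

def cost(params, x, target, lam=1e-5):
    """Quadratic cost plus quadratic (L2) weight regularization."""
    mse = np.mean((forward(params, x) - target) ** 2)
    reg = lam * sum(np.sum(W ** 2) for W, _ in params)
    return mse + reg

# 11 pMSSM-11 inputs, four hidden layers of 300 neurons, one output
params = init_network([11, 300, 300, 300, 300, 1])
x = rng.normal(size=(5, 11))           # toy parameter points
t = rng.uniform(-1, 1, size=(5, 1))    # toy normalized chi^2 targets
c = cost(params, x, t)
```

Minimizing this cost with Adam (batch size 750, learning rate 0.001) would reproduce the training loop described above; the gradient step itself is omitted here for brevity.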

Training the neural networks
The training phase of the 8 TeV neural network is visualized in Figures 5a and 5b, where we compare the mean error on points in the validation set during training to that of a nearest neighbour interpolator. The network performance is quantified by the mean error on the validation points, defined as the mean absolute difference (1/N_val) ∑_i |χ²_SN,i − χ²_CM,i|. After the last training epoch the mean error on points in the validation set is 1.68. As shown in Figure 5a, this performance is a significant improvement over a nearest neighbour interpolator [68]. The mean errors in the rare target ranges, e.g. in region III (70 < χ² ≤ 95), were larger than the mean errors in the zero-signal range and in the range around 100. This behaviour will be called the rare target learning problem (RTLP) in the following and can be understood from the χ²_CM distribution in Figure 4a and the corresponding discussion in section 2.2: the majority of the scanned pMSSM-11 points led to χ²_CM values in the ranges 38 < χ² ≤ 42 and 95 < χ², which are thus described more accurately by the neural network than those in regions I, II and III.
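Reading "mean error" as the mean absolute deviation between prediction and reference (our interpretation of the quoted performance numbers), the metric can be computed as follows; the χ² values are invented:

```python
import numpy as np

def mean_error(chi2_sn, chi2_cm):
    """Mean absolute deviation between the SCYNet prediction chi2_SN and
    the CheckMATE reference chi2_CM over a validation set (our reading
    of the 'mean error' used in the text)."""
    chi2_sn = np.asarray(chi2_sn, dtype=float)
    chi2_cm = np.asarray(chi2_cm, dtype=float)
    return float(np.mean(np.abs(chi2_sn - chi2_cm)))

# invented chi^2 values for three validation points
err = mean_error([40.0, 52.0, 98.0], [41.0, 50.0, 100.0])  # (1+2+2)/3
```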
In future work the RTLP will be addressed with two strategies: first, the 11-dimensional probability density function (pdf) in parameter space of the points in the ranges 0 < χ² ≤ 38 and 42 < χ² ≤ 95 will be used to randomly sample new pMSSM-11 points, which will be added to the training and validation samples. Second, each point in the RTLP region will be used to seed new random points drawn from a narrow 11-dimensional pdf centered around it. New points can then be generated, simulated and used to improve the network especially in the rare target ranges. Motivated by the profile likelihood requirement, which sets ∆χ² = 1 for the 1σ range, we plan to enlarge the training and validation sets by repeated application of these procedures until a mean error well below ∆χ² = 1 is reached.
For the 13 TeV neural network we found results similar to those of the 8 TeV network displayed in Figures 5a and 5b. The mean errors for the 13 TeV network at the end of the training phase are given in Table 4. The histogram in Figure 6a shows the difference between the CheckMATE and SCYNet results, χ²_SN − χ²_CM. Again the comparison to the nearest neighbour interpolator shows that the neural network is a much more powerful tool for predicting the LHC χ². The mean error for the neural network on points in the validation set is 1.45.
The performance of the 13 TeV network also differs between the χ² ranges, see Figure 6b. However, the difference between the mean error in the zero-signal range and in the rare target ranges is less pronounced than for the 8 TeV network. This can be understood by comparing Figures 4a and 4b: the 13 TeV LHC analyses are sensitive to a wider range of pMSSM-11 parameters, and thus fewer of the sampled points result in no signal expectation.

Testing the neural networks
In this section we use an additional, statistically independent validation set of 60000 pMSSM-11 points that passed the preselection criteria to compare the SCYNet prediction to the CheckMATE result. For illustration we focus on the projection of the 11-dimensional pMSSM parameter space onto the masses of the gluino g̃ and the neutralino χ⁰₁, which are particularly relevant for the χ² of the LHC searches. All plots in this section are given for the 8 TeV case; similar results were obtained for the 13 TeV case.
In Figure 7a we present the minimal χ²_CM obtained by CheckMATE for the validation set of pMSSM-11 points in bins of the gluino and neutralino masses. The minimum χ² in each (m(g̃), m(χ⁰₁)) bin typically corresponds to scenarios where all other SUSY particles, in particular the squarks of the first two generations, are heavy and essentially decoupled from the LHC phenomenology.
In Figure 7b the corresponding result obtained from the SCYNet neural network regression is shown. We found that the neural network reproduces the main features of the χ² distribution. We emphasize that each bin represents a single pMSSM-11 parameter point. The difference between the CheckMATE and SCYNet results is shown in Figure 9: in this plot we have taken the mean difference between both results for all validation points that lie in the respective histogram bin, in order to demonstrate in which regions of parameter space the network performs best. We found overall very good agreement between the CheckMATE and SCYNet results. However, sizeable differences were visible in the compressed regions where m(g̃) is close to m(χ⁰₁). This particular region is probed by monojet searches [35,41], which are sensitive only for very degenerate spectra; the χ² contribution thus rises suddenly as the mass splitting between the SUSY states is reduced. Unfortunately, the rapid change in χ² makes this region difficult for the neural network to learn, and it will be targeted specifically in future work by generating more training data in this area.
As is obvious from Figure 9, the parametrization of χ²_CM through SCYNet worked very well on average. However, the pMSSM-11 points still show a rather broad distribution of |χ²_CM − χ²_SN|, especially near the crucial transition from the non-sensitive to the sensitive region. This is exactly the region where the rare target learning problem (RTLP) of section 3.2 occurs. To illustrate this, Figure 10a displays the distribution of χ²_CM − χ²_SN in bins of m(χ⁰₁) for gluinos with a mass between 850 and 900 GeV; Figure 10b shows the equivalent result for neutralino masses between 400 and 450 GeV. The mass ranges were chosen to capture the transition regions from low to high χ²_CM in the minimum profile plot of Figure 7a. In both cases there is a narrow peak around χ²_CM − χ²_SN = 0, also in the crucial transition regions. However, the RTLP causes a few, but significant, outliers, which will be the subject of future improvements with targeted training.
We finally note that our results for the pMSSM-11 χ² cannot be compared in a straightforward way to exclusions published by the LHC experimental collaborations. ATLAS and CMS have not presented dedicated analyses for the pMSSM-11, but typically interpret their searches in terms of SUSY simplified models. The simplified models assume 100% branching ratios into a specific decay mode, which does not hold in the pMSSM-11: there are in general a number of competing decay chains, resulting in a large variation of the final states produced in different events. As a result, the events do not predominantly fall into the signal region of one particular analysis, but are instead shared between many different analyses. Thus the constraints on the pMSSM-11 are in general weaker than those on simplified SUSY models.

Neural network regression: reparametrized approach
The direct approach of SCYNet discussed in section 3 provides a successful representation of the pMSSM-11 χ² from the LHC searches. Nevertheless, there is motivation to explore an alternative ansatz. Most importantly, the model parameters used as input to the direct approach do not necessarily correlate with the behaviour of χ²_CM. For example, parameters such as A and tan β are not directly linked to any single observable in the signal regions considered here. This separation implies a complex function that has to be learned and modeled by the neural network itself. Another downside of the direct approach is that a neural network trained on the model parameters is inherently model dependent: for every model considered, new training data is required and a new neural network must be trained.
In this section a different set of phenomenologically motivated parameters is proposed as input to a new network, the reparametrized network. These input parameters are in principle observables, more closely related to the χ²_CM values and, most importantly, model independent. The aim of this ansatz is to reach a performance at least comparable to, or better than, that of the SCYNet direct approach. However, this comes at the cost of an increase in computation time, since for each evaluation branching fractions and cross sections have to be calculated. The methods used for building and training the neural network were similar to those of the direct approach, but here we found that a deeper network of 9 layers performed better.

Reparametrization procedure
A new model is excluded if, in addition to the expected Standard Model background, a statistically significant excess of events is predicted for a particular signal region but is not observed in data. The relevant observables generally correspond to a combination of the final state particle multiplicities and the corresponding kinematics. For example, if two distinct models produce the same observable final states at the LHC, they are both allowed or excluded independently of the mechanism producing the events. The reparametrized approach aims to calculate neural network inputs that relate more closely to these observables. As displayed in Figure 11, all allowed 2-to-2 production processes were considered. For each produced particle, the tree of all possible decays with a branching ratio larger than 1% was traversed. At run time, each occurrence of a final state jet, b-jet, e±, µ±, τ±, /E_T as well as of an intermediate-state on-shell W±, Z and t/t̄ was counted. Note that the algorithm does not differentiate between the types of particles producing the missing energy, since these are always invisible to the detector; the set of all invisible particle types (including for example neutrinos or a SUSY LSP) was therefore considered as a single final state type. The charge-conjugated partners of all final state particles and resonances (excluding jets, b-jets, Z and /E_T) were counted separately, which yields 9 separate final state categories and 5 different parameters for the resonances. After all decay trees are constructed, the weighted mean and standard deviation of the number and of the maximal kinematically allowed energy of the final state particles and resonances were calculated (see section 4.2 for more details), with the weights given by the occurrence probability of the respective decay tree. Finally these quantities were averaged across all 2-to-2 production processes, weighted by their individual cross sections.
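A sketch of this traversal, assuming a simple hypothetical decay-table layout (branching ratios pruned below 1% are not redistributed in this sketch):

```python
def count_final_states(particle, decay_table, prob=1.0, counts=None, min_br=0.01):
    """Traverse all decay chains of `particle`, pruning branchings below
    1%, and accumulate probability-weighted counts of final state
    categories.  `decay_table` maps a particle to a list of
    (branching ratio, daughters); anything absent from the table is
    treated as a final state.  The layout is hypothetical."""
    if counts is None:
        counts = {}
    if particle not in decay_table:
        counts[particle] = counts.get(particle, 0.0) + prob
        return counts
    for br, daughters in decay_table[particle]:
        if br < min_br:
            continue  # prune rare branchings, as in the reparametrization
        for d in daughters:
            count_final_states(d, decay_table, prob * br, counts, min_br)
    return counts

# toy table: a gluino-like state decaying to jets plus an invisible LSP
decays = {"gluino": [(0.7, ["jet", "jet", "MET"]),
                     (0.3, ["b-jet", "b-jet", "MET"])]}
counts = count_final_states("gluino", decays)
# counts: jet 1.4, b-jet 0.6, MET 1.0 (mean multiplicities per decay)
```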
The aforementioned quantities complete the set of inputs of the reparametrized approach, which adds up to a total of 56 parameters. The branching ratios were analytically calculated using SPheno-3.3.8 [19] and read in with PySLHA 3.1.1 [69]. For strong processes NLL-Fast [70] was used for both 8 TeV and 13 TeV, since the cross sections are quickly obtained from interpolations on a pre-calculated grid. For electroweak production processes Prospino 2.1 [71][72][73][74][75][76][77][78][79][80][81][82] was used. In the standard configuration, the Prospino cross section evaluation requires the most computing time in our complete calculation. Consequently we only calculate the cross sections to leading order and reduce the number of iterations performed by the VEGAS [83] numerical integration routine, which increases the statistical uncertainty from < 1% to a few percent. The induced error slightly degrades the mean accuracy of the reparametrized neural net. Nevertheless, the final accuracy of the reparametrized approach was comparable to that of the direct approach, as demonstrated below.
The complete reparametrization requires 3–11 seconds of computation time, with the Prospino run time as the dominant factor despite the reduced integration accuracy. This is still a significant improvement over the O(hours) needed without a parametrization like SCYNet. However, compared to the millisecond evaluation of the direct approach, it makes the reparametrized approach less attractive for applications in global fits.

Motivation and calculation of the phenomenological parameters
Almost all analyses that search for new physics define the final state multiplicity (possibly as a range) in the relevant signal regions. The mean number of final state particles therefore helps the neural net to find areas in parameter space where individual signal regions dominate. In addition, many analyses differentiate themselves by a kinematical selection of an on-shell W, Z or t in the decay chain, in order to isolate and reject certain SM backgrounds. For this reason we have also included the mean number of resonant SM states in the decay chains as another parameter. When we examined the missing-energy SUSY searches used in this study, we found that the most prevalent cuts were those based on the energy of the final state particles. This is also true of the missing-energy cut, which can be considered the (vector) sum in the transverse plane of the energies of the particles that go undetected. Consequently we sought to introduce a parameter that is correlated with the energy of the particles that would be observed. We had to keep in mind, however, that the aim of our study was to calculate the LHC χ² in as short a time as possible; any calculation that consumes too much CPU time, most notably one requiring an integration over phase space configurations, disqualifies the reparametrization from being practical.

Fig. 11: Flowchart of the reparametrization procedure.
Since we already use the mean multiplicity of the various final state (and resonant) particles, the obvious choice would have been to calculate the mean energy of each of these states. However, a cascade decay may include a number of 1 → 3 and even 1 → 4 decays, and such a calculation requires a phase space integration, which unfortunately cannot be performed in the time frame available for each call to SCYNet. As a replacement, we instead chose the maximum kinematically allowed energy a particle can have, as a weighted average over all production processes and possible decay chains.
In the center-of-mass frame, this maximum is given by

E^max_i = (m_k² + m_i² − m_j²) / (2 m_k) ,

where E^max_i is the maximum energy of daughter particle i with mass m_i in the decay k → i + {j}, evaluated in the rest frame of the mother k, m_k is the mass of the mother particle, and m_j is the combined mass of the other daughter particle(s) j. This expression is fast to calculate and takes the same form for any number of daughter particles.
In the next step, the energy of i must be boosted from the center-of-mass frame of the mother k into the lab frame,

E^max,Lab_i ≈ γ_k E^max_i .

Here, the boost γ_k depends on the energy of the mother k as measured in the lab frame, which we have estimated at an earlier stage in the decay chain, γ_k = E^Lab_k / m_k ≈ E^max,Lab_k / m_k. The initial (heaviest) mother particle in the decay chain was assumed to be produced at rest, so that E^max,Lab_initial was fixed to m_initial.
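The endpoint energy and the subsequent boost can be sketched as follows; the two-body expression with the summed spectator masses and the simple γ-scaling into the lab frame are our reconstruction of the prescription described above, and the masses are illustrative:

```python
def e_max_cm(m_mother, m_i, m_others):
    """Endpoint energy of daughter i in the mother's rest frame for the
    decay k -> i + {j}:  E_i^max = (m_k^2 + m_i^2 - m_j^2) / (2 m_k),
    with m_j the combined mass of the remaining daughters."""
    m_j = sum(m_others)
    return (m_mother ** 2 + m_i ** 2 - m_j ** 2) / (2.0 * m_mother)

def e_max_lab(m_mother, m_i, m_others, e_mother_lab):
    """Scale the rest-frame endpoint into the lab frame with
    gamma_k = E_k^lab / m_k (our reading of the boost prescription)."""
    gamma = e_mother_lab / m_mother
    return gamma * e_max_cm(m_mother, m_i, m_others)

# gluino-like mother of 1000 GeV produced at rest (E^lab = m), decaying
# to a massless quark, another massless quark and a 100 GeV LSP
e_quark_cm = e_max_cm(1000.0, 0.0, [0.0, 100.0])            # 495.0 GeV
e_quark_lab = e_max_lab(1000.0, 0.0, [0.0, 100.0], 1000.0)  # gamma = 1
```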
During the averaging across all decay trees, not only the mean of each parameter but also its standard deviation was calculated and used as an input. The motivation for such a parameter can easily be understood by considering two new-physics scenarios: one always produces a single lepton in the final state, while the other produces four leptons, but only in 25% of decay chains. Both scenarios have a mean lepton multiplicity of one, but for a signal region that requires four leptons, clearly only the second scenario can contribute.
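The branching-ratio-weighted mean and standard deviation can be sketched with the two-scenario lepton example above (a minimal illustration; the function name is ours):

```python
import numpy as np

def weighted_mean_std(multiplicities, branching_ratios):
    """BR-weighted mean and standard deviation of a final-state
    multiplicity over all decay chains of a model point."""
    w = np.asarray(branching_ratios, dtype=float)
    x = np.asarray(multiplicities, dtype=float)
    mean = np.sum(w * x)
    var = np.sum(w * (x - mean) ** 2)
    return mean, np.sqrt(var)

# Scenario A: always one lepton            -> mean 1, std 0
# Scenario B: four leptons in 25% of chains -> mean 1, std sqrt(3)
mean_a, std_a = weighted_mean_std([1], [1.0])
mean_b, std_b = weighted_mean_std([4, 0], [0.25, 0.75])
```

The identical means but very different standard deviations are exactly what lets the network distinguish the two scenarios in a four-lepton signal region.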
The set of inputs in the reparametrized approach comprises 56 parameters, but it cannot be trivially assumed that the function mapping these parameters to χ 2 SN is injective, nor that they are uncorrelated. In a way, the parameters can be seen as a replacement for a first layer of the neural net that prepares the inputs to facilitate the modelling process. Their viability should be evaluated based on the results they produce. In principle the reparametrization could be performed for any model, but in practice the cross sections and branching ratios must be obtainable in a timely manner, which is why we have restricted ourselves to supersymmetric models, where such tools were already available.

Architecture and training of the reparametrized neural network
Table 5: Mean errors on the LHC χ 2 prediction for both approaches in SCYNet.

For the reparametrized approach, we found that a deeper architecture was preferred, with a first layer built from 500 hidden nodes followed by 8 layers of 200 hidden nodes. The added depth creates additional challenges and required us to alter the hyperparameter setup that we used in the direct approach. Here, rectified linear units (relu) [84] were used as activation functions, since they are less prone to the vanishing-gradient problem. Due to the larger architecture, overfitting was expected to play a larger role; to counteract this, the regularization term was increased by a factor of 100 to λ = 0.001, see Eq. (1). In addition, the batch size was reduced to 120 (32 for 13 TeV) to improve convergence. 6 As in the direct approach, before starting the training the inputs were again subjected to a Z-score normalization. Since the relu activation function is unbounded, the modification described in Appendix B was not necessary.
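As an illustration of the layer geometry only, a plain NumPy forward pass through a network of this shape; random weights stand in for the trained parameters, and the actual SCYNet training framework is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer widths as described in the text: 56 reparametrized inputs,
# a first hidden layer of 500 nodes, eight layers of 200 nodes,
# and a single chi^2 output.
widths = [56, 500] + 8 * [200] + [1]

# Randomly initialised weights stand in for the trained parameters.
params = [(rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in),
           np.zeros(n_out))
          for n_in, n_out in zip(widths[:-1], widths[1:])]

def forward(x):
    """Forward pass with relu on all hidden layers and a linear output."""
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:      # all but the output layer
            x = np.maximum(x, 0.0)   # relu activation
    return x

y = forward(rng.standard_normal((3, 56)))  # three dummy parameter points
```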
In contrast to the model parameters, the reparametrized inputs are heavily correlated, and consequently the inputs were decorrelated by projecting the data onto the eigenvectors of the covariance matrix. These mappings were calculated from the training dataset and then stored, so that the same mappings can be applied when evaluating the network. For the training itself, the nAdam [85][86][87] optimizer was used. Compared to the Adam optimizer used in the previous sections, the nAdam optimizer adds Nesterov momentum. 7 The learning rate was initialized at a higher value of 0.01 for 8 TeV and 0.002 for 13 TeV. As with the batch size, the larger learning rate improved the convergence behaviour in the early stages of the training; as in the direct approach, the learning rate was later reduced whenever the mean error stopped improving.
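The decorrelation step described above can be sketched with NumPy; the function names are illustrative and this is not the SCYNet code:

```python
import numpy as np

def fit_decorrelation(X):
    """Z-score normalize the inputs, then compute the eigenvectors of
    their covariance matrix; the stored mappings allow the identical
    transformation to be applied at evaluation time."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0, ddof=1)
    Z = (X - mu) / sigma
    _, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    return mu, sigma, eigvecs

def apply_decorrelation(X, mu, sigma, eigvecs):
    """Project Z-score-normalized data onto the covariance eigenvectors."""
    return ((X - mu) / sigma) @ eigvecs

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 5))
X[:, 1] += 0.9 * X[:, 0]               # introduce a strong correlation
mappings = fit_decorrelation(X)        # computed once, on the training set
Xd = apply_decorrelation(X, *mappings)
# The covariance matrix of Xd is diagonal: the inputs are decorrelated.
```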

Results of the reparametrized approach to LHC neural nets
During the calculation of the phenomenological parameters, cross sections and branching ratios were calculated. These quantities are related to the expected signal events in each signal region in a much more direct way than the model parameters. Consequently, the target quantity of the parametrization, the χ 2 derived from the signal counts in each non-overlapping signal region, should depend more directly on the input parameters of the reparametrized network. This hypothesis was supported by Figure 12, which showed that a nearest-neighbor interpolator improves (in both the 8 TeV and 13 TeV cases) when moving from the direct to the reparametrized approach. We interpreted this result as showing that the function mapping the reparametrized inputs to the outputs was flatter.
This improvement relates directly to the performance of the neural nets, which is given in Table 5 as the total mean error of all networks, while the distribution of the mean error is shown in Figure 13. In total, the reparametrized approach on average displayed a lower error. Consequently, as shown in Table 4, the reparametrized approach outperformed the direct approach in several χ 2 ranges. This is offset by the significantly longer computation time required for each call of the reparametrized net in SCYNet. The dominant component of this computation is the time required by Prospino to calculate the electroweak cross sections. Additional research is now being performed to see whether this calculation can be done by a separate neural network, which may bring the computation time down to a level competitive with the direct approach.
The most useful upside of the reparametrized approach is that the input parameters were chosen in a way that should make them (in principle) model independent. This could allow a network trained with one model to be used as a prediction tool for a wide variety of models. In Figure 14 we have displayed the performance of the 13 TeV reparametrized network trained with the pMSSM-11 samples when fed with points from the cMSSM and the AMSB; in each bin, the mean difference was calculated for all validation points. We emphasize here that due to Renormalization Group Equation (RGE) running, the cMSSM and AMSB models are not subsets of the pMSSM-11, since in general all scalar masses become non-degenerate. In large regions of the parameter space, the network predicted the model χ 2 calculated directly with CheckMATE correctly. However, along the transition region from clearly excluded (at the bottom of the frame) to clearly not excluded (at the top of the frame), a few regions of parameter space display discrepancies.
In particular, a visible anomaly can be seen in the top left of Figure 14 for the AMSB and, to a lesser degree, the cMSSM. These wrongly predicted points arise because in the pMSSM-11 model, all the stop and sbottom states (both left- and right-handed) were assumed to be mass degenerate. However, as already alluded to, RGE effects mean that in general this is not the case in the AMSB and cMSSM. In fact, the model points in the anomaly region exhibit a light gluino decaying exclusively into a stop-top pair, since the sbottoms are heavier than the gluino here. The stops then decay further, producing either another top quark and a neutralino or a b-quark and a chargino, depending on the exact details of the mass spectrum, via

$$\tilde g \to t\,\tilde t_1, \qquad \tilde t_1 \to t\,\tilde\chi_1^0 \;\text{ or }\; \tilde t_1 \to b\,\tilde\chi_1^\pm.$$

Further decays of the top/chargino always produce a W-boson, leading to two W-bosons per decay chain and four in the complete event. In the considered pMSSM-11 model, however, one always additionally observes the gluino decay via an on-shell sbottom, and these decays often do not produce W-bosons. The average number (and standard deviation) of resonant W-bosons is part of the neural-network inputs in the reparametrized approach. Since all decays were averaged during the reparametrization procedure, the pMSSM-11 samples never contain such a large number of intermediate W-bosons. Thus, when we apply the neural network to the AMSB or cMSSM, we no longer interpolate between known parameter points but instead extrapolate to regions of parameter space that the network has never sampled, as they were not represented in the training data.
A simple solution to counteract the lack of training data representing parameter points that contain only light gluinos and stops was to generate additional points in the pMSSM-19 [4]. For these points, the particles charged under SU(3) were forced to obey a mass relation placing the lighter stop below the gluino and the remaining coloured states above it, in order to generate training data in the region where the reparametrized network fails. The results of a network trained with the additional pMSSM-19 samples are displayed in Table 6. The network performs slightly worse on the pMSSM-11 set, which was to be expected, because the net had to accommodate regions of the parameter space not present in the pMSSM-11. On the other hand, the mean error of the network applied to the AMSB and cMSSM was significantly reduced. The effect can be observed in Figure 15, which shows significantly reduced errors compared to Figure 14.
One may notice that anomalous areas with larger errors still exist. This is due to the fact that we did not sample the additional pMSSM-19 points with a high enough density for the neural network to learn the parameter space properly. The points with the worst reconstruction again contain a spectrum with a lighter stop, and the prediction can be expected to improve if the pMSSM-19 points are sampled appropriately from the beginning.

Table 6: Performance of the 13 TeV neural network in the reparametrized approach applied to models different from the pMSSM-11 used for training.

Conclusions
We have developed a neural network regression approach, called SCYNet, for calculating the χ 2 of a given SUSY model from a large set of 8 TeV and 13 TeV LHC searches. Previously, such a χ 2 calculation required computationally intensive and time-consuming Monte Carlo simulations. The SCYNet neural network regression, on the other hand, allows for a fast χ 2 evaluation and is thus well suited for global fits of SUSY models.
We have explored two different approaches: in the first, so-called direct, method we simply used the pMSSM-11 parameters as input to the neural network. Within this method the χ 2 evaluation for an individual pMSSM-11 parameter point only takes a few milliseconds and can thus be used for global pMSSM-11 fits without time penalty. However, the neural network based on the pMSSM-11 parameters cannot be used for any other model. In the second approach, we reparametrized the input to the neural network. Specifically, we used the SUSY masses, cross sections and branching ratios of the pMSSM-11 to estimate signature-based objects, such as particle energies and multiplicities, which provide a more model-independent input for the neural network regression. Although calculating the particle energies and multiplicities for any given parameter point requires O(seconds) of computational time, the reparametrized network trained on a particular model can in principle be applied to BSM scenarios the network has not encountered before.
The mean error of both neural network approaches lies in the range ∆ χ 2 = 1.3 − 1.7, corresponding to a relative precision between 2% and 3%. This is already very close to the accuracy required for a global fit, where the profile likelihood correspondence between 1 sigma and ∆ χ 2 = 1 implies ∆ χ 2 < 1 as an appropriate precision goal. For such a complex application with O(50) non-overlapping signal regions, the SCYNet implementation represents the most advanced neural network regression for the pMSSM-11 to date.
Applying the reparametrized pMSSM-11 neural network to the cMSSM and AMSB models, we found a few regions of parameter space where the network would fail to predict the correct χ 2 . As the cMSSM and AMSB models are not subsets of the pMSSM-11, there are cMSSM and AMSB parameter configurations where the network has to extrapolate to regions that were not represented in the training data. We have, however, demonstrated that such problems can be addressed systematically by additional specific training of the network.
Our results motivate the continuation and further improvement of the neural network regression approach to calculating the LHC χ 2 for BSM theories. A more accurate approximation to the true LHC likelihood can, for example, be obtained by modeling the probability density functions used for the event generation of the training sample to increase the sampling density in specific regions of parameter space. Furthermore, the treatment of systematic correlations between signal regions should be studied in more detail. The direct approach should be extended to other models, such as the pMSSM-19, so that the neural networks can be used in the corresponding global fits, and the reparameterized neural network regression should be studied further in the context of additional models.
With these improvements in mind, it should be possible to decrease the mean error of the SCYNet neural network approach significantly, to ∆ χ 2 = ∆(−2 ln L ) < 1, thus making it a powerful tool for a large variety of global BSM fits.
In this section we explain how to calculate the profile likelihood ratio (PLR) for one signal region (SR), which can be interpreted as the 'χ 2 ' value used in the text.
For a given signal region, the following results are available:
- the number of predicted signal events N S ,
- the statistical and total systematic errors on the signal events, σ stat N S and σ sys N S ,
- the number of predicted SM events N SM , summed over all background sources,
- the statistical and total systematic errors on the SM events, σ stat N SM and σ sys N SM ,
- the number of experimentally observed events N E .
At first one constructs the likelihood as

$$\mathcal{L}(\mu, \nu_S, \nu_{SM}) = \mathrm{Pois}(N_E \,|\, \lambda)\; \mathcal{L}_{LN}(\nu_S)\; \mathcal{L}_{LN}(\nu_{SM}), \tag{A.1}$$

where the $\mathcal{L}_{LN}$ denote lognormal constraint terms for the nuisance parameters. 8 The first term originates from the Poisson distribution, which describes the compatibility of observing N E events if λ are expected. Here λ, which itself is a function of the parameters µ, ν S , ν SM explained below, is given by

$$\lambda(\mu, \nu_S, \nu_{SM}) = \mu\, \nu_S N_S + \nu_{SM} N_{SM}.$$

To decide optimally between these two hypotheses, the Neyman-Pearson lemma [88] proposes the profile likelihood ratio

$$\mathrm{PLR} = \frac{\mathcal{L}_C}{\mathcal{L}_G}.$$

Here, the constrained likelihood L C maximizes L with respect to the nuisance parameters for the null hypothesis with fixed µ = 1. In contrast, the global likelihood L G also varies µ to find the global maximum of L . Note that the range of allowed µ is not restricted here; both negative values and values beyond unity are allowed. 9 According to Wilks' theorem [89], the variable

$$q_\mu := -2 \log \mathrm{PLR} \tag{A.10}$$

is asymptotically χ 2 distributed in the limit of large N E . We can validate this statement for our setup in the simplified case of negligible uncertainties σ N S , σ N SM . Then, following the prescription above, one finds

$$q_\mu = \frac{(N_S + N_{SM} - N_E)^2}{N_S + N_{SM}},$$

which is indeed the χ 2 distribution for one degree of freedom with observation N E and expectation N S + N SM .

8 A lognormally distributed variable is asymptotically Gaussian in the limit ∆ X ≪ X but forbids unphysical negative values for ∆ X/X = O(1).
9 Note that this is different from, e.g., limit-setting procedures, where µ is restricted to values ≤ 1.
We therefore refer to the value of q µ whenever we use the expression 'χ 2 ' within this work.
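The simplified case of negligible uncertainties can be checked numerically. The sketch below (our own illustration, not code from SCYNet) evaluates q µ from the Poisson likelihood and compares it to the Gaussian limit quoted above:

```python
import math

def q_mu(n_s, n_sm, n_e):
    """q_mu for one signal region with negligible systematic
    uncertainties: the constrained likelihood fixes mu = 1, so
    lambda = N_S + N_SM; the global Poisson likelihood is maximised
    at lambda = N_E.  The N_E! factorials cancel in the ratio."""
    lam = n_s + n_sm
    log_lc = n_e * math.log(lam) - lam
    log_lg = n_e * math.log(n_e) - n_e
    return -2.0 * (log_lc - log_lg)

# Exact q_mu vs the asymptotic chi^2 form for large event counts.
q = q_mu(50.0, 1000.0, 1020.0)
gauss = (50.0 + 1000.0 - 1020.0) ** 2 / (50.0 + 1000.0)
```

For N E = N S + N SM the statistic vanishes exactly, and for large counts, as in this example, the exact value agrees with the Gaussian expression to within about a percent.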
Appendix B: Modified Z-score normalization

In this appendix the χ 2 targets are denoted

$$y_1, \cdots, y_N \in \mathbb{R}^+. \tag{B.11}$$

We apply the Z-score normalization to them,

$$\hat y_i = \frac{y_i - \mu}{\sigma}. \tag{B.12}$$

Furthermore we define

$$\hat y_{\max} := \max_i \hat y_i, \qquad y_{\max} := \max_i y_i, \tag{B.13}$$

and finally we normalize again,

$$\tilde y_i = \frac{\hat y_i}{\hat y_{\max}} \in (-1, 1). \tag{B.14}$$

All targets are therefore in (−1, 1). Usually one chooses $\mu = \frac{1}{N}\sum y_i$ and $\sigma^2 = \frac{1}{N-1}\sum (y_i - \mu)^2$, but this would cause problems with the back-transformed outputs of the network. We want the back-transformed outputs of the network to lie between the maximum $z_1 = 100$ and the minimum $z_0$, the minimum possible LHC χ 2 . With the choice

$$\mu = \frac{z_0 + y_{\max}}{2}, \tag{B.15}$$

$z_0$ corresponds to an output value of $\tilde y = -1$ and $z_1$ corresponds to an output value of $\tilde y = +1$. Therefore we choose,